Will this be one of the world's first RISC-V laptops?

As Apple and Qualcomm push for more Arm adoption in the notebook space, we have come across a photo of what could become one of the world's first laptops to use the open-source RISC-V instruction set architecture. In an interview with The Register, Calista Redmond, CEO of RISC-V International, signaled we will see a RISC-V …

  1. ShadowSystems

    Some serious questions.

    Yes yes yes, I know I know. Yes, I'm feeling fine, now stop signalling to the nurse to refill my dried frog pills. =-)p

    What are the pros & cons of a RISC-V-based system over an x86 or ARM one? Does it run faster? Is it capable of handling more RAM? Much faster/better audio/video capabilities? The ability to address more total drive space? Does it have some other hardware-based whiz-bang capability/feature/trick up its sleeve that makes it something a person might choose over the other two technologies?

    Is software designed for RISC-V more stable? Secure? Less buggy? Would an OS compiled for it be better somehow than if compiled for a different architecture (ARM, x86, Apple's M1, etc)? What software abilities/capabilities/tricks up its sleeve does it have to recommend it over another?

    I'll root for them to succeed & produce a professional-grade laptop that really "knocks it out of the park" as a first attempt, mostly because I like rooting for the underdogs, but also because I'm keeping my fingers crossed that they do _not_ make the same mistakes in product design that all the others have made.

    *Pays the publican for a few thousand casks of their finest consumable with instructions to deliver it to the RISC-V folks in charge of getting this project finished*

    Have a pint or two on me, but please stay sober enough to finish the project, ok? We expect great things from you, not some clunky monster like an old Compaq luggable, Commodore luggable, or a mutant bastard love child of the insane coupling of an Apple iMac & an old Coleco fantasy pinball electronic game. =-Jp

    1. Flocke Kroes Silver badge

      Re: Some serious questions.

      The most obvious advantage to users is the lack of an Intel Management Engine, AMD Secure Technology or equivalent.

      1. oiseau
        Thumb Up

        Re: Some serious questions.

        ... the lack of an Intel Management Engine, AMD Secure Technology or equivalent.



        And, quite obviously, no BIOS or a 100% open source whatever it needs to boot the OS.


        1. ShadowSystems

          Re: Some serious questions.

          Why is not having a BIOS a good thing? Isn't being able to configure various bits of the hardware at a pre-OS state something that might prove useful, such as being able to hardware-"kill" a mic, webcam, or biometric widget?

          ISTR a BIOS is where you configure such things as the ability to boot from other sources than the HDD, such as a USB stick, a CD/DVD drive, a network location (local or remote), or other form of boot media; adjusting the clock frequency & voltage settings to be able to overclock the system to run faster or slower depending on your needs/wants; the ability to configure a default screen resolution, refresh rate, & brightness so you don't have to fiddle with the function keys each time you boot; adjusting things like integrated audio to use a USB-attached alternative; etc, etc, etc.

          Are these things no longer useful?

          *Honestly confused expression*

          1. Old Used Programmer

            Re: Some serious questions.

            Recent models of the Raspberry Pi manage to boot from alternate devices and do pretty much everything else you mentioned without a BIOS. Not needed.

            1. Dan 55 Silver badge

              Re: Some serious questions.

              It doesn't happen by magic though. The boot ROM is in the VideoCore IV GPU and it doesn't even run ARM code, that happens later, after the GPU starts the CPU up.

              Yep, it's all back to front.

      2. Justthefacts Silver badge

        Re: Some serious questions.

        Sigh. If this is *really* your beef with Intel, it’s fairly easy to disable Intel Management Engine.

        No, you can’t do it “from within Linux”. You will have to connect to the underlying hardware via a HW debugger, and have read the actual CPU documentation properly. But it’s perfectly within the capabilities of a decent software engineer with embedded experience.

        The “difficult” bit is rewriting what are effectively drivers (from my POV) that now need to be external, for the main CPU boot, to take over what the IME is supposed to do. But of course, once you’ve written a script, then it’s done.

        I wrote one (various reasons, practical engineering rather than paranoia related), but since I’m only an average rather than a great software engineer, it’s a bit flaky and targeted to the one specific CPU running our kit.

        If this were a real problem, it would be fairly easy to productise this onto a consumer module….if the alternative is “investing” hundreds of billions of taxpayer money into re-inventing the entire semiconductor supply and design chain on nationalist grounds.

        1. Greybearded old scrote Silver badge

          Re: Some serious questions.

          Really? To quote Not The Nine O'Clock News:

          Please note that your life jacket is under your seat. Place it over your head, then tie the straps around you. To inflate pull the green tab, press the yellow button, unzip the toggle pocket, unscrew the air valve anti-clockwise and yell, “Inflate you stupid bugger.”

          It's rarely good to expect everybody to be experts in your field.

          1. Justthefacts Silver badge

            Re: Some serious questions.

            I’m not expecting everybody to be able to do this. My skills are relatively orthogonal to most IT people - there are many “very basic” IT-type tasks that I don’t have any knowledge or skills in at all, I’m far more on the hardware/embedded end.

            I’m simply pointing out that, if, as claimed, Intel ME is such a big deal to the end-customer, it’s relatively easy and cheap for a middleman to design & make a specialist module that takes it out of the loop. And if the hobbyists can’t get one made, they should just shout across the wall to some decently-skilled embedded-specialist mate. It’s far cheaper and lower impact to do that, than upend the entire CPU ecosystem.

            But, as a matter of fact, I *don’t* believe Intel ME is a real problem to anybody but the chattering classes looking for a stick to beat Intel with. I can’t think of any *real* reason to worry about its security other than shroud-waving: any secure system is going to be air-gapped anyway, plus TEMPEST etc. If you are worried about someone reaching in through the ME, then why aren’t you equally worried about people snooping in over the JTAG and the debug buses for RISC-V? On RISC-V, it’s open season for hunting pheasants. Or Differential Power Analysis? Have any of the RISC-V implementations been checked to be DPA-safe? No, of course not. Or anyone snooping the airwaves for radiation from the memory bus? Do any of the RISC-V implementations support native encrypted RAM? No, of course not.

            “Worries” about Intel ME are just FUD with an agenda.

            1. Dan 55 Silver badge

              Re: Some serious questions.

              Yep, granny should roll her own custom firmware for the ME.

              Don't look behind the curtain, or the other curtain, or indeed the other curtain.

              1. Justthefacts Silver badge

                Re: Some serious questions

                Motherboard manufacturers could do it, as an optional extra, if there were demand - not “granny”.

                [also, not a firmware for the ME. Effectively an external BIOS that bypasses and disables the ME.]

                But obviously, Normal Human Beings *don’t* care, they can just have a standard mobo that leaves ME enabled. It’s *your* hypothesis, not mine, that there is some “security-conscious” customer somewhere who does care, enough to pay £20 extra for an extra widget on the mobo. I’ve seen no evidence other than the fevered imaginings of this board that anybody anywhere cares.

                1. Dan 55 Silver badge

                  Re: Some serious questions

                  Dell offers "ME and AMT disabled" options, as do Apple, System76, Tuxedo, and Purism. Google and no such agency disable it in their computers as a matter of course.

                  So obviously there are people who care.

                  1. Justthefacts Silver badge

                    Re: Some serious questions

                    Fine. I didn’t know that. I’ve got no issue with that.

                    We agree then, it’s just not a big deal to design the external module to bypass it, and it’s fully commercially available to those who want it. I’ve got no idea how the bypass module is priced, as that depends more on customers’ willingness to pay for a niche product. But the additional manufacturing cost is unlikely to be more than a couple of quid for a small flash. Hooking up a couple of pins differently on the PCB and a one-dollar chip is no reason to consider jumping to a totally different CPU family.

              2. Justthefacts Silver badge

                Re: Some serious questions.

                Like I said. Those Intel CVEs refer to external people being able to hop directly from USB to control chip internals, *incorrectly, without authentication*. On the RISC-V chips *there’s no authentication to start with*.

                What you probably don’t know (I’m assuming, based on your comments) is that pretty much every chip ever made has been accessible for hardware debug over JTAG, connected to your outside PC by a USB-to-JTAG debugger bridge, and in the last couple of decades that USB has been integrated into the chip.

                You can *always* single-step the entire chip, write anything you want into any register, read back all data without hindrance, and disable any security. Over the USB. For obvious reasons, that’s a security problem. So for consumer PCs etc, the USB debug port is not connected to anything one can access from outside. At the *chip module* level, not the silicon. Effectively, Intel have said you can leave that connection hooked up off-chip, by adding an authorising on-chip entity.

                The RISC-V chip implementations discussed have *no authorising on-chip entity*. If you just look at the silicon: if you hooked up to the debug USB, you would simply have complete root access to the whole thing, single-stepping and everything. The only defence is that the module manufacturer didn’t hook up the debug USB to the outside. That’s it. That’s really all we are talking about.

                1. Dan 55 Silver badge

                  Re: Some serious questions.

                  RISC-V has a Trusted Execution Environment but as far as I know nobody has designed a version which does remote administration, which is the problem that people have with Intel ME.

                  1. Justthefacts Silver badge

                    Re: Some serious questions.

                    Sorry, you’ve totally missed my point. The remote administration is simply Intel exposing “the Crown Jewels” via an authenticated interface to the outside. *All* platforms have “the Crown Jewels” exposed at the silicon, but the others simply don’t route it out of the module.

                    “Trusted Execution Environment” makes zero difference, if someone can attach directly to the chip pin out and single-step your code with breakpoints. This is nothing to do with threat model, or whether it is realistic. Simply that if you are unhappy with the remote administration, your problem is with the module, *not the silicon*, and it is the module you need to fix which isn’t even made by Intel.

                    1. Dan 55 Silver badge

                      Re: Some serious questions.

                      How many attacks are remote exploits and how many attacks are done by opening up the computer and messing around with components? The second type of attack can be measured in fractions of a percent.

                      Why, then, have a remotely exploitable server in the CPU which nobody can realistically do anything about, when the great majority of users will never even need to use it even once in the entire lifetime of the computer?

                      It's as simple as that.

                      1. Justthefacts Silver badge

                        Re: Some serious questions.

                        I’m not debating the threat model. All you’re doing is saying “Intel made a bad security decision, and I don’t like it”. But people who agree with you are apparently catered for in the market anyway. ME-disabled modules exist.

                        This is *not* a silicon issue. It’s just *wild* to jump and want to leave the x86 ecosystem entirely.

                        Plus, it doesn’t address the problem *at all*. Let’s say it succeeds and RISC-V becomes widely available. Many RISC-V manufacturers are bound to put remote management cores on their silicon too. The EU has proposals to *legally require* them to, to enforce a secure networking enclave around the whole of Europe, but that’s another matter. Now you are faced with several different RISC-V implementations; if you pick the one that happens not to have a remote management core, its *core* RISC-V is still a proprietary implementation. It will have its own CVEs. Just like Spectre has been mitigated for Intel silicon, every single distinct RISC-V implementation needs its own security analysis and individually specified set of CVE workarounds. So what have you gained?

                        1. Dan 55 Silver badge

                          Re: Some serious questions.

                          In case you didn't notice, people left x86 years ago. There is life after Intel, it's been proven in the market - why is this suddenly *wild*? It seems like you're here to cheer on Intel, and for you nothing they do can be wrong.

                          Every CPU will have to support remote access? You're going to have to come up with a source for that.

                          The point is that here, today, people are bothered about the ME (your contention was that they weren't), some laptop manufacturers offer to disable it, and some organisations do disable it (you apparently didn't know this), and Intel for some reason are extremely reluctant to listen to what the customer wants and have done the bare minimum in the past 15 years.

                          Happily there is a new CPU architecture that doesn't have this "feature" here and now, and if it ever does gain this feature in the future it will probably be optional due to its open nature. Maybe Intel will start listening to what the customer wants now.

                          Everything about rewriting firmware, exploits which require physical access, JTAG, and whatnot is tangential to the conversation. Very impressive, but ultimately not the least bit important. Let's stick to the point - Intel makes it difficult to disable or remove the ME that they've foisted upon everyone, RISC-V doesn't have an ME, and this is a selling point for this CPU.

                          1. Justthefacts Silver badge

                            Re: Some serious questions.

                            “Wild” is changing CPU family, just because of differences in how some module manufacturers hook up the pin out on the module.

                            “Intel for some reason”….but we all know what the reason is. A small niche of customers apparently cares, and there is a solution for them, at the module level. If it were a larger niche, Intel would make a separate CPU without the ME. But that would increase costs, to support more SKUs, and they assess that having the module manufacturer solve their problem is just fine. This isn’t really that complicated.

                            It’s just nonsensical to say “[RISC-V] is a CPU architecture that doesn’t have this feature”, because the feature isn’t part of a CPU core architecture at all. And even if it were, RISC-V is an ISA, not an architecture. What I think you are confused about is that you really mean “Intel has chosen to implement this feature [on x86 chips], whereas the companies that would like to manufacture RISC-V chips, if they could make it commercially viable, currently have no plans to implement the feature”.

                            If you think the chips that Esperanto or SiFive make are so great, buy them! Oh look, *they aren’t selling them as a volume proposition*. Look what happened to SiFive:


                            Take a step back. Really?

                            NOTE: Due to incredible customer demand, we quickly sold out of our production run of HiFive Unmatched boards; with COVID-related supply chain issues, we have decided to focus on the upcoming, more powerful development system based on the Horse Creek SoC and platform co-developed with Intel rather than trying to restart HiFive Unmatched production.

                            So….they sold “a lot”, whatever that is. But they couldn’t make it at the price they needed to sell it at, and had no industrial muscle. I bet *I* can sell lots of ten pound notes at five pounds each. It’s not hard.

                            And at a time when every other semiconductor company was making out like bandits….they decided *not* to attempt to sell stuff, and partnered up with Intel instead. That’s Evil Intel to you.

                            Do you think this Horse Creek SoC will have an ME on it? Bet it does.

                  2. Justthefacts Silver badge

                    Re: Some serious questions.

                    The other thing that is important to consider about RISC-V security: unlike Intel or ARM, there is simply no canonical RISC-V implementation whose vulnerabilities we can have confidence about, nor even a definitive set of such implementations. This will not be like “Pentium has a floating point bug”. It will be “some significant proportion of the world’s CPUs are vulnerable, but we don’t know in which products and can’t check”. Think Log4j, but *in hardware*.

                    So, for example, in 2018 people were worried about Spectre/Meltdown, asked about RISC-V, and SiFive categorically said “well, we are not affected…..[because we don’t use out-of-order speculative execution]”.

                    Spin forward to 2022, when there are lots of out-of-order RISC-V implementations. So I took a quick squizz at the top hit on GitHub. This one:


                    What I can tell you from inspecting the Verilog is that this specific implementation, as of today, *is definitely vulnerable to Spectre*. There don’t appear to be any bug reports, so I don’t think anyone has even thought about it. It’s touted as being “very fast, low footprint, works on FPGA”, so probably the authors weren’t that interested or skilled in security. No, I don’t have any interest at all in “responsible notification” or helping fix their code or any of that shizz. Nor in finding out how many other RISC-Vs are vulnerable, or which ones we should care about, nor going down the classic excuses of “works on my machine”, “fixed in tomorrow’s version” or “Yeah that microphone driver never worked on Pop, try Mint instead”.

                    1. Bruce Hoult

                      Re: Some serious questions.

                      I don't know why you think it appropriate to look at what is clearly a personal project of a handful of people and extrapolate that to OoO RISC-V cores from companies such as SiFive, Alibaba, Andes employing experienced CPU designers who have previously worked at ARM, Intel, AMD, Apple ...

                      It's been known for quite a few years now how to avoid Spectre/Meltdown and that it's pretty easy if you incorporate that into your design from the outset.

                      Here's something from four years ago:

                      The presenter designed OoO BOOM as a student, later ET-Maxion at Esperanto, and now works as a core designer at Intel.

                      1. Justthefacts Silver badge

                        Re: Some serious questions.

                        “Ah, you’ve picked the wrong implementation”….

                        Your problem as the manufacturing buyer of the chip for an electronic end product, let alone the end consumer, is knowing whether such guidelines have been adhered to.

                        Currently, you know your chip has an ARM Cortex on it. That’s really all you need to know. It’s a brand, and tells you how much real engineering stands behind it.

                        But *by definition*, if open source becomes a thing, brand-defining security reputation disappears. You tell me that Esperanto is good. I have no idea. That presentation certainly doesn’t fill me with confidence; it’s very surface schmooze, and he obviously doesn’t understand the deeper issues. But if Esperanto becomes the new ARM…..then that’s all it’s done. A new proprietary leader.

                        Please tell me that the open source world understands Spectre deeper than that Esperanto guy. Otherwise we are in for a world of pain.

    2. Proton_badger

      Re: Some serious questions.

      So all three are RISC architectures at this point. However, x86 and, to a lesser extent, ARM carry some baggage due to their age. Furthermore, they both contain everything you could dream of in a general purpose processor, and more.

      RISC-V is super simple, and most functionality, even floating point operations, comes as optional extensions. This means companies will be able to build versions of it for embedded systems, microcontrollers, cameras, automotive, etc etc, specifically designed with the extensions for the relevant purpose and nothing else.

      It’s also open, so companies wouldn’t have to pay for a licence - in theory at least, because in most cases they would still have to buy a design from somewhere… Maybe there will be companies offering custom designs to the industry.

      Also, because it is so new and carries so little baggage, general purpose versions for PCs have the potential to have very good power/performance. It should at least be able to surpass x86 and rival ARM.

      1. Will Godfrey Silver badge
        Big Brother

        Re: Some serious questions.

        It's also likely to be more 'honest'. It will be harder to hide little extras in it - although obviously not impossible.

      2. Dave 126 Silver badge

        Re: Some serious questions.

        Thank you. I didn't know about the Use Only The Parts You Need aspect of the optional extensions until you expressed it as you did.

        I knew about the Open Source nature of the project, which I now see goes hand in hand with being able to leave out the parts you don't need when you're designing a chip for a task.

      3. juice

        Re: Some serious questions.

        > However x86 and to a lesser extent also ARM carries some baggage due to their age. Furthermore they both contain everything you could dream of in a general purpose processor, and more

        It's worth bearing in mind that RISC-V was introduced 12 years ago, and is based on David Patterson's RISC designs, which he first drew up in 1990 while in academia. So it's not *that* new, though at least RISC-V was effectively a "clean slate", since it didn't have to carry any significant baggage over from its school days.

        > RISC-V is super simple and most functionality, even floating point operations, is optional extensions

        You can't add or remove things from an x86 chip, but ARM very much lets you pick and choose what you want in your SoC.

        In fact, that's part of the reason why Apple's ARM chips are dominating things at present, since they've picked the bits they want and then done their own engineering and custom design work atop that!

        It's also worth bearing in mind that all that extra flexibility carries costs of its own. One reason why ARM took so long to make any inroads against x86 dominance of the "PC" market was that while there were millions (if not billions) of ARM devices out there, each one was based on a unique design, and OS/software had to be custom tailored to each one.

        It wasn't until we got Android (and to a degree, iOS) that we started to get properly standardised ARM hardware designs which you could build standardised software packages for. And that then led to things like the Raspberry Pi and its ilk.

        > It’s also open, so companies wouldn’t have to pay for a licence, in theory, because they would in most cases still have to buy a design somewhere… Maybe there would be companies offering custom designs to the industry.

        It'll be interesting to see how pricing pans out over time; for most "commercial" Open Source stuff, the main charges come from support and training, and I'd be surprised if those costs were roughly on par with those for x86 and ARM chips.

        > Also because it is so new and carry so little baggage, general purpose versions for PC’s have the potential to have very good power/performance. It should at least be able surpass x86 to rival ARM.

        Perhaps. The last article I saw about such things indicated that while a prototype RISC-V chip was significantly beating the competition in the power stakes, it was also only running at about a quarter of the speed.

        And even then, that comparison was based on benchmark results provided by the manufacturer, using a deliberately simplified single-process benchmark, since Ars didn't actually have a sample of the RISC-V chip to test.

        And we all know just how reliable and unbiased manufacturer-supplied benchmark figures can be!

        Fundamentally, RISC-V may be improving faster than its competition, but that's partly because it's so far behind them. And it remains to be seen whether it'll be able to become truly competitive on both price and performance, especially given that x86 and ARM both get a lot more design money thrown at them, and are able to licence patents etc.

        On the other hand, there are plenty of niches that low-power (and/or patent-free/politically unencumbered) CPUs can be slotted into. So I think RISC-V is here to stay, regardless!

        1. Bruce Hoult

          Re: Some serious questions.

          Just about everything in your post is wrong.

          RISC-V was not introduced 12 years ago, some students and their professor had a crazy idea in a pub to START it 12 years ago. It was essentially introduced to the world a little under 7 years ago.

          Dave Patterson invented the term "RISC" and the first RISC I CPU around 1980-1981, not 1990. I can only assume you weren't born at those times and consider them prehistoric.

          ARM does NOT allow you to add or remove things from their CPU core or the instruction set. Of course you can add whatever else you like in the SoC, as you don't license that from ARM and ARM doesn't make such IP.

          The Raspberry Pi is very far from standard. There are simply a lot of them. (Compared to other SBCs, not compared to phones or tablets)

          1. juice

            Re: Some serious questions.

            > Just about everything in your post is wrong

            Oh noes!

            > RISC-V was not introduced 12 years ago, some students and their professor had a crazy idea in a pub to START it 12 years ago. It was essentially introduced to the world a little under 7 years ago.

            You might want to go and update Wikipedia with your detailed knowledge then, since that's where I sourced my timescales from. Or indeed, the official RISC-V history page.

            Personally, I'd differentiate between the initial development of RISC as a concept, and the actual implementation of RISC-V. Since as the name suggests, RISC-V is actually the fifth generation of RISC design!

            Beyond that...



            major RISC-V milestones were the first tapeout of a RISC-V chip in 28nm FDSOI [...] in 2011, publication of a paper on the benefits of open instruction sets in 2014, the first RISC-V Workshop held in January 2015, and the RISC-V Foundation launch later that year with 36 Founding Members.

            For me, the fact that the design was open-sourced and taped out in 2011 is the key date; it may have then taken 5 or 6 years for RISC-V to be publicly debuted, but that doesn't change when the "1.0" specification was released.

            > Dave Patterson invented the term "RISC" and the first RISC I CPU around 1980-1981, not 1990. I can only assume you weren't born at those times and consider them prehistoric.

            Ooo. It's always a pleasure to be considered younger than I actually am ;) Sadly, while I was a little young to be using computers in 1980, I did start poking buttons on a ZX Spectrum in 1983 or so.

            And again, I was quoting Wikipedia:

            The term RISC dates from about 1980 [...and academic research...] resulted in the RISC instruction set DLX for the first edition of Computer Architecture: A Quantitative Approach in 1990 of which David Patterson was a co-author, and he later participated in the RISC-V origination.

            RISC may have been "named" in 1980 - ARM originally stood for Acorn RISC Machine back in 1983 - but again, for me, the release of the DLX paper in 1990 is where RISC-V was born, especially since the author of that paper - David Patterson himself - had a huge hand in designing RISC-V.

            But then, the nice thing about opinions is that everyone has one!

            > ARM does NOT allow you to add or remove things from their CPU core or the instruction set. Of course you can add whatever you like else in the SoC, as you don't license that from ARM and ARM doesn't make such IP

            Odd. I must have dreamed up the link which I added to my post, about how ARM lets you add custom instructions to your CPUs.

            No, wait...


            Certainly, "Arm Custom Instructions support the intelligent and rapid development of fully integrated custom CPU instructions" sounds like exactly what you're talking about, unless I'm missing something major.

            > The Raspberry Pi is very far from standard. There are simply a lot of them. (Compared to other SBCs, not compared to phones or tablets)

            This one is a bit more subjective. But for me, the argument would be that once something has a significant market share, it's effectively a standard, as also happened with Apple's iOS devices. Certainly, there's a very large and healthy eco-system for both iOS and RPi devices, with far more peripherals, expansions, etc available than for any other devices - or indeed, all other similar devices combined.

            I'd also note that the RPi is based on a fairly standard ARM "mobile phone" SoC, which means it's able to hook into standard Linux/Android/Windows toolings and libraries.

            So while I can see some merits to your argument, I still think that from a practical perspective, the RPi is a standard!

          2. Justthefacts Silver badge

            Re: Some serious questions.

            ARM doesn’t allow you to remove instructions, but they do allow you to add your own:


            It’s not widely taken up commercially, because *it’s not actually a very good idea*. As ARM explain in their white paper, making a co-processor is a perfectly good alternative in most cases.

            The main benefit of having custom instructions is very low latency, 1-3 clock cycles, which probably has its uses, but very, very rarely. Almost always when you want to optimise something, you are looking at something in a tight loop, which means operating on a body of data. Then it’s not only more efficient to do it in a full-custom silicon copro rather than faffing with CPU registers and hogging CPU data bus bandwidth; it also avoids reverifying and re-place-and-routing the CPU core (significant cost and risk).

            I’m not saying it’s *never* useful. But I have worked on half a dozen programs where this has come up, and in the end after careful tradeoff, the answer has *always* been a copro. I’d be interested to hear from anyone where the correct answer has genuinely been to add a special instruction - not an R&D project, but one which released a chip that was commercially successful.

  2. Henry Hallan

    Obvious Fake is Obvious

    Look on the bottom row and observe the Windows key. A RISC-V laptop would not run Windows.

    Someone photographed an old, worn generic laptop and put together an ambiguous press release. And you lot fell for it.

    I would love for there to be a RISC-V laptop but that ain't it.

    1. Anonymous Coward
      Anonymous Coward

      Re: Obvious Fake is Obvious

      You might be right. But if it's a prototype, why not - assuming they fit - shoehorn the new innards into a generic, possibly second-hand, laptop case? If it worked and didn't overheat (or hit whatever other problem might occur), you could be reasonably confident that at least that part of final production might be easier.

      1. Graham Cobb Silver badge

        Re: Obvious Fake is Obvious

        It doesn't even have to fit. It looks to me like the screen is taped to the box behind it. I'm guessing the prototype motherboard is inside the box - for ease of access and making hardware fixes if nothing else. If I was in the early stages of prototyping a new laptop I wouldn't want to have to dismantle it every time I wanted to swap in a new rev of a chip.

        This is probably the (low clock speed) prototype they gave the test guys to try using it like a real laptop and report on what fails ("the Rev 13B chip fixes the annoying flicker in the top left of the screen but the random disk errors that were fixed after Rev 12 are back").

    2. Greybearded old scrote Silver badge

      Re: Obvious Fake is Obvious

      Or they used an existing off-the-shelf keyboard.

    3. Dylan Martin

      Re: Obvious Fake is Obvious

      I don't think that's the smoking gun you think it is. It makes sense to me that someone would use a cheap, readily available laptop chassis from an ODM to test a RISC-V chip for a laptop. Like we said in the article, this is likely a prototype, not a commercial product, and ISCAS already said it was planning RISC-V laptops for 2022. It's also important to note this came from a presentation that was largely ignored by the press. I just happened to have the gumption to get up super early in the morning on my side of the pond to listen in on a livestream. They were mainly showing this to RISC-V developers.

    4. Dave 126 Silver badge

      Re: Obvious Fake is Obvious

      I wouldn't be surprised if it was cheaper to buy 20 or 100 keyboards with Windows English US keycaps already printed than it would be to source blank or custom keyboards.

      It doesn't matter to the user of a prototype device.

      1. the spectacularly refined chap

        Re: Obvious Fake is Obvious

        Particularly when for something like this you need to give the software guys 3-6 months at a minimum to put a basic software load on there. Forget simulation or "use this other board instead" - at times you really need something at least close to what will ship.

        I've had a Lego model of a crane on my bench before. Yes I was working on a full size one but for some reason the one made mostly of Lego was more convenient.

    5. Joe W Silver badge

      Re: Obvious Fake is Obvious

      My laptop has a Windows key despite not running Windows, and me having no intention of doing so. Your point was? (hey, it's a generic laptop keyboard...)

    6. werdsmith Silver badge

      Re: Obvious Fake is Obvious

      It's even got a £ sign on the 3 key. A UK keyboard, so not even a decent attempt at hoaxing us. If it was made in China I wouldn't expect a UK keyboard. A US one might be believable.

    7. Adrian 4

      Re: Obvious Fake is Obvious

      Acorn once produced a laptop by making an Arm-based motherboard to fit in some Olivetti plastics. A good way to manufacture a device where all your costly development is put into the electronics rather than spent on injection mouldings and other engineering.

    8. heyrick Silver badge

      Re: Obvious Fake is Obvious

      Ever think that maybe it's just using an off the shelf keyboard unit?

      (reads down: yup, obvious reason is obvious)

  3. Peter D

    It looks like

    A Gumtree advert for a stolen Dell corporate laptop.

  4. Anonymous Coward
    Anonymous Coward


    Cool...

    I'm not entirely sure why people would buy one at this stage, though. I think that the change to new processor ISAs is going to be gradual. Apple have shown that you can do it without end users noticing.

    Maybe qemu needs to be built into the Linux environment a bit more deeply - just run whatever binary turns up on whatever processor you happen to have?

    1. Bruce Hoult

      Re: Cool...

      Look up "binfmt_misc". I've been using it for many years to run x86, ARM, RISC-V (and others) binaries transparently on whatever board I'm currently using.

      Slower than native, of course, but much faster than Python.
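      A minimal sketch of that setup, assuming the qemu-user interpreter is installed at /usr/bin/qemu-riscv64-static (path and package name vary by distro) - the magic/mask bytes below match the riscv64 ELF header:

      ```shell
      # Needs root. Mount the binfmt_misc filesystem if it isn't already.
      mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc

      # Register qemu-riscv64 as the handler for RISC-V 64-bit ELF binaries.
      # The kernel parses the \xNN escapes itself, so a plain echo is fine.
      # Flags: C = use the binary's credentials, F = open the interpreter at
      # registration time (useful inside chroots and containers).
      echo ':qemu-riscv64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xf3\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-riscv64-static:CF' \
        > /proc/sys/fs/binfmt_misc/register
      ```

      After that, a riscv64 binary just runs: the kernel hands it to qemu transparently. Distro packages (e.g. Debian's qemu-user-static) normally install these registrations for you.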

  5. Dave 126 Silver badge

    Does any laptop have a video-passthrough mode where it behaves as a dumb monitor (plus keyboard and mouse) when a computer is attached via USB-C? I mean natively in hardware here, not through the OS.

    Today you can plug an Android phone or iPad into a TV or monitor and use it as you might a desktop. This model of use might be convenient for people developing for RISC V - have a RISC V computer in a small box that uses your existing (x86, ARM) laptop's screen and keyboard.

  6. breakfast Silver badge

    Been this way before

    I feel for whoever has this - we had an Archimedes when I was at secondary school, so I understand how knowing your computer was built on a RISC-based architecture and was technologically superior does very little to compensate when your friends are all talking about the cool games they can play on their Atari STs and Amigas, none of which ever came out on the Arc.

    Chocks Away was good, though, fair play.

    1. mark l 2 Silver badge

      Re: Been this way before

      I doubt, even if this prototype goes into production, that it will be aimed at gamers. It will more than likely come with a Linux distro, and even on x86 Linux most new games are Windows games running through WINE. So unless it comes with some sort of x86-to-RISC-V emulator that would allow games for Windows to run at a reasonable speed through WINE, the best you might achieve is ports of games like Doom and Quake, where the code is open source and can be recompiled for RISC-V.

      It does mention that a port of Android 12 has been made for RISC-V, so that might open up a few more commercial games, although from my experience with Android on x86 a lot of games have been compiled for running on ARM and won't run on other CPUs.

      Not everyone wants to play games on a computer though. There are millions of PCs out there that are only used for non-gaming stuff such as coding, office software, or just general internet usage, which a RISC-V laptop should easily be able to do.

    2. Fifth Horseman

      Re: Been this way before

      Zarch (I think it was called, long time ago...), the full game version of the 'Lander' demo that came with the Archimedes was good too, much better than the ports that were eventually done for the Amiga and the ST.

      The ARM in its initial implementation was an off-the-shelf general-purpose CPU, rather than an SoC. It was designed by Acorn, but manufactured and marketed by VLSI Technology (a major player in the early IBM PC clone motherboard chipset game), and went by the catchy name of VL86C010. Three other chips effectively 'made' the Archimedes; it is pretty easy to guess their part numbers.

      Up until recently I still had some engineering samples and original manufacturers' documentation, provided by a friendly VLSI distributor for some nerdy project a few of us were planning at University. The project never happened due to the collective discovery of women, alcohol, electric guitars, motorbikes and rock climbing, in no particular order.

      The Arch is the one computer I have owned that I regret no longer having.

  7. Nathan 6

    Solaris Box

    For folks who like vintage compute gear, a good way to get some marketing for this chip might be to port Solaris onto the hardware and sell it as an SBC. Not sure how much work that would involve, but it's the only reason I would even care about this chip given all the other x86/Arm options out there.

    1. Bartholomew

      Re: Solaris Box

      > might be to port/run Solaris onto the hardware

      Looks like there is already a port in the pipeline at least for illumos, which is derived from OpenSolaris, for RISC-V (and aarch64 AKA ARM64).

      They have already removed everything SPARC, while leaving any code that would help RISC-V and aarch64 ports.

  8. werdsmith Silver badge

    Absolute bullshit. I would buy a RISC V laptop tomorrow but that quite elderly windows laptop with a UK keyboard is never going to be it.

  9. trevorde Silver badge

    I want a RISC-V laptop

    said no one ever

  10. heyrick Silver badge

    For the majority, does it matter?

    Talking about normal people here, not us nerds or the gamers. They don't give a crap about what's inside. The question is "can it do X", where X is going to be some social media drivel or a streaming video platform. Having email and some sort of word processor might make it suitable for the lightweight WFH types, too.

    However, I think the main problem is going to be the one that affects everything that isn't mainstream. Does it have software? A lack of apps did Windows Mobile no favours. If it has software, is it compatible with the rest of the world? That means Word, and maybe these days some degree of Google/OneDrive integration.

    And does it have a good battery life? Always a useful thing on a laptop.

  11. ecofeco Silver badge

    OK, and?

    What are the benefits of a RISC laptop?

    What are the downsides?

    1. DJO Silver badge

      Re: OK, and?

      The answer to both of your questions is: Geek bragging rights.
