RISC-V on the Verge of Broad Adoption

(eetimes.com)

303 points | by childintime 1897 days ago

10 comments

  • ChuckMcM 1896 days ago
    Hah!

    EEtimes 2002 "Infiniband on the Verge of Broad Adoption" whoops :-)

    But more seriously, I think having a license-free CPU core with software support is a huge win; it will enable people like TSMC and GF to make 'jelly bean' CPUs that can be low cost and high volume. But they won't be that much different from the low-end ARM CPUs, which are basically the cost of packaging these days anyway.

    The market forces in a 14nm (last year's process node) world are pretty interesting. 90% of the cost is in dicing, testing, and packaging the parts.

    So the other market force is the Western Digitals of the world, who will make RISC-V embedded SoCs with exactly the right set of peripherals they need to make a billion disk-drive motor controllers. But most people won't see those chips in the marketplace. And there probably won't be a 'design to order' chip house that will make small volumes of these things.

    Imagine you are a Microchip or an ST Micro: what does RISC-V do for you? It lets you avoid the few cents you pay for the ARM license per chip, but does it let you differentiate any more? Look how well that worked out for the ATmega16 for Atmel.

    So that is the embedded market, kind of a wash.

    But what about "bigger" systems? Laptops or desktops or servers? Do the Pareto analysis on those systems and compare an ARM SoC to a RISC-V one. The costs are in memory, support chips, PCBs, tooling, etc. Not the CPU license.

    Will it let people build chips for phones like Apple does with their own GPUs and microarchitecture? Sure, and at less cost, but what then does that give the average consumer? A cheaper phone with market features? OK, that's a win, but they won't care whether it is RISC-V versus ARMv8 or whatnot.

    Bottom line is that I think it is great to have the ability to create computers that are not beholden to a certain vendor but I don't see the vector for getting them into general distribution where consumers actually benefit from the change.

    • Nokinside 1896 days ago
      ARM royalties are 1.0% - 2.0% of the chip cost, and negotiable: the higher the volume, the smaller the royalty. The foundry pays an extra 0.5% for the physical IP package.
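
      As a rough back-of-envelope (the rates above, plus an assumed chip price; none of these are actual contract terms):

```python
# Back-of-envelope royalty math. The 1.5% royalty, 0.5% foundry fee,
# and $4 chip price are illustrative assumptions, not real contract terms.
def royalty_per_chip(chip_price, royalty_rate, foundry_rate=0.005):
    """Per-chip royalty plus the foundry's physical-IP cut."""
    return chip_price * (royalty_rate + foundry_rate)

cost = royalty_per_chip(4.00, 0.015)
print(f"${cost:.2f} per chip")  # a few cents on a $4 part
```

      At these rates the royalty is real money at huge volumes, but a rounding error next to dicing, test, and packaging.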

      When somebody decides to invest in a large-volume laptop/desktop/server microarchitecture to compete with Intel and AMD, nothing prevents them from adding their own proprietary ISA extensions. RISC-V allows classic embrace-and-extend tactics. RISC-V for the Microsoft Surface and RISC-V for Google Chrome may not be fully compatible with each other or with RISC-V AWS cloud processors.

      • microcolonel 1896 days ago
        > RISC-V for the Microsoft Surface and RISC-V for Google Chrome may not be fully compatible with each other or with RISC-V AWS cloud processors.

        It is up to software distributors to decide whether it's worthwhile to use those extensions; provided they are even allowed to run their own software on the thing to begin with (which was not the case with Surface RT anyway, extended ISA or not).

        > When somebody decides to invest in a large-volume laptop/desktop/server microarchitecture to compete with Intel and AMD, nothing prevents them from adding their own proprietary ISA extensions.

        Nothing prevents AMD and Intel from doing this either, and they do it all the time! You just don't notice because they agreed privately to cross-license the extensions.

        And at the end of the day, the parts of the platform that you'd use for a standard operating system (notwithstanding new tagged memory or other exotic architectures, which I'd argue are a good, innovative form of incompatibility) are fully standardized already on RISC-V. If Microsoft wants to make their next version of NT on RISC-V rely on Qualcomm-proprietary instructions, that's their prerogative.

        • int_19h 1896 days ago
          > provided they are even allowed to run their own software on the thing to begin with (which was not the case with Surface RT anyway, extended ISA or not).

          Surface RT allowed Windows Store apps, including native ones.

          • cyphar 1896 days ago
            I think you're agreeing with GP -- it only allowed software signed by Microsoft (in other words, you couldn't run your own software or operating system unless Microsoft said so).

            In this hypothetical proprietary RISC-V scenario, you might not even be able to run unsigned code on the device, and so nobody is going to bother supporting its weird instruction set.

      • est31 1896 days ago
        Embrace & extend is a big danger indeed. We might be headed into an era where there aren't a few well-understood proprietary ISAs, but instead lots of badly understood proprietary ISAs based on some open-source core.
        • pjc50 1896 days ago
          Ah, yes. "Know how every ARM system boots slightly differently with different peripherals? How about we encode this wacky incompatibility into the instruction set too?"
      • baybal2 1896 days ago
        > RISC-V allows classic embrace-and-extend tactics. RISC-V for the Microsoft Surface and RISC-V for Google Chrome may not be fully compatible with each other or with RISC-V AWS cloud processors.

        The zoo of endianness and extension support did nothing to hinder ARM's rise.

        • pertymcpert 1896 days ago
          Endianness is not the same, it's a standard part of the architecture.

          And the extensions could be tested for based on ISA version. Most of ARM's history is non-divergent.

      • nickik 1896 days ago
        Most open software that is not vendor-provided will run on RISC-V. Meaning no matter if it's Google or Microsoft, that stuff will work.

        It won't use their proprietary extensions, but often I don't really care about that; and if they pay for developing or updating all the open-source software so it runs on their extensions, then that's fine too.

    • AnthonyMouse 1896 days ago
      > But what about "bigger" systems? Laptops or desktops or servers? Do the Pareto analysis on those systems and compare an ARM SoC to a RISC-V one. The costs are in memory, support chips, PCBs, tooling, etc. Not the CPU license.

      But it could be a step towards a "GPL for hardware" type of shift. Someone puts in the work or the money to do a core which is competitive on some important metric (power, cost, etc.), and licenses it at no cost but under the terms that if you make changes you have to publish them and under the same license.

      Then some people use it because it's good enough and free-as-in-beer, but if any of them improve it then it gets better. So then more people use it, until it's the Linux of hardware.

      Moreover, if you have to publish the changes then you have to document the changes (equivalent of releasing source code), which means hardware that isn't a black box will gain a competitive advantage over hardware that is.

      • Nokinside 1896 days ago
        New high performance microarchitecture costs hundreds of millions to develop. You need to develop new microarchitecture every five years or so to stay competitive.

        It's not a hacking-VHDL-in-your-basement type of thing where everyone can add new stuff and it just works together through a commonly agreed API. All changes must run through a long chain of verification and testing, from functional models to physical placement, testing, and verification.

        • AnthonyMouse 1896 days ago
          Which is why you don't start off by trying to challenge AMD and Intel on raw performance.

          But more to the point, most of Linux isn't developed in basements either. Someone like Google/Facebook/Amazon/Microsoft decides it's worth their resources to make a more power efficient chip for their datacenters and more valuable to get further improvements from third parties than to try to sell it externally at commodity margins, or that "commoditize your complement" would be a good thing to do for cell phone chips, and now you've got a billion dollars in funding.

          • Nokinside 1896 days ago
            I believe there is a good economic model for open hardware design, but it's not the software model. "Commoditize your complement" means that only those with physical hardware can make money.

            IMHO, a syndicate with clever licensing around RISC-V infrastructure would be the best idea.

            The clever licensing part:

            1) Mandatory licensing: licensees must license to everyone under the same conditions; licensees can't refuse to license.

            2) Pricing model: licensees periodically announce valuations at which they commit to sell their IP, and must sell their intellectual property at that value to anyone who is willing to commit to smaller license fees. There could be a periodic auction each year.

            • AnthonyMouse 1896 days ago
              > "Commoditize your complement" means that only those with physical hardware can make money.

              Those companies are the ones with physical hardware.

              But it's also not true that they'd be the only ones to make money. If you lower your supplier's margins on hardware and then pass half the savings on to your customers, you get both higher margins and higher volumes. But your customers still get lower prices, which means they make more money too, on top of the transactions that it made feasible that weren't previously.
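
              A toy example with made-up numbers, just to illustrate the arithmetic:

```python
# Made-up numbers illustrating "commoditize your complement":
# squeezing a supplier's margin and passing half the savings on
# leaves you with both a higher margin and a lower price.
part_before, part_after = 100.0, 80.0      # component cost per unit
savings = part_before - part_after          # $20 saved per unit

price_before = 150.0
price_after = price_before - savings / 2    # pass half the savings on: $140

margin_before = price_before - part_before  # $50 per unit
margin_after = price_after - part_after     # $60 per unit
print(margin_before, margin_after, price_after)
```

              The lower price then tends to raise volume on top of the per-unit margin gain.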

        • dfox 1896 days ago
          Reportedly, Intel does mostly fully automated synthesis and instead of manually tuning the layout, they improve their essentially fully in-house Verilog to silicon toolchain.
          • Nokinside 1896 days ago
            Everyone uses automated toolchains, but getting a physical layout out of the toolchain does not mean it works or performs.

            What makes it possible to design small volumes of custom ASICs for a reasonable price is an old and well-known process, increased error margins (sacrificing performance), or the use of existing modules. Simulation mostly works and can be trusted, yield is predictable, etc.

      • pjmlp 1896 days ago
        At a time when most sponsors of FOSS software are moving away from the GPL.
        • AnthonyMouse 1896 days ago
          The GPL continues to be a popular license. It just got popular enough that in v3 they could take a more aggressive stance on patents and tivoization. Which is inherently a trade off between goals and adoption. They chose to achieve more of their goals at the expense of some usage.
          • pjmlp 1896 days ago
            To the point that all major FOSS alternatives on the embedded space, including Zephyr from Linux Foundation aren't based on GPL.

            Android already removed their dependency on GCC, following Apple's footsteps, and depending on how Fuchsia turns out, eventually the Linux kernel as well.

            • protomikron 1896 days ago
              But Android is still based on Linux and you need GCC to compile the kernel (yeah I know, other compilers can do it too, but GCC is in some kind of symbiosis with Linux, which is a good thing I guess).

              I know about the Fuchsia project, but so far it can't compete with Linux or other established OSes.

              • pjmlp 1896 days ago
                No, you don't. Android has removed all dependencies on GCC with the Treble changes; upstream just did not take all the changes done by Google.

                Quote:

                "Android 8.0 and higher support only Clang/LLVM for building the Android platform. Join the android-llvm group to pose questions and get help. Report NDK/compiler issues at the NDK GitHub.

                For the Native Development Kit (NDK) and legacy kernels, GCC 4.9 included in the AOSP master branch (under prebuilts/) may also be used."

                From https://source.android.com/setup/build/requirements#software...

                "Removed GCC and gnustl/stlport. Added lld."

                From https://android.googlesource.com/platform/ndk/+/master/docs/...

                There are also a couple of Linux/Clang Conferences where Google goes through the kernel and clang changes they have done to accomplish it.

                Android might have the Linux kernel under the hood, but it isn't Linux as many know it.

                • snvzz 1895 days ago
                  >Android might have the Linux kernel under the hood, but it isn't Linux as many know it.

                  Not like it matters. They're replacing even that. See Fuchsia.

                  And that's a good thing: Fuchsia seems to have a better design (microkernel, multiserver).

              • pertymcpert 1896 days ago
                Why is gcc dependency a good thing for the kernel?
                • saagarjha 1896 days ago
                  It keeps both GPL?
                  • sigjuice 1896 days ago
                    How?
                    • onli 1896 days ago
                      You're right to ask, it does not. In the FOSS world the license of the compiler has nothing to do with the license of the compiled software, and that also goes for the GPL 3.

                      Probably some leftover thinking from proprietary products that tried to apply license restrictions on the compiled output.

                      • pjmlp 1896 days ago
                        Actually, GCC has an explicit clause stating that the GPL doesn't apply to the generated binary; otherwise you could not link to the C runtime library.

                        "GCC Runtime Library Exception"

                        https://www.gnu.org/licenses/gcc-exception-3.1.en.html

                        This document was also the inspiration for the Classpath exception in OpenJDK.

                        • pertymcpert 1896 days ago
                          Right but that's because the compiler is emitting code into the binary that exists verbatim in the GCC source.
                          • pjmlp 1895 days ago
                            Which makes the assertion false, otherwise there wouldn't be a need for the "GCC Runtime Library Exception".

                            > "In the FOSS world the license of the compiler has nothing to do with the license of the compiled software".

                            • onli 1895 days ago
                              There is no other FOSS compiler that influences the license of the compiled software, afaik. So I think that statement holds up anyway.
    • smallstepforman 1896 days ago
      The info everyone is missing is the code density comparison with ARM. RISC-V is more efficient and has about 10% denser code, which translates to more instructions fitting in i-cache, less memory pressure, and ultimately better performance and battery life. Long-term roadmap, that's a win for RISC-V.
      • bsder 1896 days ago
        > RISC-V is more efficient and has about 10% denser code, which translates to more instructions fitting in i-cache, less memory pressure, and ultimately better performance and battery life. Long-term roadmap, that's a win for RISC-V.

        Only in the most extreme cases.

        1) Battery life isn't dominated by run current for the vast majority of embedded devices. Sleep current dominates (most cases) or peripheral current dominates (RF transmit/receive, for example). You try to dial down the number of times you turn on until it's below the amount of energy you burn while off.

        2) RAM is expensive; flash not so much. Code space isn't the issue--10% almost certainly not. Correlated: this is why I expect you really won't see 64 bits making a lot of inroads into embedded--doubling RAM consumption is expensive on embedded.

      • ip26 1896 days ago
        I'm sorry, did you just describe the core advantage of a new RISC CPU against incumbents as smaller code size? Where am I, and what is happening?
        • wmf 1896 days ago
          RISC-V has a code compression extension so it's not classic RISC but it's still far simpler than CISC. https://riscv.org/wp-content/uploads/2015/05/riscv-compresse...
        • imtringued 1896 days ago
          That's a pretty damn good argument. It's 10% ahead of the best ISAs, which took decades of development. Just think of how adoption would be affected if it were 30% worse.

          In other words: not only is it better in terms of royalties and ecosystem, it's also better at everything else. Isn't that terrific?

          • marcosdumay 1896 days ago
            I read the GP as saying that code size was always a weakness of RISC, and seemingly the largest one.

            And here it is compared against a classical CISC platform and a hybrid one highly optimized for code size, and winning. Which just makes RISC-V even more impressive than any non-optimized design beating the incumbents would be.

        • dbcurtis 1896 days ago
          Is it a core advantage? Maybe. But smaller code size has beneficial effects on silicon cost. Choice 1: If you can benchmark the same on important workloads with a 10% smaller I-cache, make the die smaller. Manufacturing costs go down with a greater-than-square-law effect of die area. Choice 2: Use the die area freed up to put more functional units in the same area.

          Core advantage? I will let others debate that. Significant: surely.
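
          The greater-than-square-law claim can be sketched with the classic yield model (the wafer cost and defect density below are assumed, illustrative values):

```python
import math

# Cost per *good* die: wafer cost spread over the dies that yield.
# With a Poisson yield model, cost grows as area * exp(D0 * area),
# i.e. faster than linearly in die area. All numbers are illustrative.
def cost_per_good_die(area_mm2, wafer_cost=5000.0,
                      wafer_area_mm2=70000.0,   # ~300 mm wafer, ignoring edge loss
                      defect_density=0.002):    # defects per mm^2 (assumed)
    dies_per_wafer = wafer_area_mm2 / area_mm2
    yield_fraction = math.exp(-defect_density * area_mm2)
    return wafer_cost / (dies_per_wafer * yield_fraction)

# Shrinking a 100 mm^2 die by 10% cuts its cost by more than 10%:
big, small = cost_per_good_die(100.0), cost_per_good_die(90.0)
print(f"{(1 - small / big) * 100:.1f}% cheaper")
```

          Real cost models add edge loss, defect clustering, and binning, but the direction is the same: smaller dies win more than proportionally.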

        • pcwalton 1896 days ago
          x86-64 isn't very space efficient anymore, so it's not hard to beat. Even AArch64, with a fixed 32-bit instruction size, competes well with x86-64.

          REX prefixes really killed the space efficiency of the x86 architecture.

        • zerohp 1896 days ago
          Smaller code size makes your caches more effective. L1 instruction cache is size limited because it's on a critical timing path. Increasing its size limits your operating frequency.
        • hrydgard 1896 days ago
          Still matters in many embedded applications.
          • 0x1DEACAFE 1896 days ago
            Code size for RV32IMAC is still pretty mediocre with the current GCC/RISCV compiler. And the standard library they use by default is pretty sub-optimal. I know they're working on it, and it's clear they're making quick progress, but it's not easy at the moment. The last project I worked on, I had to abandon ABI conventions and hand craft large chunks of code.
      • duskwuff 1896 days ago
        Is "10% denser" comparing RV32 or RV64 against A32, T32, or A64? And is that with or without the Compressed Instructions extension?
        • microcolonel 1896 days ago
          As of early 2016, with the GCC port at that time, RV32GC was as dense as Thumb, and RV64GC was denser than AArch64 and every other major 64-bit ISA, including AMD64. Though RV64G (no C) was in some extreme cases up to 50% larger than AArch64 (due to inlining memcpy and memset, which are a bit larger without compressed instructions), but usually around the same (except MIPS64, which is way larger than the other 64-bit ISAs, probably because of exposed delay slots). [0]

          There's some indication that density should have increased somewhat since then, but I haven't looked at it myself.

          [0]: https://youtu.be/Ii_pEXKKYUg

          • Symmetry 1896 days ago
            That's why I have a lot more faith in RISC-V's ability to take on relatively high end embedded tasks than lower end ones. I'd expect compression to be too expensive, transistor wise, for many roles where you'd use an ARM Cortex M2 or such and program memory is at a premium in those places.
            • audunw 1896 days ago
              > I'd expect compression to be too expensive

              It's not the kind of compression you might be thinking of. It's just 16-bit "shortcuts" for some of the common 32-bit instructions. The impact in gate count should be minimal. In a lot of these applications you'll have the code in on-chip non-volatile memory which means reducing code size may also reduce chip area.

              I think with relatively little increase in gate count you could also make some sequences of two 16-bit instructions execute simultaneously, which could yield nice performance improvements for micro-controller cores.

              Also, you might be surprised at how "big" many micro-controllers are becoming these days.
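
              To make the "shortcut" point concrete, here's a sketch (following the published RVC encoding; any transcription mistakes are mine) of how one compressed instruction, c.addi, expands to its full 32-bit form:

```python
# Expand the 16-bit c.addi rd, imm into the 32-bit addi rd, rd, imm.
# Bit layout follows the RISC-V compressed (RVC) CI instruction format.
def expand_c_addi(insn16):
    assert insn16 & 0b11 == 0b01          # quadrant 1
    assert (insn16 >> 13) == 0b000        # funct3 = 000 -> c.addi
    rd = (insn16 >> 7) & 0x1F             # rd is both source and destination
    imm = ((insn16 >> 12) & 1) << 5 | (insn16 >> 2) & 0x1F
    if imm & 0x20:                        # sign-extend the 6-bit immediate
        imm -= 64
    # 32-bit ADDI: imm[11:0] | rs1 | funct3=000 | rd | opcode=0010011
    return ((imm & 0xFFF) << 20) | (rd << 15) | (rd << 7) | 0x13

# c.addi x10, 3 (0x050D) expands to addi x10, x10, 3 (0x00350513).
print(hex(expand_c_addi(0x050D)))  # 0x350513
```

              A front end just widens these before (or as part of) decode, which is why the gate-count impact is small.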

            • microcolonel 1896 days ago
              > I'd expect compression to be too expensive, transistor wise, for many roles where you'd use an ARM Cortex M2 or such...

              Decoding the "compressed" instructions is actually pretty straightforward, it doesn't add much complexity to a design. ARM Cortex M0+/M3/M4 implements a similar (but more complex) "compressed" instruction set called Thumb, and comparable RISC-V cores available from SiFive are smaller, faster, and more efficient.

              In a very small RISC-V core by the venerable Clifford Wolf called PicoRV32 [0], you can look at the complexity introduced by configuring it with the COMPRESSED_ISA option.

              > ...and program memory is at a premium in those places.

              Program memory is one thing, but on processors of all sizes, code size has a big impact on performance in common types of program.

              [0]: https://github.com/cliffordwolf/picorv32

            • TomVDB 1896 days ago
              The cost of compression is very small for low performance designs (single instruction in-order issue). It's very straightforward to implement.

              It gets harder for more complex designs though.

              But for cases where you want to replace a Cortex M2, the area increase will be trivial.

            • duskwuff 1896 days ago
              Cortex-M2? ¿Que? CM2 doesn't exist -- the naming scheme jumps straight from CM1 (which is FPGA-only) to CM3.
      • 0x1DEACAFE 1896 days ago
        I think this is wrong. Certainly the GCC toolchain spits out some remarkably mediocre code. RISC-V compressed is generally on par with Thumb-2, and where it differed, Thumb-2 seemed to be a tiny bit more dense.

        If you compare GCC/ARM with GCC/RISCV the difference isn't too great, but even the IAR ARM compiler gives you noticeable improvements over GCC/RISCV. And ARM's compilers are actually quite good with respect to code size; MUCH better than GCC/RISCV (or even GCC/ARM).

        That being said, were I to add some custom instructions, I would COMPLETELY prefer to do it with RISC-V than with ARM.

        (Though the gcc/riscv toolchain is getting better pretty quickly.)

      • ChuckMcM 1896 days ago
        That is pretty cool. Is there a Thumb-2 vs RISC-V paper somewhere for 32-bit RISC-V?
    • kragen 1895 days ago
      I wonder if you can salvage useful RISC-V SoCs from WD disks three or five years down the road. That kind of thing hasn't been useful in the Moore era; now that it's over, maybe it'll make more sense.

      I'm not sure what happened with the ATMega. As far as I can tell, Atmel basically stopped developing AVR chips almost entirely around 2006, just selling the old ones; presumably they were having a hard time competing. With Cortex-M0s like the LPC2100? With PICs? With bargain-basement 10¢ Chinese microcontrollers made with obsolete process nodes? I'm not sure. The fact that they eventually sold the AVR line to Microchip makes me suspect it was PIC, but nowadays the chips that look like good AVR alternatives to me are almost entirely 32-bit ARMs.

      The reasons they look like good alternatives, though, don't have a lot to do with "differentiation". The AVR was attractive because, as an 8-bit chip, it could be used in places where you couldn't afford a 32-bit or even a 16-bit chip, and it used less power. But then fabrication processes improved to the point where a 32-bit chip costs the same as an AVR, uses less power, runs far faster, and has more memory. Maybe if they'd kept developing the AVR that wouldn't be true — or maybe at that price point almost all the cost goes to dicing, testing, and packaging, which is what you seem to be saying.

      > But what about "bigger" systems?… Not the CPU license.

      I feel like the major cost of the CPU license is not the money you pay Intel but the built-in IME backdoor it ships with.

    • 0x1DEACAFE 1896 days ago
      Yup. Everything you say is true.

      I would add, though, that there are a couple of places in the low end where one or two cents seem to matter. Or at least product managers think they do.

      In the mid-range, RISC-V providers (certainly SiFive) are pushing the idea that it's easy/easier to add custom logic to your RISC-V based die. I'm not able to judge whether that's really true, but since they're starting with a clean slate in terms of interface logic then maybe. I can't imagine it being harder to add your own tile to a RISC-V design than an ARM design.

      The tools seem to me to be pretty primitive, though. IAR says they'll have a compiler soonish. GCC still emits some "not completely great" code. (I mean, don't get me wrong, it's not horrible, but compared to the mature toolchains like ARM, Intel & MIPS, it's kind of bad.) Though it's certainly getting better over time.

      If you look at the architecture, though, it does seem a bit easier to implement than ARM, and there's more to open designs than just the "free as in beer" argument.

      RISC-V seems to be appealing to people who have a strong desire to "play around with" an architecture or a solution or are financially motivated to add logic that's somehow hard to add to ARM.

    • pault 1896 days ago
      > Will it let people build chips for phones like Apple does with their own GPUs and microarchitecture? Sure, and at less cost, but what then does that give the average consumer?

      Isn't the biggest benefit of Apple's hardware the user experience enabled by top to bottom ownership of the software and hardware stack? I would imagine being able to replicate that without the enormous upfront design costs would be game changing, but I am not familiar with the unit economics of CPUs.

      • ChuckMcM 1896 days ago
        It is certainly a benefit, and it would allow a handset maker to differentiate their offering more easily than they can with just software. That said, actually building custom chips for a phone is a very high risk spend because you are in a cut throat market for "off brand" phones, you don't know the volumes you can achieve, and the upfront costs of the chip are going to be large. Trying to get the calculus right so that you make money will be quite challenging.
    • milesvp 1895 days ago
      I've only recently become aware of RISC-V, but it was in the context of FPGAs. I've been following FPGAs for years, and recently the open tooling has become good enough for cheap FPGAs that I'm seeing a lot of projects starting to use them. You can get an FPGA that can implement a RISC-V core for $6 in small batches. While you could probably get a much cheaper ARM CPU that is way more powerful, you have an FPGA that can do things no comparably priced CPU can do, and you can change it without replacing the chip.

      Personally, I think having a license-free CPU will be an integral part of continuing to make FPGAs more and more viable, and I think we're going to start seeing them in places that no one ever really imagined 15 years ago.

      https://hackaday.com/2018/12/25/how-a-microcontroller-hiding...

    • DannyB2 1896 days ago
      > but I don't see the vector for getting them into general distribution

      There once was a time when Linux was a teeny little player. Many people thought it would never amount to anything. After all, Microsoft was the big player; support and tooling for it were everywhere. It was possible to license it for embedded use. The only benefit of Linux was freedom, and that couldn't possibly be enough of a benefit.

      • cwyers 1896 days ago
        Linux didn't start off competing with Windows; it started off competing with commercial UNIX. (Most people were running Linux on hardware that had Windows licensing costs rolled in.) The price gap between Linux and a "real" UNIX was huge, especially given that you had to buy hardware to match. Is RISC-V really a huge difference in price for any ARM CPU (as opposed to ISA) licensee?
        • TomVDB 1896 days ago
          This was discussed just a couple of days ago here as well, so I'll just link to one of my comments there: https://news.ycombinator.com/item?id=19119398

          So the answer is: it really depends.

          I think RISC-V will quickly infiltrate the invisible on-chip microcontrollers. The ones that manage power regulation, SDRAM calibration training, etc. There is very little friction there.

          Then it will slowly enter low cost microcontrollers where cost is absolutely essential.

          The high-end will be IMO negligible for years to come.

          • rwmj 1896 days ago
            This! I gave a talk about how difficult it will be for RISC-V to enter the server space: https://rwmj.wordpress.com/2018/05/21/my-talk-from-the-risc-...

            For full disclosure, I work for Red Hat and am keeping an eye on RISC-V for servers, and I hope it does succeed but there's a mountain to climb and lots of ways to screw up.

        • pjmlp 1896 days ago
          Quite right. Linux contributed more to the commercial UNIXes' downfall than anything else.

          For example, gcc was pretty much ignored until Sun started the trend of selling UNIX SDK tooling separately instead of bundling it with the OS.

        • marcosdumay 1896 days ago
          > it started off competing with commercial UNIX

          It started off as a Unix-like toy/learning-platform that people could run on devices too cheap to run Unix.

          Then it evolved into a Unix that people could use on devices too cheap to run real Unixes.

          It took the best part of a decade to make it competitive with the other Unixes. Which just reinforces your point, I guess.

        • protomikron 1896 days ago
          But there was definitely a time when Linux was competing with Minix, which was low-priced (affordable to a hacker) and had more features.

          However, its development wasn't done in the open and it was not "free" (in the FOSS sense), so its usage in certain settings (e.g. commercial) was not possible. I guess Linux's open license played a huge role in its adoption, not just the fact that it was free as in "beer" (compared to expensive traditional Unix systems). There might be disagreements about the technical choices made (see the legendary conversation about kernel architecture), but in terms of features Linux surpassed Minix in a short time (and today it sets the state of the art for commercial Unices).

      • zepto 1896 days ago
        There’s no analogy here.

        It only took a few hundred dollars' worth of hardware to use Linux back in the day, and a Windows license was a significant percentage of that.

        To use a new chip architecture? It takes a design team and booking fab time, i.e. tens of millions of dollars.

      • pjmlp 1896 days ago
        That teeny little player profited from the contributions from IBM, Intel, Compaq, Oracle, and nowadays even Microsoft.

        Non-copyleft UNIX clones weren't so lucky.

        Even if RISC-V succeeds in the market, there isn't any guarantee that we won't have a plethora of incompatible extensions.

    • nickik 1896 days ago
      A couple of things.

      Dealing with ARM is not just the % cost, but also lawyers and time. If you are a small company, that is very valuable.

      The choice WD made is not about cost but about an architecture that is adaptable. The whole argument is that you DON'T have to be Apple to make it worth it to get a custom chip.

      Also, you will have more vendors to choose from when doing a product.

      > Bottom line is that I think it is great to have the ability to create computers that are not beholden to a certain vendor but I don't see the vector for getting them into general distribution where consumers actually benefit from the change.

      End consumers never benefit from any individual technology. They buy products that work. RISC-V is more important for the overall industry, and especially for those interested in open source.

      As long as we base everything on proprietary ISAs we cannot have open chip projects that run lots of common software, and that stops open silicon in its tracks, and with it open hardware as a whole.

      RISC-V makes an open source silicon culture legally possible, and helps make it practically viable.

    • HeadsUpHigh 1896 days ago
      There is also the market for TV set-top boxes and a lot more embedded applications for stuff like remote controllers, IoT and more. Then on top of that you have stuff like the Kindle. Think how many copies these market segments have sold. ARM takes a couple of cents per copy. Now keep in mind that these are rather small and simple CPUs. Some Allwinner chips sell for less than $4 a piece and they sell millions per quarter. That's enough revenue saved for them to cover the costs of moving to RISC-V.
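      The break-even arithmetic behind that claim is easy to sketch. The numbers below are illustrative guesses, not actual ARM royalty or porting figures:

```python
def breakeven_units(porting_cost_usd: float, royalty_per_unit_usd: float) -> float:
    """Units shipped before the saved per-unit royalty repays a one-off porting cost."""
    return porting_cost_usd / royalty_per_unit_usd

# Hypothetical: a $2M RISC-V port amortized against a 2-cent per-unit ARM royalty.
units = breakeven_units(2_000_000, 0.02)
print(f"break-even at {units:,.0f} units")  # break-even at 100,000,000 units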
  • snazz 1896 days ago
    I wonder what kind of BIOS/EFI replacement there will be on RISC-V motherboards. A brand new ISA seems like a great time to throw out any legacy features and focus on future-proof low-level firmware.
    • wyldfire 1896 days ago
      Sadly, everything that's come since x86 has done bootstrapping in a much more device-specific way, and it always comes with signed bootloaders. If you're lucky you can opt out. RISC-V will likely be designed with the goal of eliminating royalties, not making extensible/portable bootloaders.
      • est31 1896 days ago
        I think Microsoft was actually a force for standardization here: they didn't want to make multiple versions of Windows for multiple IBM PC implementations. So everything is designed in a way that you can have one OS image that boots on a wide range of hardware. Now compare that to ARM :). Here basically every device/SoC family needs its own linux build.
      • chriswarbo 1896 days ago
        I was a fan of OpenFirmware, which AFAIK was royalty-free and cross-platform (with a layer of Forth abstracting between software and hardware)
        • zzo38computer 1896 days ago
          Another useful thing about Forth is that even if there is no operating system able to boot, the computer will still be usable.
      • nickik 1896 days ago
        It's absolutely a goal of RISC-V to be friendly to good bootloaders, and there are lots and lots of discussions going on about how these interfaces should look.

        This stuff is still in flux, but go into the working groups for privileged architecture and security and you will see all these discussions.

        RISC-V was never and will never be designed with the goal of eliminating royalties.

    • tyingq 1896 days ago
      We want something like this. But I suspect "broad adoption" is more along the lines of Western Digital not paying ARM royalties for embedded SATA controllers (or similar).

      Edit: Intel could use it for their management processor just for the irony :)

      • dejaime 1896 days ago
        I suppose that's true, it'll probably get adopted on those "embedded yet invisible" uses.

        Still, I believe that does bode well for more general adoption, once it starts replacing ARM on the OEM side of things, by having a positive effect on its hardware price.

    • ronsor 1896 days ago
      UEFI/EFI is that "future proofed low level firmware" -- I think there's already a port of it to RISC-V processors
      • nickik 1896 days ago
        UEFI is a pretty terrible firmware and a huge mess. HP ported it for RISC-V but they seem to have lost interest.

        The open source firmware people, however, are doing a lot: coreboot, u-boot, LinuxBoot and so on. There are ongoing discussions about how to design the low-level interfaces.

        • StillBored 1894 days ago
          I don't see how you can say a specification is a mess. Particularly since it (and OpenFirmware) tends to implement far more of what is required to boot a "generic" OS than any of the firmware projects you mention, projects that are all mostly Linux bootloaders. Further, there is an open source, mostly complete UEFI implementation (TianoCore). Some people dislike TianoCore for mostly religious reasons (it has 2-character tabs and an odd build system), but those reasons actually have nothing to do with base UEFI.

          Finally, UEFI as the OS interface is actually being embraced by u-boot and coreboot as the standard OS loader. That's because they have realized that the services provided by UEFI actually do solve many of the problems users of u-boot etc. systems experience. For one, it standardizes the update process for the actual firmware, as well as providing services/controls for managing the OS boot process following an update. It also has interfaces for plug-in cards (PCIe option ROMs) and many other features that turn out to be critical to building a generic computing device.

          • nickik 1892 days ago
            > I don't see how you can say a specification is a mess.

            A specification can absolutely be a mess: over-complicated for what is needed 90% of the time, and in the other cases also not optimal.

            > Further, there is an open source mostly complete UEFI (tianocore) implementation.

            TianoCore is not really complete for what you actually need, and most vendors are so far downstream that the advantages of real open source don't apply.

            > Thats because they have realized that the services provided by UEFI actually do solve many of the problem users of uboot/etc systems experience.

            I understand that. But there is a reason why Facebook, Google and other providers move away from UEFI.

            > For one, it standardizes the update process for the actual firmware, as well as providing services/controls for managing the OS boot process following update/etc. It also has interfaces for plug in cards (PCIe option roms) and many other features that turn out to be critical to building a generic computing device.

            The way the update process is implemented is incredibly sub-optimal and I have heard people from Intel agree that it is so.

            The problem is that it creates an unnecessary parallel universe that is far more insecure, far harder to understand and with way worse tooling.

            Check out this talk by one of the people who wrote UEFI and he admits many of the issues: https://www.youtube.com/watch?v=1XDYORK2z_M

            Then check out this by one of the Linuxboot people going into many of the existing problems from his perspective: https://www.youtube.com/watch?v=ZyZfS00LZ70

            • StillBored 1892 days ago
              Google and Facebook are about the opposite of who you should be listening to for advice on general purpose systems. Their machines are basically embedded servers, and exist in a very strict environment with lots of engineering oversight. It's almost the exact definition of a mono-culture. And Google is the mother of Android, the very definition of a huge mess. You need look no further than the handsets being abandoned after a year because it's basically impossible to actually maintain a platform without a staff of dozens of engineers just to keep it working.

              UEFI is a specification designed to allow a machine to boot and be managed by a multitude of OSes. That means, yes, it may be a bit over-complicated in places, but those over-complications tend to serve a purpose (or did). I don't think anyone imagines that UEFI is perfect; it's not. But tossing out u-boot or whatever as an alternative is extremely myopic, as u-boot doesn't really even provide enough firmware services for Linux in its current state, much less Windows, or some future OS not yet thought of. That is the point of UEFI: to attempt to fill the gaps in what is possible with a given platform without creating a wild west of incompatible formats and hacky solutions for every platform (which is the current state of u-boot/DT despite nearly a decade of work).

              • nickik 1892 days ago
                > Google and facebook are about the opposite of who you should be listening to for advice on general purpose systems. Their machines are basically embedded servers, and exist in a very strict environment with lots of engineering oversight.

                They have so many servers and different configuration needs that they need to boot their servers reliably, securely and with integrity, and they need to boot a wide variety of different systems.

                > UEFI, is

                You didn't seem to watch any of the sources that I provided. I'm not making an argument for u-boot. The systems that I recommend as better than UEFI can do everything UEFI can do, actually do much more, and are much more flexible, not to mention far more secure.

                > That is the point with UEFI, to attempt to fill the gaps in what is possible with a given platform without creating a wild west of incompatible formats and hacky solutions for every platform (which is the current state of uboot/DT despite nearly a decade of work).

                You don't seem to know how UEFI actually works on servers. Each vendor uses tons of old, insecure, bloated firmware full of different drivers that are very badly maintained and don't get security updates. UEFI is at the point where, for a commercial server, there are more lines of code than in the Linux OS you are booting into. It's a total security nightmare and a horrible situation in terms of open source, as most of these things are closed source. The UEFI core might be open source, but even that is not actually used and tracked upstream; vendors all use their own forks.

                UEFI is a completely separate layer that is its own OS, reinventing the wheel and putting a totally insecure ring under your OS.

                • StillBored 1891 days ago
                  I think you're conflating UEFI with the entire firmware stack. Anyway, there is a bit of a LOL here. UEFI DXE/EFI drivers don't have a security model; once they are trusted they have free rein (similar to how a Linux module can mostly do anything it wants). But they don't tend to run under the OS, outside of the very thin UEFI runtime services. Most of UEFI is tossed/overwritten when exit boot services is called. Plus, they are effectively running in the context of the OS kernel, which provides the security guarantees. AKA there is nothing gained by finding a bug in the RTC, or if someone manages to break the signing chain and provide a bogus firmware update, because the exploit happens outside of UEFI itself and doesn't provide more functionality than they already have.

                  Now, that said, you have various ME/BMC processors scattered about, and those are the ones that have frequently been exploited to great advantage. The real chuckle here is that most of the BMCs are running u-boot (or similar) firmware/OS stacks which don't tend to be upgraded, for the very reasons I pointed out earlier. So yes, your BMC gets owned over the network, and it manages to own the OS running on the main processors because it can inject things into the address space during any part of the boot/runtime. But that isn't a UEFI failing; it's a failing of the BMC vendors, who don't have a clean way to audit/control the code being built into the images. If you look at OpenBMC, it's a Yocto-based system. Which means, like Android, the vendors are on the hook for assuring their system works and having ongoing development control of the upstream trees. That all works about as well for BMCs as it does for Android.

    • zik 1896 days ago
      Most embedded systems which run Linux use u-boot as their boot loader. There's no BIOS really, just a boot loader.

      https://www.denx.de/wiki/U-Boot

    • yellowapple 1896 days ago
      I hope OpenFirmware makes a comeback here.
      • 0x1DEACAFE 1896 days ago
        Where is the "LIKE" button when I need it?

        +1

  • JohnJamesRambo 1896 days ago
    Can someone eli5 why people keep wanting to make RISC-V happen? I'm not in tech and I keep seeing posts like this.
    • pcwalton 1896 days ago
      The dominant machine languages—the language compilers compile programs to so that the code can run on the silicon—are all proprietary and protected by patents. RISC-V offers a free, open-source machine language, as well as open designs for the hardware. You can more or less download their designs for free and send them off to a factory to make high-quality processors.
    • tenebrisalietum 1896 days ago
      Intel and AMD have put backdoors in their CPUs - Intel with the Management Engine and AMD with the Platform Security Processor.

      These are separate sub-CPUs required by both platforms to boot and run. Both of these CPUs run proprietary, encrypted, otherwise-inaccessible and inauditable code that is always running and has full access to everything the CPU has access to.

      The Intel ME in particular is part of Intel's offerings that allow remote access to a system even if the operating system fails--it's accessible remotely by design, and flaws have already been found in it. Only Intel can update the ME. Only Intel knows what's in the ME and what it's doing, same for AMD and the PSP.

      I think people want RISC-V to succeed because it gives privacy-conscious individuals a chance to have a truly secure platform not 100% under control of a single company with 100% auditable code from each stage of the boot process.

      • mda 1896 days ago
        I think RISC-V does not have this goal, and an ISA cannot enforce it. There could be a RISC-V processor with worse characteristics than Intel's.
    • kabacha 1896 days ago
      It's also a much more efficient processor architecture. On paper it's the same as x86 or ARM, but RISC-V is modular and open, which means every manufacturer can extend it and modify it for specific cases. That means more efficient niche devices like small computers, tablets, robots and all sorts of cool stuff, all of them running on the same architecture!

      The fact that we're stuck with this blob of a processor architecture just sounds so archaic when you learn about RISC-V.

      Everyone should be excited for this!

      • kingosticks 1896 days ago
        However, as soon as you start modifying/extending hardware you take on enormous risk and require considerably more expertise and development time. Yes, a modular design may allow you to disable your extension (if it's bugged) and still get a saleable device, but then you've spent all those extra resources for no benefit. Not to mention the huge quantities you still need to make this financially viable. Basically, it'll cost you a lot more to do something custom when you could have just used the next model up for less. RISC-V can't solve any of this.

        I guess you could argue that you can build your more efficient hardware with an older, cheaper process and maybe still end up with something competitive.

        I don't see the tablet market being served by RISC-V, what benefits does it bring?

        Someone like WD getting onboard makes some sense but that's not exciting, they just want to save a bit of cash.

      • renox 1896 days ago
        Color me skeptical: ForwardCom or the Mill are CPU designs which bring real improvements, but to me RISC-V is a tweaked MIPS with an open ISA.
    • msopena 1896 days ago
      RISC-V is an open ISA (Instruction Set Architecture). An ISA is the "language" that the silicon speaks, and which eventually all programs are "compiled" into. Being an open ISA means anyone can implement a chip based on it for free. To this day, most ISAs are either locked down or available under a license + royalty scheme. That means you have to either buy a chip off the shelf from a vendor (either physically or as a design you include in yours) for a cost, or design one yourself.

      Again, being open means anyone can design one, and several free (as in beer) designs have popped up already. Anyone who needs a CPU in their system can pick one of these designs and use it "for free". Moreover, a company can design one such chip and then outsource support for it to another company (or the other way around), or even swap support companies if needed. The fact that RISC-V is open (and claimed to be patent-free) means there is a very low barrier to entry, which means the market offers space for more companies working together and competing at the same time.

      For lots of designs, the CPU is a commodity. You need it, but the specs are not that important, in the sense that you do not need the performance equivalent of the latest Intel chip. For those markets, having a proven design with lots of software support that is gratis is way more appealing. Another commenter pointed rightly to the "hidden" CPUs that are everywhere inside devices and gadgets. That's the market that I believe RISC-V is going to conquer quickly, because the cost of moving to another architecture is low compared to other segments where it is very expensive. Think of a PC or smartphone, which carry years and years of software developed all over the world and built against a specific ISA. Those markets are unlikely to move soon, if ever. The "hidden" CPU ones are easy, however: the washing machine software is built inside the washing machine company, which can probably build its software against a different ISA in a couple of days, if needed.

      Another important aspect is the software tools. It is costly, both in time and in money, to develop a high-quality set of tools to support a specific ISA, i.e. compilers, support libraries, optimized libraries, etc. Therefore, not all architectures receive the same "love". Some niche markets do, with lots of effort in the closed source space, which again means vendor lock-in. RISC-V, being open, is getting lots of attention from all the open source tools, and it's expected its support will be on par with other mainstream architectures like x86_64 or ARM.

      One final note is that of developers and knowledge. The fact that it is free and anyone can experiment with it means lots of universities are turning their focus to it for research and education. As the new wave of engineers comes out of university with expertise in RISC-V, it's going to be easier to hire them, and their frictionless path is going to be towards RISC-V.

    • AnIdiotOnTheNet 1896 days ago
      Well for one, techs love hype. Otherwise it is mostly the association with the word "open".
  • jeffdavis 1896 days ago
    Dumb question: is an ISA really proprietary? Like I can't make an ARM emulator without violating some IP law? Or just an ARM physical chip?
    • dejaime 1896 days ago
      Yes, the architecture is proprietary, and anyone who uses it needs a license, which sometimes involves flat payments or royalties. Check this link:

      https://en.wikipedia.org/wiki/Arm_Holdings#Licensees

    • writepub 1896 days ago
      Yes, the ARM ISA is copyrighted. To implement it, you need an ISA license. ARM typically makes this license cost almost the same as the license for an already-implemented core, which is why most vendors drag-and-drop ARM cores into their designs.
      • kevin_thibedeau 1896 days ago
        Their implementation is copyrighted. What prevents royalty-free clones is patents. MIPS was similar with certain key patents preventing clones until their recent pivot. You can clone legacy ARM architectures that aren't patent encumbered.
        • tachyonbeam 1896 days ago
          I think most posters in this thread are forgetting this point. RISC-V will have a key advantage because not only will it be free, but you'll have access to high-quality free implementations you can redistribute and remix without copyright issues.
          • pjc50 1896 days ago
            People seem to forget that manufacturing and distribution of hardware is the expensive bit. "You" is never going to be an ordinary member of the public.
            • sitkack 1896 days ago
              Making one's own small-batch RISC-V run is in the price range of a good used Nissan Leaf to a new 3 Series. That is for delivered silicon.

              FPGA implementations start at $5.

              Individuals “waste” both of those amounts of money all the time.

      • FPGAhacker 1896 days ago
        But does this cover software emulation?
        • wmf 1896 days ago
          QEMU exists so I guess they're OK with software emulation but Arm seems to go on the attack at the slightest whiff of an unlicensed hardware implementation.
          • mcbits 1896 days ago
            Are they really OK with it or are they just legally powerless to stop it? That is the question.
            • bluejekyll 1896 days ago
              It may not be in their interest to stop it, as those are generally used for testing and building software to target ARM, which helps with their potential market share.

              Similarly, MS offers free VMs to developers for Windows so that you can build and test against Windows without cost.

            • ohazi 1896 days ago
              QEMU is used by software developers to build better software/infrastructure/tooling for the chips that ARM gets royalties from. It would be stupid for them to put up a roadblock here.
        • monocasa 1896 days ago
          That's grey area.

          Intel has publicly asserted that their patents apply to emulation, but it hasn't hit the courts AFAIK.

          https://arstechnica.com/information-technology/2017/06/intel...

  • IshKebab 1896 days ago
    I'm still unconvinced by the modular nature of the ISA. Are we going to end up shipping fat binaries with a dozen binaries for all the different variants that are allowed?

    On x86-64 there's a reasonable minimum supported set of instructions, and you only really need runtime CPU detection for the newer vector extensions.

    • audunw 1896 days ago
      I wouldn't expect server/desktop/phone variants of RISC-V to drop any of the current extensions of the ISA. Why would you, if gate count isn't what you're optimising for?

      There could be further, more advanced extensions that may not be in every processor, but that's just the same as x86.

      The modularity in the ISA as it is now, is something that's more geared towards embedded devices or FPGAs, where you'll be compiling code specifically for that target anyway.

    • kabacha 1896 days ago
      Easy solution: Just GPL it - every extension of RISC-V has to be open as well.
      • IshKebab 1895 days ago
        I don't see how that is a solution. The problem is the variability in what the hardware implements, not the availability of extension specifications.
  • lsllc 1896 days ago
    All that's missing now is an affordable Linux RISC-V development board.
    • epx 1896 days ago
      Expecting some sort of Riscberry pi to spend my money on
    • drudru11 1896 days ago
      I would like to buy SiFive’s Unleashed dev board. It has everything I need: RV64GC, GigE, 8 GB ECC RAM, and a USB UART. I just won’t pay $1000. I wonder what the economics would have to be to get it to $300.
      • imtringued 1896 days ago
        Below 100 thousand units it doesn't make any sense to have custom silicon. The ARM SBCs are all based on existing TV boxes, including the original Raspberry Pi so you are unlikely to see a development board before a proper laptop or NUC-like desktop is released.
        • 0x1DEACAFE 1896 days ago
          meh. i think the unit count is down to about a thousand. also strongly dependent on what node size you use. 180nm and 350nm are pretty cheap these days. but you're probably only off by an order of magnitude if you're talking about 28nm (which is what the SiFive HiFive Unleashed chip uses.)

          but... your point was probably more along the lines of "getting custom silicon is still going to need a market of greater than several thousand," and that's probably always going to be true.

      • ahmedalsudani 1896 days ago
        Oof that's steep. Assuming ARM doesn't try to kill RISC-V, I imagine the economics will improve significantly over the next few years. Probably larger than 75% drop for similar specs over two years as the low hanging fruit has not been picked yet... But it's important that there's still demand for RISC-V, which is where ARM might be able to choke it off.
    • nickik 1896 days ago
      If you don't need a real Linux-style OS there are tons of such boards available already: from SiFive, GAP8, OpenISA, PULP and so on.
      • lsllc 1896 days ago
        I have the HiFive 1, but it's just an Arduino equivalent. Fun for a little bit of low level hacking, but it's a RPi or BeagleBone Black equivalent that I'm looking for.
        • nickik 1896 days ago
          I didn't read 'Linux' in your text. Sorry. Hope something comes out soon.
    • tasty_freeze 1896 days ago
      • neuralzen 1896 days ago
        I think they mean x86, as those dev boards are $1k.
    • dejaime 1896 days ago
      In hopes for a RISC-V Pi
  • walrus01 1896 days ago
    Where can I buy a motherboard and CPU, in super low volumes, right now?
    • reportingsjr 1896 days ago
      https://www.sifive.com/boards/hifive-unleashed

      You have to get the extension board if you want PCI-E, SATA, and M.2 connections, but it doesn't look like there are any available right now.

      • walrus01 1896 days ago
        A thousand bucks? Seriously? This is why the Qualcomm Centriq ARM server CPU never caught on for use with Debian, CentOS, or any mainstream distribution. Same problem: unavailability of a normally priced CPU + motherboard.

        I'll check again in six months...

        • baobrien 1896 days ago
          The HiFive board really isn't aimed at the hobbyist or datacenter. It's a dev board with a low volume part made on a high end process. The goal is more to get silicon in the hands of people who want to work with SiFive cores and those who are working on the RISC-V software ecosystem.
  • BrandonMarc 1896 days ago
    Even ESR himself praised them a short while ago.

    http://esr.ibiblio.org/?p=8242

  • steamport 1896 days ago
    Open source CPU?

    where can i buy one :O

  • openloop 1896 days ago
    You asked for it.