Is there something inherently complicated about adding a SATA/M.2 port to a board like this?
The Raspberry Pi is also "disk-less", which to me is one of its major limitations.
It's a super interesting little board, and I love that it's RISC-V; that could really help get the CPU into people's hands. I just don't know enough about these things to understand why there are no storage connectors (other than an SD card slot).
On the systems I control, I do all work "disk-less". I strongly prefer it. I like to keep programs and data segregated.

The basic setup on NetBSD is as follows, with endless variations possible. The kernel(s) and userland(s), along with bootloader(s), are on external media, like an SD card or USB stick, marked read-only. There's a RAM-disk in the kernel with a multi-call binary-based userland (custom-made using BusyBox or crunchgen). After boot, a full userland from external media or the network is mounted on a tmpfs-based or mfs-based overlay and then we chroot into the full userland. From there, the work space is all tmpfs (RAM).^1 I can experiment to my heart's content without jeopardising the ability to reboot into a clean slate. The "user experience" is also much faster than with a disk-based setup.

Any new "data" that is to be retained across reboots is periodically saved to some USB- or Ethernet-connected storage media. I do not use "cloud" for personal data. Sorry. This forces me to think carefully about what needs to be saved and what is just temporary. It helps prevent the accumulation of cruft.

The number one benefit of this for me, though, is that I can recover from a crash instantly and without an internet connection. No dependencies on any corporation.
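The post-boot sequence above, as a rough sketch. This is not my actual configuration; the device name (sd0e), mount points and set path are placeholders:

```shell
# Run from the ramdisk userland after the kernel is up.
mount -r /dev/sd0e /media                        # read-only SD/USB with the full userland sets
mount -t tmpfs tmpfs /altroot                    # RAM-backed root for the full userland
(cd /altroot && tar xzpf /media/sets/base.tgz)   # unpack the full userland into RAM
mount -t tmpfs tmpfs /altroot/tmp                # work space: all tmpfs
chroot /altroot /bin/sh                          # continue working in the full userland
```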
This BeagleV looks like it would work well for me as it has Ethernet and USB ports.
1. With NetBSD, I can run out of memory with no major problems. With Linux, this has not been the case. I have to be more careful.
I disable swap. If I run out of RAM, thrashing may occur but it is not fatal. Some process that needs to write to "disk" might fail, but the system does not. I just delete some file(s) to free up the needed RAM, restart the process and continue working. I don't think NetBSD has anything like "OOM killer".
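For reference, going swapless on NetBSD is just a matter of not configuring any swap. A sketch, with an illustrative device name:

```shell
# In /etc/fstab, comment out (or never add) the swap line:
#   /dev/wd0b  none  swap  sw  0 0
# Then verify at runtime that nothing is configured:
swapctl -l    # should report no swap devices
```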
Note that one drawback with "disk-less" via read-only SD card or USB stick is how to have a good source of entropy at boot time.
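One common workaround, sketched here with illustrative paths: keep a seed file on some small writable medium, load it early in boot, and refresh it at shutdown. NetBSD's rndctl(8) supports this directly:

```shell
# At boot, before anything needs randomness:
rndctl -L /writable/entropy-file    # load (and invalidate) the saved seed
# At shutdown, or periodically:
rndctl -S /writable/entropy-file    # save a fresh seed for the next boot
```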
I actually came up with this myself just playing around with NetBSD. You will probably not find anyone advocating disabling swap. I do not run X11 anymore. I stay in VGA textmode. Doubt anyone would want to do exactly what I do.
NetBSD users tend toward DIY and each has their own preferences. If you study how NetBSD's install media are created that will teach you almost everything you need to know. Happy to walk you through it though if you want to try NetBSD.
>I actually came up with this myself just playing around with NetBSD. You will probably not find anyone advocating disabling swap.
Actually, I recently saw some discussion about exactly that. I think many people would be glad to do it if they knew how. It is not easy when you are relatively new to the area. It is not too complicated, but there is a lot of information, and it is not obvious what is important and what is not, so you read everything and easily get overwhelmed.
>I do not run X11 anymore. I stay in VGA textmode. Doubt anyone would want to do exactly what I do.
>If you study how NetBSD's install media are created that will teach you almost everything you need to know. Happy to walk you through it though if you want to try NetBSD.
Thank you very much. This advice is already precious, because it helps to orient yourself in tons of info, and now I know where to start digging. I would love to try once I have the equipment and time. I really wish to encourage you to write this process down; I am sure there are people desperately looking for this setup and how to achieve it properly, bothered by many questions. This guide could also be a good practical introduction to NetBSD, by the way.
>Chromebooks are claimed to be 100% safe from certain types of attacks. The developers came up with this silly "whitewash" gimmick. Their motivation for a disk-less-like system is to force users to store personal data in the cloud. Ugh.
It's a shame, really, and done for the wrong purpose. I never considered Chromebooks seriously for that reason.
What I really like about it is the ability of fast recovery, which gives more freedom for experiments. I love it.
Note I did all the experiments many years ago, on i386, and using NetBSD's venerable bootloader. For the Pi, some things will differ obviously.
Not only does this type of setup give freedom for experiments (which NetBSD really does in general) but it makes experiments easier. You can put multiple different kernels on the USB stick and reboot into each of them to test the pros and cons of different release versions and/or configurations. Or you might boot one computer with kernel A, pull out the stick, insert it into another computer, boot kernel B, and run them simultaneously. No HDDs needed.
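With NetBSD's x86 bootloader, the multiple-kernels trick is just a few lines of boot.cfg on the stick. Kernel names here are illustrative:

```
menu=Boot 9.2 GENERIC:boot netbsd-9.2
menu=Boot -current:boot netbsd-current
menu=Boot single user:boot netbsd-9.2 -s
timeout=10
```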
I might do a writeup; some users have done writeups on "disk-less" in the distant past. I have some simplification tricks no one has ever written about. However, NetBSD has such great documentation relative to almost every other project, and the developers tend to be "quietly competent" in the best way. It was never a culture of "HOW-TOs" as you will find in Linux. Studying the source and following source-changes and other mailing lists is better than any writeup, IMO.
It is like the old saw about teaching a man to fish. It is worth learning. Things do not change in NetBSD so fast or dramatically that what you have learnt will be later considered "obsolete".
I would agree that following source-changes is better than any writeup, given time and dedication. I like the culture you describe, and it could fit very well with how I like to do things. I use those "how-tos" just as quick tutorials to get into new topics, and then I go deeper. It is easier to learn when you actually do something; if there is no one around who can introduce me to the topic properly (which is much faster and better, of course), then I use some "how-to" for that. Writeups can help too, so if you know some good ones, they're worth mentioning.
Sensible spec. The price isn't outlandish for what is basically a desktop with a relatively exotic CPU. 16GB of RAM is the minimum for development in my view, even though you can do a lot in 8GB.
I'm sure people will start comparing it price-wise to an i7 or something made at significant economies of scale. I think that is an unfair comparison due to the exotic CPU; exotic in the sense that these aren't commodity CPUs.
16GB DDR4-2400 DIMMs run about $67 retail. So about 1/10 of the total cost of that board. It's a little apples and oranges, but since it seems like they just plunked down the same 8 chips you'd find on a DIMM right onto the board, maybe not that much.
I note that it has a PCIe x8 slot, but they don't have drivers for a video card (or any card) yet so it's kinda useless.
You would have to integrate a storage controller into your design and verify that it works. It's not insurmountable, but it's engineering time that has to go toward something that a lot of people aren't ever going to use, along with an increase in BOM cost.
My pet peeve about all these small single board computers is that none of them have multiple ethernet ports, which severely limits their usefulness as networking hardware. They'd otherwise be very well suited to being various kinds of packet routing appliances.
You can sometimes hang a USB Ethernet dongle off them, but performance on those tends to be somewhat limited.
Check out solidrun, they have a variety of boards with multiple Ethernet ports.
I used to wonder why embedded boards didn't have multiple Ethernet ports and why it's not common to use Ethernet to connect to peripherals instead of "old interfaces". Until I tried it for a project. It turns out Ethernet uses roughly 0.5 to 2W per port depending on speed (that's 1-4W per connection, counting both ends).
Perhaps what you want is a range of boards with many combinations of features. Some with 4xethernet, some with 2xethernet, some with PCIE and 1xethernet, some with M.2 and PCIE, some with M.2 and 2xethernet, etc. to fill out that big matrix of combinations. But is the market big enough for such a huge range to be economical?
Some of them have PCIE which you could connect a network card to. That seems like a more practical way to allow flexibility than having a lot of special purpose boards.
The support for PCIe cards is very limited on that board. I've spent too much time trying to get a disk controller to work, and I'm not alone. I ended up buying old server boards with low power xeon processors to accomplish my goal.
Why would you need SATA when you have SD, also USB3 and Gigabit Ethernet? I have no problem booting from the SD then accessing data on my SATA drives using a USB-attached controller, also plan to get a NAS.
Sure, it would be great to have connectors for everything and support for every cool standard, but these guys have to give up everything non-essential to keep it small, cheap, and possible for a small team to engineer.
I wish folks would stop conflating boot media and root media. There's no reason your SD card has to consist of anything more than u-boot.
SD is fantastically simple, so that the boot rom can get at a bootloader without much effort (generally the bootrom just looks at a memory offset on the mmc device). Once you start speaking newer faster protocols, this simplicity is lost. You're not likely going to find a bootrom that implements all the bits needed to get u-boot from a sata device.
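As a concrete (and hedged) illustration of that "memory offset" point: on many Allwinner SoCs, for example, the boot ROM reads raw sectors starting 8 KiB into the SD card, so u-boot is written there directly, with no filesystem involved. The offset and image name below are SoC-specific:

```shell
# Illustrative only -- check your SoC's boot ROM documentation.
# /dev/sdX is the raw SD card device; this overwrites it.
dd if=u-boot-with-spl.bin of=/dev/sdX bs=1k seek=8 conv=fsync
```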
In a perfect world, once these are no longer developer devices, the mmc would be replaced with some spi flash (or even an emmc) with just u-boot.
On 90% of embedded dev systems, you're better off thinking of the SD card as a "BIOS chip" than a hard drive. The fact that you can also use it as a block device to store a rw filesystem is almost incidental, and should probably be avoided.
SD cards speak SPI. In fact SPI is all SiFive supported on their first 2018 HiFive Unleashed. As you say, it's good enough to bootstrap. That old board has gigE. Once you're up enough to TFTP you're away. I've never actually bothered to set that up on mine -- I boot a full kernel with NFS support and then switch to that.
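The server side of a TFTP-then-NFS boot like that can be as small as a single dnsmasq invocation. Addresses, paths and the image name here are illustrative:

```shell
# Hand out addresses, serve the kernel/bootloader over TFTP,
# and point DHCP clients at it.
dnsmasq --enable-tftp --tftp-root=/srv/tftp \
        --dhcp-range=192.168.1.100,192.168.1.150 \
        --dhcp-boot=netbsd-kernel.img
# The NFS root itself is exported separately, e.g. in /etc/exports:
#   /srv/nfsroot  192.168.1.0/24(rw,no_root_squash)
```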
The existence of the Raspberry Pi as it is, the Raspberry Pi 4 in particular, is already a huge leap forward for humanity. The next leap is going to be the same kind of board but with no connectors besides an increasing number of full Thunderbolt 4 ports, letting you connect anything you can imagine (including as many GPIO connectors as you need). There is no need for a zoo of connectors like M.2, U.2, SATA, HDMI, etc. when we can carry PCIe, DisplayPort and USB over a unified standard wire.
By the way, switching to a single super-fast (for the time) connector for everything (including internal hard drives) was already anticipated before the invention of SATA and even USB2 - FireWire (IEEE 1394) was meant for that.
There's value in having GPIO pins not behind any bus or controller at all. Don't Thunderbolt controllers need firmware uploaded and such before they start working? How complex is Thunderbolt device and bus enumeration?
Perhaps you're right. In fact I was going to write about connectors of three kinds: GPIO, dedicated fan connectors, and Thunderbolt-enabled USB-C, but then I came to the conclusion that you can also put a GPIO controller or a fan on Thunderbolt, so Occam's razor suggests we should leave only Thunderbolt, because we can. Apparently Apple thinks the same, as it left only USB-C connectors on its recent laptops.
CFast is CompactFlash, but with a SATA-based interface (up to ~600 MByte/sec) instead of IDE. It was designed because video cameras were making too much data for CF to handle. The downside is its size: 43x36x5 mm vs 15x11x1 mm for microSD.
Also, CFast cards aren't as ubiquitous as microSD. I can go to Target or Walmart and get a wide "selection" of microSD, but I'd be hard pressed to find a CFast card. So the chances of an SBC using CFast over eMMC or microSD are low.
Using f2fs, setting up the software on my SBCs not to write data needlessly to the SD card all the time (some programs are really bad at this), and having a very high quality power supply and cabling was enough to make my boards work for years on a single SD card. Some are at 4 years currently and still work fine. I've been using SanDisk only for a few years now, because it's the only manufacturer that both lets me verify online whether the card I bought is genuine and provides A1-rated cards. This is from experience with about 20 boards.
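For anyone wanting to replicate the write-avoidance part, a sketch of the relevant /etc/fstab lines (device names and sizes are illustrative):

```
/dev/mmcblk0p2  /         f2fs   defaults,noatime       0 1
tmpfs           /tmp      tmpfs  nosuid,nodev           0 0
tmpfs           /var/log  tmpfs  nosuid,nodev,size=64m  0 0
```

Pointing chatty log writers at tmpfs keeps them off the card entirely, at the cost of losing logs on a power cycle.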
I guess too little too late. None of the 4 dead Samsung cards I could find have the V mark (too old for that), required for validation, and they probably started adding support for regular EVO cards just a year ago or so.
I have a special SD card made up that has all of the correct boot configs set and a small script to update everything and set up the Pi for PXE boot. After everything is configured, I take out the SD card and let it just pull boot images over the network. My main home server serves everything up over tftp. The only downside is that you can't get this to work over wifi. The wifi Pis get read only SD cards that are all configured the same so it's easy to reimage a new one if the card dies.
It costs more upfront, but it's worth it because I don't have to spend my time investigating why it's not working.
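On a Pi 4, the boot-config step above lives in the EEPROM rather than on the card (older Pis do this differently, via an OTP bit). A hedged sketch of the relevant setting:

```shell
sudo rpi-eeprom-config --edit
# then set the boot order, which is read right to left, one nibble
# per boot mode:
#   BOOT_ORDER=0xf12   # try network (2) first, then SD (1), then restart (f)
```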
Also, when comparing costs you should take into account things like the enclosure, power supply, SD card(s), etc.
I'd say I probably paid around $250 or so for the NUC and a SODIMM and I had a SATA boot disk already. I had previously owned ODROID, libre computer (le potato), and Pi(s). These were probably in the ballpark of $80-$120 each, when including all the necessary equipment.
I suspect they don't necessarily mean spinning platters. A SATA port also allows the connection of an SSD which is almost always a huge boost in usability over SD slots. The fact that you can add a large platter for a small NAS solution is just a nice option.
I'd far rather see a 2x PCIe port that could be used for whatever you wanted.
I think it's just that most people want to use the Raspberry Pi for toy projects. They're trying to keep the cost down, and to keep the board small. If you want something like micro ATX ARM motherboard, that's a different use case. The large majority of Raspberry Pi users are fine with USB 3.0 ports for I/O.
A Pi 4 Compute Module with 8GB of RAM instead of 1GB (faster DDR4 instead of DDR3, too) and 32GB eMMC and a quad-core 64-bit CPU with much higher IPC, while only costing $90, which is around $50 cheaper than a BeagleBoard AI (the same comparison holds for the Black, too). You also get direct PCIe access to do what you want.
Beagle just doesn't offer anywhere near the same value.
Two different targets, I have used both.
I have so many 5V power supplies sitting around, they get in the way. Including 12V/5V buck converters for auto stuff.
You can configure for headless when you set up the SD card before you ever boot. I have never booted a Pi with attached display/keyboard.
The Pi is cheap, for a reason, but it is still more powerful than anything I use it for.
But the same goes for the Beagle Board.
Choose the board you want, for the project you are doing.
Note: For every RPi project I have done, I have done two or three ESP8266/ESP32 projects along with five or six AVR/M0/M2 Arduino projects.
> It doesn't have the same ecosystem. Which is half the point of the rpi to start with.
Now that's a valid argument. And I would argue it's way more than half.
If you want a one-off mumblesomething and can stay at the Linux OS level (i.e. a web application, USB peripherals, or maybe the most basic of GPIOs), the RPi ecosystem is going to let you get there much faster, even though it will crash and burn occasionally. If that's "good enough" ... go ahead! ... good luck! ... get moving and get going.
I have the same comment about Arduino. It ain't real reliable (but, to be fair, it's quite a bit more reliable than the RPis), but the ecosystem is awesome.
However, when you start asking something like "Gee, how do I send a single address byte over the I2C subsystem?" or "That signal needs a response in 50 nanoseconds, can I make that work?" you will thank the TI folks for producing that 5000 page (not joking or exaggerating) Technical Reference Manual.
One other thing that people who live in the RPi system always overlook in the Beagle ecosystem are the PRU cores. You can do HARD real-time work on those and still live in the Linux world. That's something that the RPi series just simply cannot do no matter how much you hack at them. And it often means the difference between a design which needs an FPGA and one that doesn't.
However, yes, you are going to live in that Technical Reference Manual for the Beagle series. If you're not comfortable doing that, then the Beagle stuff probably isn't for you.
There are really good reasons to use and love RPi's--ease of use is huge. But the whole "It's cheaper" thing just chaps my hide.
I'll take your word for it, but it sounds like a very different use case than what most people use Pis for.
When you spend that much time with your device cost isn't a factor (imho). But the pi is cheap enough that I can gift one if I know the end result will work well. And it is quite nice to just have one lying around for when you get an itch.
Using netboot, I have never had an issue with the Pi. All the reliability problems I've experienced can be tracked down to bad power and SD cards (note: I have not used any I/O other than USB). Both are annoying enough to look for alternatives, but both are also fixable (netbooting likely disqualifies it for many use cases, though).
Minimizing writes to the SD card usually means they work at least several years untouched (in my experience, obviously a limited sample size), which is likely good enough for many (a true read-only system might work even better). But it is still frustrating when the audio streamer in the vacation cabin dies while you aren't around to fix it. So I'm now figuring out how to move the last of them to netboot. Which has some very nice side effects as well, such as trivial remote system backups and restores.
It also comes down to time. I know the pi and the pitfalls. Researching an alternative would take many hours and I'd still have to experience some of them. If the goal of the project isn't to learn a new sbc then that alone is a dealbreaker.
That said, the PocketBeagle seems to support wifi netboot (which neither the Pi nor the Pi Zero W does). Might be able to find a project for that!
Practically they end up being really different. SD cards end up using bottom-of-the-barrel flash chips (even from the 'good' brands), whereas eMMC uses flash good enough that it can get away with being non-replaceable.
You want "industrial" cards from a vendor tailored to the space. I wouldn't trust even those new industrial sandisk cards.
Yeah as someone who has deployed SD-based and eMMC-based systems in industrial spaces, SD cards are hot garbage compared to eMMC and I would strongly recommend not using them in anything that needs to be reliable.
What you're asking for does exist in the industrial market (and I'm skeptical of those 'industrial' branded sandisk cards). They still use pseudo SLC (storing one bit in every flash cell as 00(0) or 11(1) and correcting up or down on read back). They also will work with you so you can track BOM changes as they internally change the cards so you can track failure rates. They also give you access to tons of internal perf counters so you can do preventative maintenance (ie. swap the card if it's getting long in the tooth). They also just generally treat you as a business partner rather than a consumer, wrt support and what have you.
It costs a pretty penny though. $40 for a 2GB non micro card was the last quote I got.
I'm certainly no expert, but I've never been able to run a spinning disk HDD off of a raspberry pi without a powered USB hub. I imagine the thermals and form factor would be hurt pretty badly by including a plausible power supply.
RPI 4 -- the original revision. Whenever I ran it off the "official" power supply, I got disk errors. As soon as I moved over to a powered USB hub, I had no other issues. This was also with a keyboard and mouse plugged in, so maybe that had something to do with it.
I was also trying to run it as a NAS, so it had a decent amount of IO.
Maybe it could have been fixed with a better power supply, but switching over to the powered hub was easier to figure out.
Honestly even with a high quality supply I'd tend to prefer if the drive were externally powered. I have flashbacks to the first generation Pis that would brownout and reboot if you plugged in a mouse or keyboard while it was running.
If I had to guess (and this is just a guess, so take it with a grain of salt), I'd bet most storage devices these days are just thin wrappers around PCI Express lanes. Most of the silicon running that stuff is on-die in AMD/Intel CPUs, and you likely run into cost/power/board-space limitations in a device like this.
I've been unable to use Beagle boards in the past as they ship with an old kernel and uboot without the sources to update or config them (this was specifically with the black variant). It probably had something to do with vendor NDAs with chipsets or something but it made them entirely unusable to me and more expensive than competitors by almost 2x to boot.
I would love a RISC-V board to play with that is a bit more stable and about the size of a Raspberry Pi within a reasonable price range. The SiFive development boards are pretty pricey, definitely showcase hardware (look at all these peripherals or this is basically a desktop computer!).
I'm hoping with the explicit call out of open hardware and open software that this board won't have the same issues as the Beaglebone Black...
It's pretty easy to use to build an image for Debian Buster/Stretch or Ubuntu Bionic Beaver; he has various configurations that cover IoT, console-only/headless, GUI, and a few other combos. It's pretty easy to create your own config with the kernel and packages that you want.
The images can be used for flashing to the eMMC via an SD card (or via USB).
I've found images built this way to be very up to date and absolutely rock solid, thanks to Robert's curation.
I've never had this problem with Beagle Boards and sources. Sure the kernel or u-boot that ship with them might be slightly older but sources have always been available.
And TI are quite decent at contributing to the upstream kernel and u-boot trees. Generally only a few months after a new TI SoC is announced there's enough support in the upstream u-boot and kernel to boot the board and do some useful things. Generally by the time silicon is buyable by mere mortals mainline is in pretty decent shape.
The biggest reason I used them in production devices (and still use them at home) was the eMMC. Which made it well worth the price, even with the slow processor.
We had to work with the Balena.io (formerly resin.io) team to get full hardware support on the BeagleBone Green Wireless (wifi drivers were the biggest hangup, iirc) a few years ago, but they were incredibly responsive and have done a fantastic job maintaining a stable distro for these boards.
If you want a straightforward out-of-the-box experience, I highly recommend Balena.io.
The board I have has only 4GB of MMC on board which wasn't big enough for the later versions of the OS, but you could boot off of the SD instead if you held down a button on the board while powering it up.
There was a way to tweak it so you didn't have to hold the button down, but it was kinda involved IIRC so I never got around to it.
> Although the first hardware run will be entirely $140 / 8GiB systems, lower-cost variants with less RAM are expected in following releases.
> The initial pilot run of BeagleV will use the Vision DSP hardware as a graphics processor, allowing a full graphical desktop environment under Fedora. Following hardware runs will include an unspecified model of Imagine GPU as well.
Sounds like a direct competitor to the Raspberry Pi. I don't know if the Imagination GPU planned for the next iteration is playing catch-up or leapfrog. The Ars Technica article links to "SiFive creates global network of RISC-V startups", which I think demonstrates that SiFive is strategically leveraging or responding to the geopolitics surrounding Chinese technology.
Imagination GPU :( Notorious for being hard to support in open source. I'm not even sure there was a single free driver for those.
That likely means those devices are going to be stuck on an outdated kernel, unless Imagination steps in and provides ongoing binary support for newer kernels for their GPUs like x86 GPU manufacturers do. However, this being RISC-V with 2 existing devices total, I don't count on it.
Except for cost... which has been a problem for the BeagleBoard line of SBCs since the beginning. They actually predated the original Raspberry Pi by a couple of years, but when the Pi came in at ~25% of the cost, it quickly caught up with and overtook the BeagleBoard in popularity. The BeagleV looks interesting from an early-adopter standpoint, but the hobbyist market will probably standardize around whatever decent RISC-V board comes in at sub-$50 first.
To me, they seem to serve different markets. The various BeagleBoards have more industrial specs like a wider operating temperature range, on-board EMMC, etc. Also, the pair of PRU's make them useful for things where more precise timing is important.
Regarding the BBB/BBG, in the last 3-5 years the RPis have gotten significantly faster (RPi 3 & 4) and gone 64-bit, whereas the BBB & BBG haven't changed much (aside from a bit more eMMC and a very minor CPU bump) since they were launched. These days the 1GHz 32-bit AM3358 (BBB Rev C) is comparatively much slower, and with only 512MB RAM, that's a lot less than a stock RPi 4.
Having said that, the BBBs are a great device! They're rock solid and have far better I/O options than the RPi: 4 UARTS, multiple I2C, SPI & CAN buses, EHRPWM, a ton of GPIO, 2x PRU processors, LCD driver, both USB and USB Gadget, oh and of course, the onboard eMMC is great compared to booting from an SD.
Indeed, the geopolitics works both ways. I think the Chinese are looking at RISC-V as a safe-guard against American embargoes of the kind that killed/maimed HiSilicon, the non-Chinese nation-states are looking for full transparency of silicon design, and the manufacturers want full access to a truly global market that includes China. I'm not sure that SiFive RISC-V designs can be competitive with ARM/x64 in the short-term but the geopolitics creates a potential niche.
I think you are conflating assembly and manufacturing. TSMC is Taiwanese and Samsung is South Korean. Personally I'd prefer that all nation-states and their security organizations followed the Golden Rule and promoted free trade rather than protectionism.
> made with slave labor from congo, china and other countries
I don't equate low-wage manufacturing/assembly with exploitation and certainly not slavery, but I understand that this is a common metaphor. Contemporary slavery is a real thing and, until I see contrary evidence, I'm assuming it makes zero contribution to high-tech assembly or manufacturing.
We have a minimum wage in the US, and even that is not equal across all states. We do not have UBI or universal health care, we have shitty industries, such as insurance (forced hedge funds), and in general we brainwash people into accepting it as normal.
YC even wants you to think that VC is altruistic. NOPE.
Highly unlikely to be very competitive. At this stage it's about getting RISC-V hardware into developers' hands, and previous boards either cost $1000 or were limited in ways where they could not run Linux well. (I am one of the Fedora/RISC-V maintainers.)
I have a couple on order, and I've talked to one of the developers. It looks nice - PCIe, NVMe SSDs, mini-ITX format, 16GB RAM, more cores, etc - but not at the same price point or market segment as an SBC. We will likely buy a pile of them to do Fedora builds.
Fedora has been shipping on RISC-V for about three years already. Last I saw, around 95% of packages work. The main exceptions have been things that need some JIT that hasn't been ported yet -- gcc and llvm have been working for years.
Why would you want to use these as build machines? It seems more efficient to just cross-compile on your fastest build machines. You get much faster CI feedback that way. Obviously you want to validate RISC-V code on native devices, but using them as builders seems wasteful.
I'm genuinely curious why that's desirable. Maybe I just misunderstand your comment and you want the RISC-V boards for validation i.e. make sure you can self-host the distro even if in general releases are cross-compiled on your (I assume AMD64) build fleet.
I'm sure the Fedora builds aren't designed for cross compilation (which is far from trivial for most packages not designed for it). Also, man-power is the most precious resource so it would be a waste to spend time trying to cross-compile what could be built natively.
As long as you've got the target toolchain on the host, the code has no idea it's being cross compiled. If you've got the tooling set up it's a lot of setting of envars. It's not trivial but it's also not super difficult. A compiler running native on an architecture doesn't produce any different output than the same compiler doing a cross compilation.
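For a configure-style package, that "setting of envars" amounts to something like the following (triplet names are illustrative, and a riscv64 cross toolchain is assumed to be installed):

```shell
export CC=riscv64-linux-gnu-gcc
export CXX=riscv64-linux-gnu-g++
./configure --build=x86_64-linux-gnu --host=riscv64-linux-gnu
make -j"$(nproc)"
```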
Even if you wanted to run a native RISC-V build process, running it in QEMU on a much more powerful AMD64 system will get you the same results in a fraction of the time. This gets much faster feedback in the CI system.
The older HiFive Unleashed (1.5 GHz single-issue) builds things considerably faster on a per-core basis than qemu-system on current amd64 (or at least did in the Skylake generation). The dual-issue cores in the HiFive Unmatched should be around 50% faster.
amd64 machines do have the advantage that you can get them with 32 or 64 cores and hundreds of GB of RAM -- at a price, especially in power consumption.
The three year old HiFive Unleashed uses around 5W to 6W at the wall when building flat out, considerably less than the maybe 25W a quad core i7 or Ryzen will use, with lower performance running qemu-system.
The new HiFive Unmatched probably has the same or lower power consumption, at 50% higher performance.
Running qemu on a Pi 4 would use about the same power as the RISC-V chip but be waaaaaaay slower.
That's running qemu, which is unnecessary for cross compiling. I'm not seeing any advantage to building on native silicon rather than cross compiling on existing AMD64 machines. Even though the power usage of the AMD64 chips is higher, the faster compilation will likely mean less overall energy used for the builds.
I fully understand wanting native silicon to do development on and to work on some architecture specific branch. It's easy to just recompile your working copy locally. The original poster was talking about "build machines". I'm not seeing the utility as "build machines", especially if they have far less power than AMD64 machines.
There are many packages that compile some kind of tools and then run them to produce other things necessary for the build, possibly several layers deep.
In theory you can set up the build system to correctly know which things should be compiled native and which should be cross-compiled. Sometimes you even need native and cross-compiled versions of the same thing.
In practice, this is a capability that few people use and even if everything is set up correctly at some point most people contributing patches do not think about or test maintaining the ability to cross compile and it gets broken.
When you're building hundreds or thousands of packages for a distro such as Debian or Fedora this is a huge problem.
Building natively in a full system emulator or on real hardware is the only way, in practice.
Only for very simple systems. Many builds are complicated, producing artifacts that are executed to produce other artifacts (which are themselves compiled). As arbitrary binaries are executed during the build, and correctness may depend on native execution, this is far less trivial than you might think. Have you ever compiled Emacs (at least version 18)?
Building under QEMU is the obvious option; native hardware, however, is actually faster.
> All GPIOs can be configured to different functions including but not limited to SDIO, Audio, SPI, I2C, UART and PWM
Does supporting audio mean that these GPIOs can be used as analog-to-digital converters? There are home automation applications where reading a voltage in an Ethernet-connected device is a good fit, but from what I've seen, on a Raspberry Pi that requires extra hardware connected to the IO ports.
As an aside, I really wish that these boards would include more than two PWM outputs. The Raspberry Pi has two as well, and it feels really limiting. Analog control instead gets farmed out to microcontrollers, when you could probably make it work with a single board if there were more pins to work with.
I used to love the beaglebone for that. Run application logic on the main arm core, farm the microcontroller stuff out to the embedded PRU microcontrollers (which could access all the IO functions). Still a single board solution.
Honestly, I'm still mostly getting started in the area as well, but I'm looking to use them for motor control. An example is this paper, which I'm hoping to replicate to some degree, but which requires at least 4 PWM outputs. I'm currently planning on using an Odroid-C4, which has 6 PWM outputs.
But the work I've needed to put in to get a motor controller working was way more than it would have been for an RPi4b, since libraries already exist for the Pi, versus needing to rewrite them for the Odroid board. It would have been cool to try it with an open-hardware board as well. But it's not just quadcopters -- a lot of other projects like rovers, or position control with multiple steppers on different axes, benefit from extra PWM, and doing it in software can lead to too much jitter, so the hardware timers are necessary.
Eh, a decent audio DAC pretty much requires another chip since it takes up a ton of die space, so you'd be an absolute fool to use the same process node as your logic. Since this looks to be beagleboard compatible, you should be able to use an audio cape like this https://www.element14.com/community/docs/DOC-67906/l/beagleb...
If you don't care about it being decent, you can just use the PWM channels like the RPi does.
That's the extra hardware I mentioned. I guess I wish the RPi just included one of those chips and had say 4 to 8 pins for ADC. It would make things simpler versus having to find something already well integrated or building it.
Looks like their Hard IP is proprietary but based on open high-level designs such as Rocket and BOOM. The peripherals situation is mixed, but they've stated in the past that they're quite OK with using open designs whenever feasible.
SiFive has E-series (aka "Freedom Everywhere") and U-series (aka "Freedom Unleashed") cores, both of which seem to be based on Rocket. And they do provide high-level designs for both on their GitHub, under a free license.
Western Digital's open sourced SweRV cores are approximately the same performance as the closed-source SiFive 7-series ones.
The main difference is the WD cores are 32 bit, no MMU, probably DTIM rather than cache (not sure).
The guts of the dual-issue decode, register file, and pipeline are there, which is the hard part. The other bits could be relatively easily added by the open source community, probably cribbing components from RocketChip.
Open source refers solely to the software side of things in these groups and types of projects. Thus, all of the drivers and software you need to use the SiFive cores are open source, and in that sense you can call it an open source design. However, it is not an "open hardware" design in that the IP used to design the chip is not released.
> However, it is not an "open hardware" design in that the IP used to design the chip is not released.
The marketing page for the BeagleV specifically says "Open Hardware Design", but I agree that they probably didn't mean Open Hardware beyond just the PCB layout or something similar.
It would be very surprising if they released the detailed schematics for the SiFive cores. Until there's some kind of common micro-fab standard where every major university can have a legit semiconductor fab for small scale operations, giving people the design doesn't really do much.
I really wish that someone would work on the problem of making affordable, small-scale semiconductor fabrication possible on a reasonably modern node (<= 32nm). It's a hard problem... but everyone in the world being dependent on a few large fabs is also a hard problem.
Speaking as someone at Beagle, we see this board as an important step to more openness in the ecosystem, especially helping software developers improve the state of open source for RISC-V. It is also just a really cool board. Beagle will do more to try to get more openness at the RTL-level moving forward, perhaps even with FPGA boards at an interim step. The shuttle services are starting to make releasing a new chip design in reasonably modern nodes more possible.
> The initial pilot run of BeagleV will use the Vision DSP hardware as a graphics processor, allowing a full graphical desktop environment under Fedora. Following hardware runs will include an unspecified model of Imagine GPU as well.
I don't think that necessarily says that ImgTec will be upstreaming the open drivers, more that they don't have a better option at the moment and will be replacing closed source components with each revision.
I hope they will, but I'll believe it when I see it. They've been extremely allergic to open source in the past.
The post specifically says upstreaming open drivers, here is a quote:
Imagination is also creating a new open-source GPU driver to provide a complete, up-streamed open-source kernel and user-mode driver stack to support Vulkan® and OpenGL® ES within the Mesa framework. It will be openly developed with intermediate milestones visible to the open-source community and a complete open-source Linux driver will be delivered by Q2 2022. Imagination will work with RIOS to run the open-source GPU driver on the PicoRio open-source platform.
I agree it seems like quite a change of heart and I definitely won't be holding my breath.
That'll be interesting to see how much is actually new. I'm pretty sure they licensed the core of their shader compiler stack, so there was no way it was going to just be opened up, but their GPU ukernel looks totally homegrown and would be a shame to throw away.
They've been delivering that vacuous promise every few years. Being bought by Canyon Bridge, a private equity fund owned by the Chinese government, a few years ago has unfortunately not changed anything.
Having reverse engineered a bit of the drivers, I think it's because they culturally think that all of their value add is in the software. Patents have expired on the TBDR fixed function hardware blocks. The rest is just a combo of a little RISC core that does job dispatch (Programmable Data Sequencer in their parlance), and a cluster of SMT barrel scheduled cores (used to be called USSE in the SGX days, not sure now) that do the heavy lifting wrt shaders that don't really have any secret sauce AFAICT.
The value add is all in the software stack where they run a full little ukernel on the main GPU cores, and optimizing the shit out of the software that runs on those cores from their pretty clever compiler.
I bet they think that if they open source the drivers, that's giving away the one thing that makes PowerVR GPUs special in the first place.
If an IMG person reads this: y'all are wrong with that last piece. Your company is dying without opening the drivers, and you'll be able to control the hardware/software co-design in a way that nobody else can even if you give away the software. You'll have to keep doing work to have new hardware available and stay ahead of the curve, but that's true anyways and is the sign of a healthy business. Sure beats withering away as your patents expire.
Can someone explain the technical benefits of this architecture over the competition? That is should I be excited if I don't care about e.g. openness? Or is it simply an effort to create something that is a half-decent cpu alternative but open?
Is there anything about RISC-V that is "better" simply because it is a later design than others? Is it likely to evolve faster because it is open or more modern?
> Is there anything about RISC-V that is "better" simply because it is a later design than others?
A lot of it is because it is newer, and the designers have learned from previous architectures. It is a relatively clean and straightforward instruction set, designed to be easily and efficiently implemented.
There's not anything that is super crazy revolutionary, in contrast to the (still vaporware) Mill CPU architecture.
> Is it likely to evolve faster because it is open or more modern?
They have a good extension mechanism that allows relatively clean additions to the instruction set. Some of the recent ones like the vector extension aren't finalized yet. Anyone can propose their own extension. Historically, ARM might work with their most important customers to implement an extension, but good luck getting their attention if you're not already paying them millions per year.
The Mill has been in development for... ~18 years now! Soon they will be able to hire engineers who are actually younger than the company. I wonder if there has ever been a tech company that survived so long without bringing a product to market. Duke Nukem Forever took ~14 years.
According to Ivan on their forums (so take this with a grain of salt, as it's from the horse's mouth rather than an external assessment), they were apparently supposed to be levelling-up in the summer of 2020.
They have at least secured a decent patent portfolio, particularly on the belt.
From what I understand, all the developers have day jobs or are independently wealthy and can afford to work on it without (much?) pay. They haven't accepted VC money, even though that would likely have sped up development considerably.
I think in the real world "No percentage of each sale payments to ARM" is what will drive RISC-V. An "open" ISA doesn't force anything else to be open.
So, use cases like Western Digital, where they can quit paying ARM a percentage of every hard drive they sell, for example.
As for technical advantages, each RISCV vendor has their own choice of how to implement, so it's hard to say anything broad that applies to all RISCV implementations. The Berkeley BOOM project is hitting really good DMIPS/MHz numbers. LowRISC has some interesting memory tagging and "minion core" ideas, etc.
Edit: I left out perhaps the most important reason RISCV has a lot of hype. They've been successful getting first class support from the Linux kernel maintainers.
Hasn't WD been doing their own silicon (at least from a design standpoint, they still use someone else's fab) precisely because the 'small percentage' ARM charges matters for their margins? In a world where we have ESP-01 boards which retail for $2, even a couple of percent matters.
The ISA is pretty nice, simple, and well documented. And since it's "open", people can create their own implementations. Like this guy, who is creating a RISC-V processor from scratch, without using an FPGA.
One of the other replies points to the RISC-V extensions feature. I think for someone who "doesn't care about openness" would at least benefit from that in the architecture. It means the same compiler can be used to bootstrap things and simple steps can be added to greatly optimize specific types of code, like AI stuff. This board really stands out in AI performance.
Also, having things open means that the supply-chain can be more stable, with less chances of a single glitch in the system halting deliveries for any time. This is driving a lot of interest in RISC-V right now.
The main difference is that RISC-V is a lot more modular, so it's going to be difficult to distribute binaries for, but more flexible if you're doing something completely vertical. Also, a lot of the modules bundle relatively common/easy instructions with niche/difficult ones, e.g. multiply with divide.
> The main difference is that RISC-V is a lot more modular, so it's going to be difficult to distribute binaries for, but more flexible if you're doing something completely vertical. Also, a lot of the modules bundle relatively common/easy instructions with niche/difficult ones, e.g. multiply with divide.
I don't think it'll be worse than ARM and it's decidedly better than x86.
There are SEVEN major revisions of ARMv8. Then there's v8-R, v8-M, and additional 32-bit variants of each instruction set in addition to both ARMv7 and ARMv6 which also still ship billions of chips per year. Oh, and under pressure from companies, ARM also allows custom instructions now. Those aren't just theoretical either -- Apple at least added a ton of custom matrix instructions to the M1.
For x86, supporting only semi-recent processors (2006 Core or greater) leaves you still checking for support for: SSE3, SSE4, SSE4.1, SSE4a, SSE4.2, SSE5, AVX, AVX2, AVX512, XOP, AES, SHA, TBM, ABM, BMI1, BMI2, F16C, ADX, CLMUL, FMA3, FMA4, LWP, SMX, TSX, RdRand, MPX, SGX, SME, and TME. That's 29 instruction sets and not all of them have use on both Intel and AMD chips.
RISC-V seems at least that cohesive. If you're shipping a general purpose CPU, you'll always have mul/div, compression, fusion (not actually instructions), privilege, single precision, double precision, bit manipulation, and probably a few others.
Where you'll run into mul/div missing or no floats is on microcontrollers or "Larrabee"-style GPU cores. In all of those cases, you'll be coding to a very specific core, so that won't really matter.
Thankfully, we've had ways to specify and/or check these kinds of things for decades.
> leaves you still checking for support for: SSE3, SSE4...
Find me a processor that supports SSE4 but not SSE3. That's the problem. With x86 you pretty much can say "we're targeting processors made after 2010" or whatever and that's that. You make one binary and it works.
RISC-V allows a combinatorial explosion of possible CPUs. You can have a CPU that supports extension X and not Y, but another one that supports Y and not X.
If you're in an embedded situation where you're building all the software yourself then that's fine.
If you're on a general purpose PC/smartphone with packaged software then the OS vendor specifies a base set of extensions that everything must implement -- for Linux at the moment that is RV64IMAFDC aka RV64GC.
All of those extensions (except maybe A) are very generally useful and pervasive in code.
Some other extensions, such as the Vector extension, will provide significant benefits to applications that don't even know whether the system they are running on has them -- you'll just get dynamically linked to a library version that uses V or doesn't, as appropriate.
To take a very trivial example, on a system with V, every application will automatically use highly efficient (and also very short) V versions of memcpy, memcmp, memset, bzero, strlen, strcpy, strcmp and similar.
The same will apply to libraries for bignums, BLAS, jpeg and other media types, and many others.
If you're doing something embedded nothing prevents you implementing multiply but not divide. RISC-V gcc has an option to use an instruction for multiply but runtime library call for divide.
In fact, even if you claim to implement the M extension (both multiply and divide), all that is necessary is that programs using those opcodes work -- but that can be via trap and emulate. If your overall system can run binaries with multiply and divide instructions in them, then you can claim the M extension. Whether the performance is adequate is between you and your customers. Note that there are also vast differences in performance between different hardware implementations of multiply and divide, with 32-64 cycle latencies not unheard of.
The same applies for implementing a subset of other extensions in hardware. You can implement the uncommon ones in the trap handler if that will meet your customer's performance needs.
Interesting, I might get myself one of these to play with.
The board has an HDMI output, but the description says nothing about the display processor / GPU functionality, or even whether it's just a simple framebuffer. There are specifications for video processing, but I get the impression this is for camera/video input, not output.
"The initial pilot run of BeagleV will use the Vision DSP hardware as a graphics processor, allowing a full graphical desktop environment under Fedora. Following hardware runs will include an unspecified model of Imagine GPU as well."
> BeagleV™ is the first affordable RISC-V board designed to run Linux. Based on the RISC-V architecture, BeagleV™ pushes open-source to the next level and gives developers more freedom and power to innovate and design industry-leading solutions with an affordable introductory price at $149.
That's not the point. RISC-V machines are not yet price competitive, but presumably will be at some point. This is for people who are interested enough in RISC-V to spend some time and ~$100 on it, but not thousands of dollars. And the main reason many people are interested in RISC-V over Arm is that it's open and license-free.
Raspberry Pi 4 with 8 GB RAM is $75, and that's not counting a power supply, SD card, or HDMI cable. Adding those at pishop.us (16 GB card) takes the price to $95.85.
We don't know whether the BeagleV price includes those necessary items or not but either way coming within a factor of 2 of Raspberry Pi price is pretty impressive at this stage. Up until now you've paid $3000 for a 1.5 GHz RISC-V setup with HDMI and USB etc, if you've got one actually in your hands (HiFive Unleashed plus MicroSemi HiFive Expansion board), or $665 for one that will start delivery in March (HiFive Unmatched). There is also the $499 Icicle which is quad core but only single-issue (like the HiFive Unleashed) and only 600 MHz.
From the Guidelines: Please don't complain about website formatting, back-button breakage, and similar annoyances. They're too common to be interesting. Exception: when the author is present. Then friendly feedback might be helpful.
Still, the page loads in ~3 seconds on my 2012 i7-3770K with 100 Mbps...
This looks really awesome. It's been a longstanding desire of mine to build a custom laptop with one of these powerful small SoCs (I originally thought of something like the LattePanda, which is similar to an Intel MacBook Air, but as a slightly bigger-than-Pi SoC).
If the form factor were reduced to something like an NSLU2, you could attach a large spinning disk and have a desktop server/NAS device -- i.e., an unlocked WD My Cloud with open-source community Android and iPhone apps for the device.
Why are Linux boards always designed to be cheap? What about those of us who have a little more dough and are willing to put up some cash to get extra? Just create some more expensive premium models, not only cheap stuff, damn it.
I think everyone is well aware of that, but unless you're making and able to sell millions of boards and have hundreds of millions in cash flow, there's no real path to using 10nm or smaller nodes. FWIW AllWinner are currently manufacturing 5 million RISC-V chips (with TSMC IIRC). I don't know what node they are using.
Likely it supports the supervisor execution environment from the RISC-V spec. This means it can run the typical ring 0 and ring 3 for kernelspace and userspace respectively, and importantly, the board supports virtual memory.
I like to think of it as an open standard, like one where anyone can download the TCK, versus something where you need to pay ISO for the spec (for example, Prolog is https://www.iso.org/standard/21413.html, which is 185€/$250 USD).
It's more about the licensing of the ISA. Try creating your own compatible x86 processors, and find out how long it takes before Intel's attack lawyers come down on you. ARM is a bit better but you still have to pay licensing fees. For people who want to git clone a design and manufacture it without involving lawyers or licensing fees, RISC-V is likely the best choice.
Think of it as MP3 vs Ogg Vorbis. It's equivalent, slightly newer and a bit nicer, but really the benefit is that it's patent-unencumbered and you are free to tinker with the ISA and build your own versions at will.