Somewhat related is MirageOS, a library operating system for producing applications that run directly on a hypervisor such as Xen - so no virtual environment is necessary. And thanks to the memory safety of OCaml (given certain assumptions about programming style, because any sufficiently versatile language comes with footguns), you don't even really need the virtual memory system and other conveniences of a modern operating system.
It would be great if the pictured demo running the Node application included timestamps. The landing page keeps using "fast" to mean "bandwidth" without any mention of latency - my primary question is how long it takes to boot the kernel and start launching the userland process (i.e. cold-start time), but there's no mention of that.
We've seen boot times in the 60-70 ms range but have put absolutely no work into optimizing that. We could drive it down substantially for payloads like virtual network functions.
I should point out that boot time is highly dependent on two things: infrastructure of choice and application payload. For instance, your typical Rails or JVM payload might take longer to init than the actual boot time. Similarly, booting on Azure can be different from booting under Firecracker on your own hardware.
See https://nanos.org/faq where there is a comparison table, but it's incomplete or out of date, because the OSv website says:
"OSv supports many managed language runtimes including unmodified JVM, Python 2 and 3, Node.JS, Ruby, Erlang as well as languages compiling directly to native machine code like Golang and Rust"
This seems like a big deal on the Nanos side:
"Another big difference is that Nanos keeps the kernel/user boundary. In our testing removing the large process to process context switching that general purpose operating systems still removes quite a lot of the perceived cost in other systems. We keep the internal kernel <> user switch for security purposes. Without it page protections are basically useless as an attacker can adjust the permissions themselves with privileged instructions."
This would seem to suggest that they are slower than OSv.
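The crossing cost described in that quote can be illustrated from userspace on any Linux box. A rough, machine-dependent sketch (mine, not a Nanos or OSv measurement):

```python
# Rough, machine-dependent sketch of the user/kernel crossing cost being
# discussed: compare a no-op userspace function call with a minimal
# syscall (reading one byte from /dev/zero). This only shows that the
# boundary has a measurable price - it says nothing about Nanos vs. OSv.
import os
import timeit

fd = os.open("/dev/zero", os.O_RDONLY)

def user_only():
    return 0

def one_syscall():
    return os.read(fd, 1)  # enters and leaves the kernel once

n = 100_000
t_user = timeit.timeit(user_only, number=n) / n   # pure userspace call
t_sys = timeit.timeit(one_syscall, number=n) / n  # typically several times more

print(f"function call: {t_user * 1e9:.0f} ns, 1-byte read(): {t_sys * 1e9:.0f} ns")
os.close(fd)
```

The absolute numbers vary wildly by machine and kernel; the relative gap is the point.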
The scope of "everything that is needed to run" is a lot higher than might appear, and since it is common code applicable to every app that would be deployed, it is packaged as a 'kernel'. Something has to talk to the network and the disk, keep track of time, etc. You might be surprised at how much code is involved in merely writing a byte to disk, efficiently and in a manner that a random webapp might use.
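To make that concrete, here is a small sketch (plain Python on a conventional OS, not Nanos) of what "merely writing a byte to disk" asks of a kernel:

```python
# A rough illustration of the point above: even "write one byte to disk"
# crosses into the kernel several times, and each crossing lands in a
# substantial subsystem (VFS, page cache, block layer, device driver).
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "onebyte")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)  # path resolution, inode allocation
os.write(fd, b"x")   # copies into the page cache, marks pages dirty
os.fsync(fd)         # submits block I/O and waits on the device
os.close(fd)         # drops the file reference, releases resources

print(os.path.getsize(path))
```

Four innocuous-looking calls, each of which a unikernel still has to back with real code.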
One very common misconception about unikernels is that they don't have kernels, when every single implementation out there has one - it just might be smaller or less featured than others.
So, at least in our view, it's not about having a 'small' kernel; it's more about the architecture.
You can have libraries that implement device-driver functionality and talk directly to devices. In fact, some already exist (DPDK, the Data Plane Development Kit, and SPDK, the Storage Performance Development Kit, for example).
> The scope of "everything that is needed to run" is a lot higher than might appear
Having written an algotrading framework with full kernel bypass, which required me to account for every single piece of kernel functionality in use by the application (mostly to eliminate its use), I actually think it is the opposite. Most applications do not need a lot from the kernel to function, and what they do use could be supplied as a library.
The main reasons to have a kernel -- protecting shared resources and imposing security constraints -- are not present when you intend to have only one application in the system.
Whether code is packaged as a library or inside a base kernel is definitely open to interpretation/design. We, for instance, have the concept of a 'klib': code we don't want packaged in the base kernel but that is optional and can be included at build time. For instance, deploying to Azure requires a cloud-init call to check in with the metadata server to tell it the instance has booted - not something you want to put inside every image if you are only deploying to, say, Google Cloud. Likewise, we have another klib that provides APM functionality but checks in with a proprietary server, so clearly not everyone wants that inside their image either.
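As a sketch of what that build-time choice could look like from the user's side - note that the `Klibs` key and the `cloud_init` name here are my assumptions for illustration, not confirmed ops documentation:

```json
{
  "Klibs": ["cloud_init"]
}
```

The idea being that an image only carries the cloud check-in code when the target platform actually needs it.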
However, there is a lot more than just networking/storage drivers, and some of it is very common code. Page-table management, for instance. Do you have 2, 3, or 4 page-table levels? How do you allocate memory to the app? IOAPIC vs. PIC? Do you support calls the way glibc wants? RTC vs. TSC vs. PIT? I'm not saying any of these can't be librarized, but they are most definitely not choices I would expose to the vast majority of end users at build time.
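To illustrate why the page-table question alone is non-trivial, here is a sketch decomposing a canonical x86-64 virtual address under the standard 4-level, 4 KiB-page layout - every kernel, however small, has to commit to some such scheme:

```python
# Decompose a canonical 48-bit x86-64 virtual address into its four
# 9-bit table indices plus the 12-bit page offset (4 KiB pages).
def decompose(vaddr: int):
    offset = vaddr & 0xFFF          # bits 0-11: offset within the page
    pt     = (vaddr >> 12) & 0x1FF  # bits 12-20: page table index
    pd     = (vaddr >> 21) & 0x1FF  # bits 21-29: page directory index
    pdpt   = (vaddr >> 30) & 0x1FF  # bits 30-38: page directory pointer index
    pml4   = (vaddr >> 39) & 0x1FF  # bits 39-47: top-level (PML4) index
    return pml4, pdpt, pd, pt, offset

print(decompose(0x00007F1234567ABC))
```

A 3-level (or 5-level) design changes every one of these shifts, which is exactly the kind of decision a kernel makes once so that applications never have to.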
Mmm. Fascinating... I'm already wondering how well the JVM would run on it. Also, maybe this could be the solution I've been looking for: "just enough OS for VirtualBox", running on bare metal. VBox has one of the best VM management UIs, in my opinion.
With applications being split into microservices all running in separate containers, this makes a lot of sense again. Why have a full multi-user OS running for each process? Will look into this more.
That's nice. It doesn't change the fact that it's incredibly bad practice to ever run a piped curl, thus it's incredibly irresponsible to make it the top-of-page delivery mechanism for your shiny new project. It's even more irresponsible to put that shell script behind a URL shortener. That's two major security - and just plain old-fashioned hygiene - violations, and I haven't even moved past the first line.
Things like this shouldn't be 'easy'; they should be well-documented, sane, and safe. This is certainly an interesting and novel approach to tackling some of the absolute mess Docker has made of the world, but at the end of the day, if a system is compromised, it's compromised, whether that be the docker user or the kvm/hvf group members.
https://nanovms.com/dev/tutorials/debugging-nanos-unikernels...
https://nanovms.com/dev/tutorials/finding-memory-management-...
https://nanovms.com/dev/tutorials/profiling-and-tracing-nano...
> Does this Work for My Mac M1 or M2?
> No one has reported trying to run this on a Mac M1 or M2 yet but since these run ARM we don't feel that it is a good laptop to be using if you are deploying to X86 servers. At best you will experience slowness as the machines will need to emulate a different architecture. This isn't something we expect software updates to fix and both Docker and VMWare state the same thing. Even if you wish to deploy to ARM servers we don't feel that the M1s and M2s are going to be helpful as they are very different from most commodity ARM servers.
Erm... they're not Mac people then...
EDIT: I was probably being a bit grumpy - read the full replies below for further context...
There are two large problems with Macs for shipping to x86 servers in the cloud (our main target):
* Different file formats: ELF vs. Mach-O - this is why many devs who use Macs rely on things like Docker or Vagrant.
* x86 vs. ARM: we do support ARM to a degree right now, but the vast majority of our end users are deploying to x86 VMs.
The problem here, of course, is that ops produces machine images that are run on a hypervisor, so this works great on an x86 Mac (for dev/test) but is very slow on Apple silicon because of the translation involved.
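The file-format point can be made concrete: ELF and Mach-O binaries are distinguishable from their first four bytes alone. A minimal sketch (magic numbers only, not full header parsing):

```python
# The ELF vs. Mach-O point in concrete terms: the two executable formats
# are identifiable from the first four bytes of the file.
ELF_MAGIC = b"\x7fELF"               # ELF: 0x7f 'E' 'L' 'F'
MACHO64_MAGIC = b"\xcf\xfa\xed\xfe"  # Mach-O 64-bit (MH_MAGIC_64, little-endian on disk)

def binary_format(header: bytes) -> str:
    if header.startswith(ELF_MAGIC):
        return "ELF"
    if header.startswith(MACHO64_MAGIC):
        return "Mach-O (64-bit)"
    return "unknown"

print(binary_format(b"\x7fELF\x02\x01\x01\x00"))
```

A binary built on macOS is Mach-O from the first byte, which is why cloud-bound images get built inside a Linux VM or container instead.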
Looks like one of our users has been playing around with it though so YMMV:
https://github.com/imarsman/dockerops
Is it a Mac M2? :-)
> The problem here, of course, is that ops produces machine images that are run on a hypervisor, so this works great on an x86 Mac (for dev/test) but is very slow on Apple silicon because of the translation involved
But presumably most people will be doing this step in a CI system for deploying to production right? I certainly don't build anything locally on my Mac that I then deploy to a production server. Nor do I expect anything I do build and run locally to run like it will on my production servers (in terms of performance).
I get that it might not be an _optimal_ experience to develop on an M1, but the wording of your FAQ is a little off-putting as it currently stands - given that it advises that ARM Macs are likely not suitable, but says that no one has actually tried yet... while simultaneously referring to a Mac chip (M2) that doesn't exist :-)
The website and project look great. I'm just giving you my honest first impressions!
Appreciate the feedback. The community site could definitely use massive amounts of documentation and it hasn't been updated recently either.
I'd agree that anyone actually taking something to prod will be using a CI/CD server of some kind, but in terms of just monkeying around or using it as a dev laptop, the M1s don't have the same ergonomics.
We're not against M1s at all. If enough people want that support, it can be added - whether as native ARM builds or binary-translated x86. It is mostly just a word of warning on expectations.
I may take a closer look!
There's no such thing as a "Mac M2" for a start. (Yet)
But I also found this paragraph a bit odd, because the machines I use locally never match the architecture of what I'm deploying to. In fact (in my view) it's largely irrelevant.
I develop on Macs and deploy to Linux and Windows servers, with a mixture of ARM/Intel. I don't quite understand this sentiment. Not to mention the fact that they start off by saying that no one has even tried it yet (presumably to run Nanos in a VM on an M1 Mac).
It just seems a bit uninformed and unnecessarily opinionated. As a Mac user it puts me off digging much deeper into the project. Maybe that's the wrong takeaway, but that was my first reaction.
The above also wrote a unikernel, but it seems they abandoned that as too large a problem and just link and package now:
https://github.com/cloudius-systems/osv
"OSv supports many managed language runtimes including unmodified JVM, Python 2 and 3, Node.JS, Ruby, Erlang as well as languages compiling directly to native machine code like Golang and Rust"
This seems like a big deal on the Nanos side:
"Another big difference is that Nanos keeps the kernel/user boundary. In our testing removing the large process to process context switching that general purpose operating systems still removes quite a lot of the perceived cost in other systems. We keep the internal kernel <> user switch for security purposes. Without it page protections are basically useless as an attacker can adjust the permissions themselves with privileged instructions."
This would seem to me that they are slower than OSv.
Nanos is recently written for this particular use case and uses lwIP for networking.
OSv looks like FreeBSD with some machinery around it to package single applications and run them on boot.
Just because of ZFS? Everything else is not from the BSDs; it's a unikernel made to run on a hypervisor, made to run Linux binaries. I really don't see a difference... or any plus to using Nanos.
I didn't suggest that Nanos was better - if OSv does indeed use big hunks of FreeBSD code, that's had considerably more cooking time than Nanos.
ZFS is optional.
Show me that.
There is one little bsd folder, and that's it:
https://github.com/cloudius-systems/osv/tree/master/bsd
Absolutely nothing written about BSD:
https://github.com/cloudius-systems/osv
There are advantages to an approach like Nanos or OSv in that development is easier and you have better compatibility.
And have it 'operate' directly on hardware with only a minimal 'system' layer for common operations.
I feel old.
There are also packages available through AUR/homebrew and the like: https://ops.city/downloads .
The script is only there to facilitate the 'install', such as ensuring you have qemu installed locally or assessing whether you have kvm/hvf rights, etc.
Also, I don't think this is documented yet, but you can target various PRs/builds with ops like this:
ops run /bin/ls --nanos-version d632de2