My favorite thing to do with TCL is to create a live distro that boots off of USB/CD, then eject the boot device entirely while the OS is still running. I would do this in libraries and cafes all the time: no device plugged into the computer, but running a different OS.
Back when I was in high school, I would boot a floppy with UNetbootin (the BIOS did not support USB boot), then boot a flash drive with TinyCore, and put the floppy and flash drive back in my pocket. I would end up with a working browser before anyone else was able to log in to Windows XP, and my computer wouldn't freeze, crash, etc.
Judging by the comments, it seems like people haven't realized yet that the distribution is precisely this small so the entire thing can run in RAM. TinyCore wants to be super fast, so they try to get everything into RAM.
Some people perhaps are missing this. Others surely understand.
Personally, I have been running my computers this way for many years now. I do not use the Linux kernel, but like TinyCore, everything fits in RAM. The size range is usually about 13-17MB for x86.
The USB stick or other boot media with the kernel+bootloader can be removed after boot. Depending on how much free RAM I have to spare, I can overlay larger userlands on top of this and chroot into them. With today's RAM sizes, I can hold an entire "default base" (BSD) in memory if desired.
This filesystem on the media is not merely an "install" image. It has everything I need to do work, including custom applications. If I want distribution sets, I download them; they are not stored on the media.
In normal use, I do not use any disk. Therefore I do not use a swap file.
Recently on HN a discussion of sorting came up, and I mentioned the issue of sorting large datasets using only RAM, no disk. Obviously BSD and other UNIX-like kernels date back to a time of severely constrained memory and are designed around space-saving concepts like "virtual memory" and "swap", "shared libraries", etc.
When one runs from RAM with no disk, working with large files means "thrashing" is a real possibility. Aside: did the architects of virtual memory contemplate a world where users do their work without HDDs or other writable permanent storage? (Start a new thread if you want to answer/debate this.)
Sorting large files is something I do regularly so I am always open to new ideas. Currently I use k/kdb+ for large files instead of UNIX sort.
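For those sticking with standard tools, a minimal sketch of keeping GNU sort off the disk entirely (the buffer size and tmpfs path here are illustrative, and the flags are GNU-specific; BSD sort differs):

```shell
# Create a small sample file; a real workload would be gigabytes
printf 'banana\napple\ncherry\n' > /tmp/sample.txt

# -S gives sort a large in-RAM buffer; -T points any spill files at a
# tmpfs, so even "temp files" never touch a physical disk
sort -S 64M -T /tmp /tmp/sample.txt
```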
The answer you are looking for to your core question is "no".
Generally, all kernels assume _some_ backing store. The reason is handling the case where RAM is full.
Kernels also overcommit memory: updating the MMU is generally expensive, so it's easier to mark an address range as belonging to your process and commit the pages after the fact.
Now, this can be changed via kernel settings: you can require that pages be backed before they are allocated, and you can run with swap off.
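On Linux, for example, those knobs are a pair of sysctls (a sketch only; the right ratio depends on the system, and these commands need root):

```shell
# Strict commit accounting: refuse allocations with no backing store,
# so malloc() fails up front rather than the OOM killer firing later
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=100   # commit limit = 100% of RAM
swapoff -a                          # and run with no swap at all
```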
But then EVERY, and let me repeat, EVERY SINGLE piece of code you run has to be able to handle malloc failing gracefully. And next to none do. This is a strange condition because how you handle it depends on your kernel configuration and on how much stack you have left, which is difficult to know unless you wrote every piece of code FOR your custom kernel config.
Ultimately, just having swap enabled and assuming a backing store is easier. The few places where you get away with 100% RAM and a read-only store are embedded systems (generally, not always), and in those conditions you can closely tie software to the kernel configuration.
Yes, for me this has been the best balance between
(a) the simplicity (size, 3rd party dependencies, etc.) and robustness of the system for building from source and
(b) the hardware support in the source tree.
No question Linux wins on (b) given all the corporate backing, but whenever I revisit systems like Buildroot or Yocto, I feel the system I am using wins on (a). More likely than not, I am just resistant to change and unwilling to invest the effort to master those systems.
Purely subjective, but I suspect the system I devised for myself is simpler and more "stable" in the sense it is less reliant on third parties and thus less brittle. Those qualities are important to me.
Not sure if anyone has mentioned Void Linux yet in this thread. It is probably well worth a look, at least as an example, if "small and simple" are design goals.
On the same note, I take it your systems are very "I'm using the [whichever]BSD kernel and userland, but this is my own creation" as opposed to stock [whichever]BSD.
A related tangential question:
I've been on the fence about BSD distributions for a really long time, specifically the viability of maintaining them. The few that exist seem to end up forking the entire system, as opposed to just running off with the stock kernel and maybe doing package management differently. That puts huge pressure+responsibility on the person doing the fork.
My question is, are things the way they are because people wanted to fork the kernel for a specific reason and their fork naturally pulled userspace along, or is there a specific reluctance toward forking userlands in the *BSD world?
"minimal live Linux system based on Micro Core (Tiny Core Linux) that uses scripts to download select packages directly from vast Debian or Ubuntu repositories and convert them into useable SCEs (self-contained extensions). "
I'd also look into Buildroot. I'm currently using Buildroot for a project, and the whole distribution (including libraries and executables) is under 40MB. Since I am using a fixed dev board (i.e., peripherals are not going to change), I used lsmod to detect which drivers are needed and built only those, which really shrinks the kernel.
Buildroot also includes a cross compiler, so that you can rebuild the entire toolchain, kernel, and libraries in one go.
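The lsmod trick above is easy to script; a sketch (the sample output below is illustrative, with made-up module names and sizes; on the real board you would pipe lsmod itself):

```shell
# lsmod prints a header line, then one module per line; keep only the
# first column to get a checklist of drivers the board actually uses
sample='Module                  Size  Used by
e1000e                265216  0
snd_hda_intel          57344  3'

printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }' | sort
```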
Buildroot is great! I'm using it to build distributions for several pieces of embedded hardware.
But I think Buildroot targets different hardware than Tiny Core. Buildroot targets embedded systems, and it therefore lacks a package manager. Tiny Core seems to target bigger systems, like notebooks, servers, and desktops.
I was under the impression that if you really want to optimize for space, you should avoid kernel module overhead entirely and build your modules in statically. Maybe it only saves a few bytes and a couple of clock cycles at load time, but it sounds like it's worth a try for you.
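Concretely, that is the difference between "=m" and "=y" in the kernel .config (the driver name here is just an example, not from the project above):

```
# Loadable module: a separate .ko file, plus load-time and metadata overhead
CONFIG_E1000E=m
# Built in: linked straight into the kernel image, nothing to load at runtime
CONFIG_E1000E=y
```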
SliTaz was another great distribution, very feature-complete and highly polished for its incredibly small size. Unfortunately, and you could see this coming a mile away, like so many other projects (open source or otherwise) they decided they'd "do a rewrite," and the project really lost its way and fizzled out thereafter.
It was indeed. Saved my butt any number of times before it zombied out in the transition.
When I worked in public IT, on a few occasions I booted SliTaz on troubled machines that colleagues had battled with for hours or days, trying to install some Windows. In less than a minute, I had a working, web-browsing desktop running entirely from RAM. Tellingly, nobody ever so much as asked a simple "how?" -- somehow reminding me of the Australian Aborigines simply ignoring the arrival of a UFO in the shape of Captain Cook's ship.
I am sorely tempted to push my own little OS for another round, but just porting the whole thing from 32-bit to 64-bit has me depressed. It should be a lot easier than the first time around, though, now that we have VMs to test with; that is a much faster turnaround than rebooting a physical machine every time you mess up in kernel code or some critical device process.
That 'really fast' bit that you perceived was your processor context-switching 200K+ times/second, versus maybe 2,000 times/second under Linux or Windows.
QNX is crazy fast in that respect simply because it has almost no context to switch. The soft-realtime aspect of the kernel also helps tremendously in keeping things moving, everything that is interactive runs at a high priority so you'll never see frozen mousepointers or stuff like that.
Yes! That was amazing, especially compared with the basic KDE/GNOME Linux distros at the time, which would take a whopping 100MB. Now I install OS X apps that are >150MB on a regular basis. It's ridiculous.
I boot it up in virtualbox periodically and play with it. Setting up the clustering and dragging Photon windows between two VMs is really satisfying.
I wish RIM/Blackberry hadn't bought it, closed the source¹, and killed off Photon. QNX was a really solid and fast OS.
1. For those who don't follow QNX development closely, the source was available for a few years but was not open in the usual sense. It was still under a proprietary license. RIM bought QNX and closed it off completely.
If they had I would have never left QNX. Quantum had a great thing going, then first the sale to Harman and after that RIM buying it may have been a good business decision for RIM but it was terrible for the mass adoption of QNX.
Even so, there are probably still untold millions (or even tens of millions) of embedded QNX installations out there, besides the Blackberries, that are still in use.
That little OS is about as elegant as they come this side of Plan 9/Inferno.
FreeBSD, where that one floppy was enough to set up a modem and dial up to establish a PPP session, then download the rest of the installation. All while allowing you to open another terminal, telnet to a shell somewhere else, and pass the time with e-mail and IRC while FreeBSD downloads and installs.
> tomsrtbt (pronounced: Tom's Root Boot) is a very small Linux distribution. It is short for "Tom's floppy which has a root filesystem and is also bootable." Its author, Tom Oehser, touts it as "The most GNU/Linux on one floppy disk", containing many common Linux command-line tools useful for system recovery (Linux and other operating systems.) It also features drivers for many types of hardware, and network connectivity.
But back on the subject of the post, I'm happy to see this project. I went searching for a small Linux distribution a few months ago, and most of the projects I'd relied on in the past seemed to have stopped development, like DSL or Puppy.
I built a Linux floppy-based bootable disk imaging environment to roll out masses of Windows 95/98 machines back in 1997-1999. You'd compile a minimal kernel down to 600-800 KB, then pack up your userland in an itty-bitty gzipped filesystem archive, concatenate it with the kernel at an even sector boundary, dd the whole thing to a disk, and set some bits to tell the kernel where to load the initrd. I've never gone back to see how much of that functionality still exists in modern kernels. (Very little, I'd assume...)
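From memory, the "set some bits" step was the kernel's ramdisk word, which old tools like rdev poked into the image. A hedged sketch of the arithmetic (the sizes and bit layout here are from my recollection of the old ramdisk docs, so treat them as illustrative):

```shell
KERNEL_BLOCKS=600   # kernel size in 1 KiB blocks (illustrative)
PROMPT=0            # bit 15: prompt before loading the ramdisk
LOAD=1              # bit 14: load a ramdisk from the boot media
# low bits: offset of the gzipped rootfs on the floppy, in 1 KiB blocks
RAMDISK_WORD=$(( (PROMPT << 15) | (LOAD << 14) | KERNEL_BLOCKS ))
printf 'ramdisk word: 0x%04x\n' "$RAMDISK_WORD"

# then, on real hardware (not runnable here):
#   cat bzImage rootfs.gz | dd of=/dev/fd0 bs=1024
#   rdev -r /dev/fd0 "$RAMDISK_WORD"
```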
These days, finding a machine with a floppy drive, let alone an actual floppy disk, might be harder than shrinking a system down enough to fit on a floppy disk.
EDIT: Don't get me wrong, it is really cool that people can build such tiny systems. But the smallest storage device in my household is a 2GB SD card. Outside of embedded devices and low-end routers, I do not see the point other than for hack value.
A few months ago at work we used a 25-year-old spectrometer. It had a 5-inch floppy drive. Initially we considered getting a drive for it to pull the data off the spectrometer, but after a few quick searches on eBay we realized that getting one that could be connected via USB was not so trivial, cheap, or fast. Fortunately, it turned out the spectrometer could send everything over a serial port, as long as the serial link had proper 12-volt levels. So we got our data using a true USB-to-serial dongle costing about $120.
I've worked with connecting equipment that was super finicky about what was on the serial port, and after 2-3 tries, we gave up and just sourced an old machine with hardware serial. We only have a few old machines left!
https://www.youtube.com/watch?v=kTrOg19gzP4 seriously, seriously did my head in. I was certainly fascinated by this, but I fear that I'd contribute one character, accidentally put it in the wrong place, break the build for two days because nobody can figure out what broke, and then feel really bad for weeks afterwards.
(Okay, okay, diffing... but still. I'd break my own build, at least.)
I _am_ genuinely interested, and I'd LOVE to play with this, but I'm really, really conservative, and would far prefer to be a fly on the wall for a bit for a while first.
It still is; there are hundreds of contributors, and a new release (Tcl v8.6.8) came out on 22-DEC-2017. There are plans for a new major release, Tcl 9, as well.
Tcl still has many unique features that other languages lack. One new highly experimental feature is TclQuadCode, which can compile Tcl to machine code using LLVM. It's been in progress for over 5 years and it is amazing work. Compiling a dynamic language is a difficult task.
I once created a Windows floppy with Winsock INI settings, IRC, Netscape, Eudora, and an NTP client, all pre-configured for an ISP I was working at. Users just had to put in their username and password. Oh, dialup ISP days...
The other floppy OS I tinkered around with was QNX, but it was just a demo of what QNX could do.
I use it to produce 50-minute radio shows for my country's public broadcasting. I work with a Thinkpad T42 from circa 2004. Swapped the PATA HDD for a Compact Flash card -- prior to this, everything was just surprisingly snappy thanks to Tiny Core Linux's RAM boot; now the machine is also wonderfully quiet.
Granted, this is pushing it, but I've been using this setup every day for almost two years. (I just like to use old hardware until it dies -- or, is this what old IPS-screened Thinkpads generally turn people into?)
Sure, you probably should be a "computational minimalist" by nature (e.g. there is an older version of Chromium, but I suppose your main browser will be Dillo -- which, actually, is just wonderful once you get used to it). But if you are a minimalist, I'd say it's a really solid system.
Also, it's fun for me to think that I bought this Thinkpad T42 three years ago for €20. And now I use it as my main workhorse in a field where typical setups consist of new-ish MacBooks with high-end SSDs and an up to date version of Pro Tools. (And where people occasionally still think that "duh, you can probably only edit a text file in Linux".)
So it's an awesome, clean, fairly easy to maintain distro (e.g. in case of a typical install you have a pristine system after every reboot). And the community is very friendly and responsive.
Nowadays I've found myself mostly using it for VNC to a slightly better machine (my old desktop, on permanent loan to a family member after their laptop broke. This works...). And even on my not-great 802.11g, with the CPU locked to 800MHz, typing this text over TigerVNC is realtime with no perceptible lag or delay. I'm honestly amazed. But anyway...
FWIW, launchpad.net has multiple sources providing the latest 32-bit builds of Chromium. These are built for Ubuntu, but I find they've worked 100% fine on Slackware. :P (After some work I even got the debugging symbols into the right place!)
(Nothing stopping you from building the world's largest LD_PRELOAD to pull in "enough Ubuntu" that Chromium boots, but I find that 100% unnecessary at this point.)
Some build daily(ish), some build fortnightly-to-monthly-ish. Before I set up VNC, I was in the process of figuring out an autoupdate script (CLI PHP, easily rewritten) that would find and fetch the packages off Launchpad. Let me know if you'd like a copy, I never finished it but I did do the Herculean bit of figuring out the magic API URLs, the rest is just boring scripting and downloading.
Protip: if you open 100-170 tabs (possible! on 2GB! with The Great Suspender), the main process will hit 4G VIRT. xD
Besides that, NetSurf is a bit better than Dillo, you're probably already aware of it.
We switched to LinuxKit https://github.com/linuxkit/linuxkit - it was very hard to maintain boot2docker and TCL and keep them usable. LinuxKit is generally a little larger, as we use a bunch of Go code rather than C, and Go is a little bloated, although that will no doubt improve. You can make very small LinuxKit images if you really want to.
In spite of needing something like this a few weeks ago, I wonder what the role of such reduced distributions actually are these days. Storage is ridiculously cheap. A 64MB compact flash card doesn't cost significantly less than a 2GB card (32x the storage).
I have a few old PCs sitting around, a Pentium 4D, a core duo or two. These are full-size PCs, old Dells and IBMs. It doesn't make much of a difference if they are running an 11MB distribution or a full Debian with GUI.
What I'd really like is a small-form-factor PC with full x86 support so that I can run DOS or other PC OSes, like the BSDs or BeOS. Ideally it would be the size of a Raspberry Pi and cost the same. I know such systems are available, but the cost is an issue. Emulation just isn't the same, either.
One thing that drives me nuts is how these embedded systems always want to use goddamn BusyBox instead of full fat Bash and userland. There's just enough stuff removed from BusyBox to break scripts and generally make life annoying. And for what, to save 20MB of space on your 16GB device?
They're getting increasingly rare now that storage costs are so low. Why hobble something with an 8MB flash chip when a 1GB flash chip costs almost the same? Maybe you can shave some fractional pennies on not having to wire up so many address lines, but even that's marginal.
> Why hobble something with an 8MB flash chip when a 1GB flash chip costs almost the same? Maybe you can shave some fractional pennies on not having to wire up so many address lines, but even that's marginal.
If you're making a hundred million of them, the marginal gains add up. There's no point in adding stuff you don't need.
In spite of needing something like this a few weeks ago, I wonder what the role of such reduced distributions actually are these days.
In my mind, the primary value is learning. The smaller the system, the more likely I can understand how all the parts interact. Once a basic understanding is achieved, you can start layering in more and still keep up.
Once you get a bit past that it's easy to create recovery media and other interesting boot tools.
I'm sure others can think of more, but that's the first thing that comes to mind when I think of LFS, Tiny Core, and, to a lesser extent, distros like Slackware that still tend to keep things relatively simple.
>In spite of needing something like this a few weeks ago, I wonder what the role of such reduced distributions actually are these days.
I think this is pretty much why. If you were to take your average hacker/maker, most of them would probably have a similar sentiment. However, most of them would also have a story about how they needed some tiny distro like TCL recently for some niche use case.
I myself needed a tiny distro recently. I had 22 rackmount servers with no HDDs, all on the same network, and I needed them all to run the same static binary. I realized that, like most Intel servers, with default settings they will try to boot over the network with PXE. Not wanting to spend a day messing around with provisioning these servers, I decided I would just put a Linux image on a USB stick, stick it in my OpenWRT router, and host it over TFTP. Serving a large image from a USB stick every time each of the 22 servers started up was an unnecessary use of bandwidth and would often fail (I didn't have the best router). I also didn't need anything except a network stack and the ability to run a 64-bit static binary, so I did some research and ended up using the x86_64 version of TCL. Worked great! I'm sure tons of others have weird niche use cases for such a distro.
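For reference, a sketch of what that router side can look like (paths and the boot filename here are hypothetical; OpenWRT's DHCP/TFTP service is dnsmasq configured through UCI):

```shell
# Serve the kernel/initrd from the USB stick over TFTP, and tell
# PXE clients which boot file to fetch
uci set dhcp.@dnsmasq[0].enable_tftp='1'
uci set dhcp.@dnsmasq[0].tftp_root='/mnt/usb/tftp'
uci set dhcp.@dnsmasq[0].dhcp_boot='pxelinux.0'
uci commit dhcp
/etc/init.d/dnsmasq restart
```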
Of course, as my needs evolved, I ended up just using a minimal Arch Linux image...