Ideal OS: Rebooting the Desktop Operating System

(joshondesign.com)

656 points | by daureg 2412 days ago

102 comments

  • joshmarinacci 2412 days ago
    I'm the original author. I hadn't planned to publicize this yet. There are still some incomplete parts, broken links, and missing screenshots. But the Internet wants what it wants.

    Just to clarify a few things.

    I just joined Mozilla Devrel. None of this article has anything to do with Mozilla.

    I know that none of the ideas in this article are new. I am a UX expert and have 25 years of experience writing professional software. I personally used BeOS, Oberon, Plan 9, Amiga, and many others. I read research papers for fun. My whole point is that all of this has been done before, but not integrated into a nice coherent whole.

    I know that a modern Linux can do most of these things with Wayland, custom window managers, DBus, search indexes, hard links, etc. My point is that the technology isn't that hard. What we need is to put all of these things into a nice coherent whole.

    I know that creating a new mainstream desktop operating system is hopeless. I don't seriously propose doing this. However, I do think creating a working prototype on a single set of hardware (RPi3?) would be very useful. It would give us a fertile playground to experiment with ideas that could be ported to mainstream OSes.

    And thank you to the nearly 50 people who have signed up to the discussion list. What I most wanted out of this article was to find like-minded people to discuss ideas with.

    Thanks, Josh

    • jitix 2412 days ago
      Thanks for writing this! I have especially been waiting for the next release of some OS to have something revolutionary for people who use computers as workstations, but at this point most updates seem to be about voice automation, accessibility, sync and other "mobile" features. I have wanted a DBFS for a long time, for both personal and work files and my huge multimedia collection. Secondly, the ability to pipe data from one GUI app to another would help us immensely; it's the main reason I feel more productive while using a CLI.
    • NotUsingLinux 2412 days ago
      Hi Josh,

      You raise some interesting points in your article. I wonder how to comment or discuss this the most efficient way (here or elsewhere?)

      Some questions: Have you looked at Haiku?

      What you describe as modules is, I think, what Alan Kay calls "objects" in the Smalltalk/Xerox tradition.

      Have you looked at his research project: STEPS reinventing programming?

      https://www.youtube.com/watch?v=YyIQKBzIuBY

      Some bits and parts of this are open source.

      Imagine a full system running on 10,000 LoC; I think this could be a step forward.

      Also, this blurs, if not throws away, the distinction between "desktop" and "web (remote)" applications. If integration of remote objects is sandboxed but still transparent, you get improved usability.

      Also, I think you don't go far enough. Databases for the file system are fine, but I think the idea of widgets or UI libraries altogether is no longer feasible.

      The system has to adapt to the individual: people have different needs and workflows.

      Highly adaptable and conversational interfaces are needed.

      WDYT?

      • joshmarinacci 2410 days ago
        Yep. I'm very familiar with STEPS. It's what makes me think that it is actually possible to build a new open source OS.
    • daveheq 2411 days ago
      I disagree with you on a lot of things here. Files in two places will confuse people. Expecting a computer to know what hand gestures you're making requires a camera which can be more mistake-prone than voice, and like watching your eyes, a lot of people will just find this creepy. Limiting people to drag-n-drop or cut-n-paste will aggravate half of the userbase (and I use either one depending on the situation).

      A lot of your requirements for a "modern" OS are pie-in-the-sky or just seem very particular to your taste. I didn't see much here that you want that I'd prefer, so outside of the bloat (especially Windows and Ubuntu requiring GPUs to process 3-D effects), I see more disadvantages with your changes than otherwise.

    • jlundberg 2412 days ago
      Thanks Josh for publishing your naive article early. Reading it and a bunch of the comments was the best read this week.

      History has yet to concede open desktop operating systems in favour of smartphone-era silo platforms.

      • joshmarinacci 2412 days ago
        I feel the answer, at least for desktop, is going to be some sort of hybrid between the app store model of smartphones and the upstream packager model of Linux distros.

        What I'd really like to see is some data viz and machine learning tools to analyze the dependencies of open source software, and then intelligently cut extra strings. Fewer deps make for more reliable software.

    • memsom 2411 days ago
      The BeOS stuff you mention is sort of true, but not really. BeOS once had a full-on database-based file system, but that's not BFS. The OFS (Old File System, as it was called within BeOS) had full-on, no-holds-barred database storage. But the performance sucked, syncing the metadata in the file system and database (which were different structures, apparently) was a bit flakey, and the basis of the way the filesystem worked was incompatible with integrating other file systems into the OS. Dominic Giampaolo and Cyril Meurillon ended up creating a whole new base for the file system, with a VFS layer, etc. As part of that, the BFS file system was created. This incorporated a subset of what the file system used to be able to do - it had extended attributes (for adding arbitrary data to the file), a query layer to allow querying those attributes, and a mechanism for live queries that would allow the data to dynamically update. But it wasn't really a database in the same way WinFS was meant to be - i.e. built on top of SQL Server.
    • erlend_sh 2412 days ago
      You should do an in depth review of Redox OS and let them know your thoughts.
    • bane 2412 days ago
      Thanks for writing this. I think many of your observations are correct (and agree with them). I'm not as sold on your solutions though. Here are some thoughts:

      Abstractions are a correct way to do things when we don't know what "correct" is or need to deal with lots of unknowns in a general way. And then when you need to aggregate lots of different abstractions together, it's often easier to sit yet another abstraction on top of that.

      However, in many cases we have enough experience to know what we really need. There's no shame in capturing this knowledge and then "specializing" again to optimize things.

      In the grand ol' days, this also meant that the hardware was a true partner of the software rather than an IP-restricted piece of black magic sealed behind 20 layers of firewalled software. (At first this wasn't entirely true, some vendors like Atari were almost allergic to letting developers know what was going on, but the trend reversed for a while). Did you want to write to a pixel on the screen? Just copy a few bytes to the RAM addresses that contained the screen data, and the next go-around drawing the screen it would show up.
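
      For fun, the closest modern analogue I know of is the Linux framebuffer device. A rough sketch in Python (assuming /dev/fb0 exists, a 32-bits-per-pixel mode, and a made-up 1920x1080 geometry; a real program would query the geometry with an ioctl instead of hardcoding it):

         import mmap, os

         # Assumed geometry; real code would read it via the FBIOGET_VSCREENINFO ioctl.
         WIDTH, HEIGHT, BYTES_PER_PIXEL = 1920, 1080, 4

         fb = os.open("/dev/fb0", os.O_RDWR)
         screen = mmap.mmap(fb, WIDTH * HEIGHT * BYTES_PER_PIXEL)

         def put_pixel(x, y, b, g, r):
             # A pixel is just a few bytes at a fixed offset; write them and they show up.
             offset = (y * WIDTH + x) * BYTES_PER_PIXEL
             screen[offset:offset + BYTES_PER_PIXEL] = bytes((b, g, r, 0))

         for i in range(100):
             put_pixel(100 + i, 100, 255, 255, 255)   # a short white line

         screen.close()
         os.close(fb)

      Not quite POKEing video RAM on an Atari, but it's the same "the screen is just memory" idea with almost nothing in between.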

      Sometime in the late 90s the pendulum started to swing back, and now it feels like we're at full tilt the wrong way again, despite all the demonstrations that it was the wrong way to do things. Paradoxically, this seemed to happen after the open source revolution transformed software.

      In the meanwhile, layers and layers and layers of stuff ended up being built, and now the bulk of software that runs is some kind of weird middleware that nobody even remotely understands. We're sitting on towers and towers of this stuff.

      Here's a demo of an entire GUI OS with web browser that could fit in and boot off of a 1.4MB floppy disk and run on a 386 with 8MB of RAM. https://www.youtube.com/watch?v=K_VlI6IBEJ0

      I would bet that most people using this site would be pretty happy today if something not much better than this was their work environment.

      People are surprised when somebody demonstrates that halfway decent consumer hardware can outperform multi-node distributed compute clusters on many tasks and all it took was somebody bothering to write decent code for it. Hell, we even already have the tools to do this well today:

      https://aadrake.com/command-line-tools-can-be-235x-faster-th...

      http://www.frankmcsherry.org/graph/scalability/cost/2015/01/...

      There's an argument that developer time is worth more than machine time, but what about user time? If I write something that's used or impacts a million people, maybe spending an extra month writing some good low-level code is worth it.

      Thankfully, and for whatever reasons, we're starting to hear some lone voices of sanity. We've largely stopped jamming up network pipes with overly verbose data interchange languages, the absurdity of text editors consuming multi-core and multi-GB system resources is being noticed, machines capable of trillions of operations per second taking seconds to do simple tasks and so on...it's being noticed.

      Here's an old post I wrote on this some more; keep in mind I'm a lousy programmer with limited skills at code optimization, and the list of anecdotes at the end of that post has grown a bit since then.

      https://news.ycombinator.com/item?id=8902739

      and another discussion

      https://news.ycombinator.com/item?id=9001618

    • joshmarinacci 2412 days ago
      Correction, 110+ people have signed up now. Wow!
      • unicornporn 2412 days ago
        If you want even more people to sign up, you may want to edit your post to include the URL to the list[1] in your original post. It took me quite some time to find it as you called it a discussion list here on HN and a “group” on your blog post.

        [1] https://groups.google.com/forum/#!forum/idealos-design

    • erikb 2411 days ago
      Well, I suggest trying to lean back, reading the feedback points again, and trying to understand what makes people think that way. I bet a UX-focussed person can take that into consideration and craft a final version of this article that answers those kinds of questions before they even come up. ;-)
    • vacri 2412 days ago
      > My point is that the technology isn't that hard.

      I disagree - the technology is extremely hard. You're talking centuries of staff-hours to make your OS, if you want it to be a robust general-purpose OS and not a toy. Just the bit where you say you want the computer to scan what's in your hands and identify it? That in itself is extraordinarily difficult. You mischaracterise the task at hand by pretending it's simple.

      • edraferi 2412 days ago
        I think the article gave that as an example of what the system should enable, rather than a prerequisite for launch. The system wide message bus and semantic shortcuts features should make it really easy for developers of advanced peripherals to plug into the system.

        For example, a Kinect would be a lot more useful in IdealOS: you could bind gestures to window manager commands.

        • chaboud 2412 days ago
          Unfortunately, it's wholly impractical to do real work with message busses alone if you want to maximize performance.

          See: every high-performance inter-process system ever...

          Could we cover a number of cases with copy-on-write semantics and system transactional memory? Sure, but the tech isn't broadly available yet, and it wouldn't cover everything.

          Sometimes you just need to share a memory mapping and atomically flip a pointer...

          • joshmarinacci 2412 days ago
            It doesn't have to be implemented as a message bus. It should just be a message bus semantically. Under the hood we could implement all sorts of tricks to make it faster, as long as the applications don't notice the difference.
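
            To make that concrete, here's a toy sketch in Python of the kind of trick I mean (the send/receive API is made up; the point is that only a tiny descriptor travels over the actual bus while the bulk payload sits in shared memory):

               import json
               from multiprocessing import shared_memory

               def send(sock, topic, payload):
                   # Semantically: "post a message on the bus".
                   shm = shared_memory.SharedMemory(create=True, size=len(payload))
                   shm.buf[:len(payload)] = payload
                   # Only a small descriptor actually crosses the socket.
                   desc = {"topic": topic, "shm": shm.name, "len": len(payload)}
                   sock.sendall(json.dumps(desc).encode() + b"\n")
                   return shm  # sender keeps the block alive until the receiver is done

               def receive(sock):
                   desc = json.loads(sock.makefile().readline())
                   shm = shared_memory.SharedMemory(name=desc["shm"])
                   data = bytes(shm.buf[:desc["len"]])  # copied here for simplicity
                   shm.close()
                   return desc["topic"], data

            The applications only ever see send() and receive(); whether the payload was copied, shared, or handed over as a mapped page is the OS's business.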
            • chaboud 2405 days ago
              I would expect that to work to a point, but coherence and interlocking are eventually going to rear their ugly heads.

              I've created and lived with multiple inter-process high-performance multimedia/data systems (e.g. video special effects and editing, real-time audio processing, bioinformatics), and I've yet to encounter a message passing semantic that could match the performance of manually managed systems for the broad range of non-pathological use-cases, not to speak of the broader range of theoretically possible use-cases.

              If something's out there, I'd love to see it. So far as I know, nobody has cracked that nut yet.

    • bitmapbrother 2412 days ago
      You've probably heard of Google's new OS Fuchsia. What are your thoughts of the technology decisions they've made so far?
      • joshmarinacci 2412 days ago
        They haven't talked about it much, but the idea of no initial permissions is a good one. Today the OS must fundamentally not trust any application. Permissions at install time are a good idea. Permissions at use time are an even better idea. Provenance also helps. But nothing beats sandboxing the app as much as possible.

        There's no silver bullets here, but we might be able to silver plate a few.

        • Gaelan 2412 days ago
          > Today the OS must fundamentally not trust any application.

          I entirely agree.

          > Permissions at install time are a good idea.

          I'm actually not so sure about this. I think the "iOS model" of asking for permissions when necessary is much better than the "Android model"* of a big list of permissions on install, preventing use of the app if any of them aren't granted (leading to users giving apps less scrutiny over their permissions than they would under the iOS model).

          * I believe some recent versions of Android (M?) may support the "iOS model" in some form.

          • teolandon 2412 days ago
            Recent Android versions have a pretty good system in place. The apps still work even without some permissions (for example Instagram can work for browsing, but won't be able to take any pictures without the camera permission), and you can easily review the permissions you have granted to any app, and revoke them after the fact.

            Now, the real problem is that permissions are too general. The "access/read/write files" permission is all grouped up in one place, so you end up with tons of directories for every app in your root directory (that don't get deleted when uninstalling the apps that generated them), and you allow unnecessary access to other files as well. Or the network permission, which could lead to all sorts of traffic, while many developers just need it for ads.

            • abraae 2412 days ago
              There's an insoluble issue in that making permissions super granular - what you really need for good security - makes them unusable for Joe public, because he can't understand them. Heck, most people probably don't understand the very broad ones that Android has now.

              Maybe what's needed is more of a trust model. Users could ask "what would Bruce Schneier do", for example. If Bruce [substitute trusted person of your choice] would install this app, then I'm happy to do it as well.

              • zanedb 2412 days ago
                That's true. Maybe there could be an 'enable advanced permissions' option?
            • deckiedan 2412 days ago
              I think the macOS sandbox - "apps can only access their own files, unless the OS file open dialog is used to select others" - is a really clever solution, and could be extended, possibly to URLs too? On install an app could list "home domains"; everything else would require a confirmation or a general permission (for web browsers).
              • sjellis 2411 days ago
                I agree. On Linux, Flatpak actually does the same thing: apps only get permission to access resources when the user chooses to work with those resources by making selections in the UI.
            • forapurpose 2412 days ago
              Last I checked, a while ago, Android didn't have an Internet permission - the user couldn't stop apps from accessing the Internet. Is that still true?
              • thanksgiving 2412 days ago
                Yes, it is still true. You can disable Internet access if you're on some Android based devices (I know you can with lineageos, the heir to CyanogenMod). However, if you have Google play services installed you may still see ads in the app.

                Of course, you can have a firewall if you're rooted, but I'm not rooted when on a Nexus device.

          • zanedb 2412 days ago
            Yes, since Android Marshmallow (6.0), which is almost 3 years old, there has been a granular permissions system. Most apps support it now, but there is the problem that only 45.8% of all Android devices have M or later.

            https://developer.android.com/about/dashboards/index.html

          • pjmlp 2412 days ago
            > I believe some recent versions of Android (M?) may support the "iOS model" in some form.

            Actually it is the Symbian model.

        • shalabhc 2412 days ago
          Can we eliminate or reduce the need for permissions altogether?

          E.g. OSes have an `open()` system call, which can potentially access any part of the filesystem, and then they layer an ever growing permission system to restrict this.

          Can we design a system where there is no `open()` call at all? Instead, the program declares "I accept an image and output an image", and when it is invoked, the OS asks the user for an image, making it clear which program is asking for it. Then the OS calls the program, passing in the selected file.

          This model has other advantages such as introspection, composability, validation and isolation (e.g. a program that declares it takes an image and outputs an image has no access to the network anyway and cannot send your image over the network.)
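
          A minimal sketch in Python of the shape I mean (all names invented; the key point is that the program never sees a path, it only receives handles that the launcher opened on the user's behalf):

             # The program only declares what it consumes and produces; there is no open()/listdir().
             def invert_image(image_in, image_out):
                 # image_in / image_out are already-opened byte streams handed over by the OS.
                 data = image_in.read()
                 image_out.write(bytes(255 - b for b in data))

             # The "OS" side: it asks the user to pick the files, opens them itself,
             # and only then invokes the program with the resulting handles.
             def launcher(program, user_selected_input, user_selected_output):
                 with open(user_selected_input, "rb") as src, open(user_selected_output, "wb") as dst:
                     program(src, dst)

             # launcher(invert_image, "/pictures/cat.raw", "/pictures/cat_inverted.raw")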

          • irishsultan 2411 days ago
            I don't see how this would work for command line tools (there are other applications where an extreme version wouldn't work, but for command line tools I really don't see a good workaround except giving broad permissions).
            • shalabhc 2408 days ago
              If command lines were built in systems designed around this principle, they would work slightly differently from what we're used to. For instance, when invoking this 'program' on the command line, the system would discover it needs an image and use the command line itself to ask for an image from the user.

              Alternatively there could be standard way to pass an image (or any other input) to a program - similar to a command line arg in current systems, for instance.

              • irishsultan 2407 days ago
                I don't see how this works with something like a recursive grep, or a find result sent to xargs for further processing.
                • shalabhc 2407 days ago
                  You would define rgrep as taking a stream of files. Generating the stream of files is outside the capability of any program (since we don't have `open` or even `listdir`). Instead you'd use a primitive of the storage system itself to define a stream of files that you pass to rgrep. Something like `rgrep(/my/path/, text)`. So it becomes impossible for any program to access a file without the user's explicit indication.
                  • irishsultan 2407 days ago
                    That still doesn't help with a find + xargs combination, or with any kind of problem where you can currently store file names in a file and use that for later processing.
                    • shalabhc 2406 days ago
                      You can't have a find program because you can't discover files; you must be provided them. But you can have a `filter` program that takes a stream of files and outputs another stream of files matching a filter. You can then pipe the output of filter into another program.

                      Yes you cannot store filenames, but you could store some other serialized token generated from a file and the token could be used to recreate the file object. Alternatively, if you have an image based system, you don't have to convert the file object to a token explicitly - you just hold the references to the file objects and they're automatically persisted and loaded.

                      • irishsultan 2405 days ago
                        Wait, so even listing file names needs permissions (otherwise find would work), even doing ls on the command line won't work?
                        • shalabhc 2405 days ago
                          Correct - ls couldn't be a separate program - it would be replaced with a primitive that lets you explore the filesystem.

                          The point of such a system would be that programs cannot explore or read the filesystem as there is no filesystem API. But programs can operate on files explicitly given to them. So exploring the filesystem is restricted to some primitives that have to be used explicitly. The guarantee then is if I invoke a program without giving it a file or folder, I know it absolutely cannot access any file.

                          • irishsultan 2405 days ago
                            Define "explicitly", because if it means that I can't just type it in a shell then that disqualifies it from being a practical solution, and if I can't put it in a function/script that I can call from a shell that disqualifies it as well.

                            But if I can do those things (especially the second), then that seems to open at least some attack vectors (that would obviously depend on the actual rules).

                            • shalabhc 2404 days ago
                              You should be able to type it in a 'shell', and you should be able to set it on a function/program you call from the shell. But you can't download and run a program that automatically references a file by path. This system is different enough from a unix-style system that I'll try to roughly describe some details (with some shortcuts) of how I imagine it. It is a messaging-based data flow system (which could be further refined, of course):

                              - The programs behave somewhat like classes - they define input and output 'slots' (akin to instance attributes). But they don't have access to a filesystem API (or potentially even other services, such as network). Programs can have multiple input and output slots.

                              - You can instantiate multiple instances of the program (just like multiple running processes for the same executable). Unlike running unix processes, instantiated programs can be persisted (and are by default) - it basically persists a reference to the program and references to the values for the input slots.

                              - When data is provided to the input slot of an instantiated program (lets call this data binding), the program produces output data in the output slot.

                              - You can build pipelines of programs by connecting the output slot of one program to the input slot of another. This is how you compose larger programs from smaller programs. This could even contain control and routing nodes so you can construct data flow graphs.

                              - Separately, there are some data stores, these could be filesystem style or relational or key/value.

                              The shell isn't a typical shell - it has the capability to compose programs and bind data. It also doesn't execute scripts at all - it can only be used interactively to compose and invoke the program graphs. A shell is bound to a data store - so it has access to the entire data store, but is only used interactively by an authenticated user.

                              So interactive invocation of a program may look something like this:

                                 >  /path/to/file1 | some_program | /path/to/file2
                                 # this invokes some_program, attaches file1 to the input slot, saves the output slot contents to file2.
                              
                              You could save the instantiated program itself if you want.

                                 > some_program_for_file1 = [/path/to/file1 | some_program]
                              
                              Then invoke it any number of times.

                                 > some_program_for_file1 | /path/to/file3  # runs some_program on existing contents
                                 (update file1 here...)
                                 > some_program_for_file1 | /path/to/file4  # runs some_program on new contents
                               
                              With advanced filtering programs, you could define more complex sets of input files.

                                 > /path/to/folder | filter_program(age>10d, size<1M) | some_program | /path/to/output_folder
                              
                              You can even persist the instantiated query, and reuse it

                                 > interesting_files = [/path/to/folder | filter_program(age<1d)]
                                 > interesting_files | program_one
                                 > interesting_files | program_two
                              
                              So that's the rough idea, using an ad-hoc made up syntax for single input/output slot programs.
        • edraferi 2412 days ago
          I would love to see a desktop OS with the Capabilities Security model that Sandstorm uses.
        • remir 2412 days ago
          In Fuchsia, apps are also made using modules.
  • Damogran6 2412 days ago
    So what he's saying is: remove all these layers because they're bad, but add these OTHER layers because they're good.

    That's how you make another AmigaOS, or Be; I'm sure Atari still has a group of a dozen folks playing with it, too.

    The OSes of the past 20 years haven't shown much advancement because the advancement is happening higher up the stack. You CAN'T throw out the OS and still have ARKit. A Big Bloated Mature Moore's-Law-needing OS is also stable, has hooks out the wazoo, AND A POPULATION USING IT.

    4 guys coding in the dark on the bare metal just can't build an OS anymore, it won't have GPU access, it won't have a solid TCP/IP stack, it won't have good USB support, or caching, or a dependable file system.

    All of these things take a ton of time, and people, and money, and support (if you don't have money, you need the volunteers)

    Go build the next modern OS, I'll see you in a couple of years.

    I don't WANT this to sound harsh, I'm just bitter that I saw a TON of awesome, fledgling, fresh Operating systems fall by the wayside...I used BeOS, I WANTED to use BeOS, I'da LOVED it if they'd won out over NeXT (another awesome operating system...at least that survived.)

    At a certain level, perhaps what he wants is to leverage ChromeOS...it's 'lightweight'...but by the time it has all the tchotchkes, it'll be fat and bloated, too.

    • oneplane 2412 days ago
      On top of that, most hardware and protocol implementations are either secret and under NDA or free and open but lacking a full implementation to begin with.

      The post contains many idealistic proposals, but most of them boil down to lawyer stuff and money, not technical problems. You can't have nice GPU access because GPUs are secret. You can't have things work together because nobody wants to share their secret sauce. Everyone is trying to 'be the best' and get an edge on the rest, but in a way that nobody really profits from it from a technical standpoint.

      Aside from the shit-ton of reverse-engineering and some cleanroom design, there is very little that can be done to improve this, and no company is going to help, and thus no big pile of resources is coming to save the day.

      This does of course not only go for GPUs, but CPUs and their BSPs and secret management controllers as well, as well as the dozen or so secret binary blobs you need to get all the hardware to work at all.

      Fixing this from the ground up, i.e. for x86, would mean something like getting coreboot working on the recent generations of CPUs, and that's not happening at the moment due to the lack of information and the secret code signing keys needed to actually get a system to work.

      • sjellis 2411 days ago
        "This does of course not only go for GPU's, but CPU's and their BSP's and secret management controllers as well, as the dozen or so secret binary blobs you need to get all the hardware to work at all."

        My first thought was actually "what about data formats?" These days, most data formats are at least nominally open, but you still need to write code to work with those formats, and most of the existing code is still in C or C++ libraries. The IdealOS will fail instantly as soon as a user receives a DOCX or XLSX file and it displays the document wrong. It can't just launch LibreOffice and use that. Even LibreOffice can't always parse random MS Office documents correctly, and LO represents decades of coder-years.

        • tripzilch 2409 days ago
          Well, they could probably come up with some solution, perhaps using a virtual LibreOffice to import it or whatnot. We've known for ages that those formats aren't particularly "ideal" and we should start letting them go, regardless of whether we're dreaming about a hypothetical OS or not.

          I mean, you could also probably come up with some solution to have a guest over who brings their pet cow without doing too much damage to your nicely decorated apartment. Doesn't mean that's an ideal situation, and it most definitely is no reason to not dream about living somewhere nicer than a stable, and what that would look like to you.

          The latter part is about dreaming how NICE your house could look if you did not have to accommodate guests with cows barging into your living room all the time. That is what the article is actually about; he's pretty clear about his awareness that technical possibility is very different from the availability of a realistic road to transition from where we are now to the possibilities he sketches.

          It's also a very important matter of combating learned helplessness. If you dismiss dreaming about an IdealOS beforehand because there's no way (that you can see now) to get there from where desktop OS's are today, then you most assuredly will miss the opportunity to attain even some of these improvements, were they to come within reach through some circumstance in the future.

          Also, I remember programming on a 386. And on the one hand it amazes me that the thing in my pocket today is so much more powerful than that old machine, let alone my current desktop. And on the other hand, it infuriates me that some tasks on my desktop today are quite slow when really they have no right to be, and some of these tasks are even things that my 386 used to have no problems with whatsoever (but then, TPX was a ridiculously fast compiler, a true gem).

          We should not let that slip out of sight, demand better and keep dreaming.

          • sjellis 2408 days ago
            I guess that the article did annoy me. Partly because I remember GNUStep, Longhorn, OLPC etc. etc. which attempted some of these things, and we already know why those projects failed completely, and partly because I can now see the desktop slowly improving month-by-month on the Linux+GNOME stack. Yes, it's painfully slow and gradual, but it's sustainable progress: Wayland compositors, GNOME, Flatpak, and Atomic Workstation are actual shipping code that will only get better and more heavily used over time.
      • edraferi 2412 days ago
        > You can't have things work together because nobody wants to share their secret sauce. Everyone is trying to 'be the best' and get an edge on the rest, but in a way that nobody really profits from it from a technical standpoint.

        I feel like enterprise customers could provide some demand for IdealOS for this reason. BigCorps have lots of data and application silos, as well as lots of knowledge workers who are expected to synthesize all that data. There are a lot of smart people who are power users but not devs (i.e., macro jockeys). Something like IdealOS could really increase productivity in these places.

        Of course you have to deal with all the usual enterprise headaches, mostly security and backwards compatibility. But then they'd pay a premium.

        • tmzt 2412 days ago
          I've been suggesting that one way into an enterprise environment is to _give the hardware away_.

          Make a device with enough RAM, Bluetooth for a mouse, USB ports, and one or two HDMI ports. A stick computer might be a good starting point.

          Then build your OS for that device. Enable cloud management, integrate with Active Directory, focus on an amazing out of the box web browser experience and expand with an app store for well-thought-out, well designed open and commercial apps.

          Now give ten to every company with a DUNS number.

          Sell more with a subscription including more advanced management and enable pushing modified Windows group policies to them.

          Make it good enough for a casual knowledge worker to use.

        • abraae 2412 days ago
          If the world worked like this, then BigCorps would probably demand easy to use, intuitive payroll and ERP systems first.

          But exhibit A: SAP.

      • digi_owl 2412 days ago
        Which is why people have such hope for recent efforts at specifying an open CPU ISA.
        • pjc50 2412 days ago
          The ISA is not important. The economics of open hardware is, and that's a much harder problem.
          • notgood 2412 days ago
            Probably a point in the middle would be the right compromise. Most people think it's binary (it's either closed hardware or it's free+open), but I don't; I think there should be a hardware company (with the whole stack: GPU + CPU + kernel) that goes the same way Epic went with Unreal (their game engine), meaning you can use their hardware for free and the specs are public, but if your company ever gets more than 50K in profits you have to pay them 20% of your profits; or something like that.
          • bsder 2412 days ago
            > The ISA is not important. The economics of open hardware is, and that's a much harder problem.

            Maybe. The fact that Moore's Law finally broke may paradoxically help that.

            When you can get 2x the (cost, performance, features) simply by doing nothing, there is no incentive to optimize anything.

            Now that you can't simply "do nothing", people will start looking at alternatives.

            • tripzilch 2409 days ago
              Yesss I've been waiting for this moment for nearly two decades haha :)

              And it's definitely possible, just look at the C64 and Amiga demoscene. Those machines haven't evolved for ages, but they've been making them do one (thought to be) "impossible" thing after another for a very long time after the platforms were essentially considered dead. I've seen things at demoparties around 2000 where C64 demos showed stuff that was thought to be impossible to do on these machines (or so I was told, I'm not an expert on the C64's capabilities, but the thing runs in the single megahertzes and doesn't have a divide instruction, so yeah). One I remember had a part with a veeery low resolution realtime raytracer, about 10fps I think, the scene consisting of just a plane and a sphere (IIRC) ... but it was done on a C64.

              I wonder how long it will take for PCs though. Moore's Law broke already a few years ago didn't it? But it's not really happening, so far. Or maybe it is. I haven't been keeping up with what's happening in the PC demoscene lately. They used to be way ahead of the curve compared to PC videogames, this changed somewhere in the 200Xs, probably because around that time videogames started getting Hollywood-size budgets.

    • csydas 2412 days ago
      I think the author wants a bunch of really specific personal workflow ideas/concepts they have to be the standard, which is typically what these rants are. Such rant posts are always interesting to me as I do question my own workflow just to see if there are good ideas I'm missing out on, but a lot of the author's ideas just don't strike me as all that important in most cases, and in some of the complaints, I'm not sure what the complaint is.

      Their complaint on the filesystem, for example, falls flat for me, but partially because I think I don't understand what they want or how BeOS did it. Maybe the author has a special meaning for "...sort by tags and metadata", but this looks to be baked right into Finder at the moment; I can add in a bunch of columns to sort by, tag my items with custom tags (and search said tags), add comments, and so on. Spotlight also has rendered a lot of organization moot as you just punch in a few bits of something about what you're looking for (tags, text within the document, document name, file type, date modified by, etc.) and you'll find it. I don't know exactly what is missing from modern OSes (Windows search isn't too bad either) that the author isn't contented with.

      The idea of multiple forms of interaction with the computer is okay, but quite frankly it starts to get into an eerie situation for me where I'd rather have to take a lot of steps to set up such monitoring as opposed to it being baked into the OS. I realize that I'm squarely in luddite territory given the popularity of Home Assistants (Echo, Apple Homekit, Google Home), but to me these seem like very intentional choices on the part of a customer; you have to go out of your way to get the hardware for it, and disabling it is as simple as pulling the plug. Look at the nonsense we're having to deal with in regard to Windows telemetry - to me this is what happens when such items get baked into the OS instead of being an additional application; you end up with a function you can no longer control, and for no other reason than to satisfy the complaint of "I have to download something? No thank you!"

      I could go on, but the author's rant just doesn't seem consistent and instead seems to just want some small features that they liked from older OSes to be returned to modern OSes. There is a huge issue with bloat and cruft and some real legacy stuff in Windows and macOS, and desktop OSes aren't getting the attention they should be, but these suggestions aren't what desktop OSes are missing or what they need.

      • ChrisSD 2412 days ago
        Ars Technica did a retrospective on the BeOS filesystem[1] which may help explain things. The tl;dr of it is that the filesystem is the canonical database that all applications can use and query without any special domain specific knowledge. I'm not up to date with how MacOS works so it's possible they've added a layer on top of the filesystem which works similarly. However, I do know that Windows is nowhere near that level, mainly because it's encouraged for metadata to be stored in file specific ways.

        [1] https://arstechnica.com/information-technology/2010/06/the-b...

        • sirn 2412 days ago
          On the Mac, the metadata is stored in a file (inside /.Spotlight-V100) rather than inside the filesystem (a la the BeOS File System). An application can provide a Spotlight importer that extracts metadata from a file during indexing (this is why mdimport takes up a lot of CPU time).

          AFAIK, this approach is contrary to the BeOS approach, where applications write the metadata directly. Spotlight's approach does have a few benefits, though, such as being able to provide metadata for files on network drives, or for removable disks that might not be using a filesystem that supports metadata.

          • oneplane 2412 days ago
            No, that's only part of the story. Early on, they were using resource forks for data but for metadata as well. Then they moved to filesystem attributes, and those just grew. You can have tags in there, as well as text, icons, settings etc. It's pretty standardized, but on top of that, you can use Spotlight metadata too, both in the EAs as well as in the V100 DB.
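
            You can poke at both layers yourself; a quick sketch (macOS, assuming the xattr and mdls command line tools are available, and a made-up file path):

               import subprocess

               path = "/Users/me/example.txt"  # hypothetical file

               # Filesystem-level extended attributes (Finder tags, quarantine info, etc.)
               print(subprocess.run(["xattr", "-l", path], capture_output=True, text=True).stdout)

               # Spotlight's own metadata for the same file, pulled from its database
               print(subprocess.run(["mdls", path], capture_output=True, text=True).stdout)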
        • JoBrad 2412 days ago
          Using a structure other than the file to store information about the file seems like a big problem (like the iTunes example Josh used). It's inherently not portable. Even if the OS took extra steps to copy the additional data along with the file, that still relies on the target OS to recognize the additional file(s) and incorporate them into whatever search functionality it has. Supporting the tags or metadata in the file simplifies things quite a bit.
      • sirn 2412 days ago
        It might be worth noting that one of BeFS's authors, Dominic Giampaolo, is currently working on APFS (and previously worked on Spotlight.)
        • csydas 2412 days ago
          A fascinating piece of trivia I was not aware of. To me it seems reasonable that instead of trying to reinvent the whole shebang, you get these incremental changes over time that just make an OS really, really good.

          OS X took a lot of getting used to for me as a kid, as I had an old Mac clone and an iMac with 10.1 side by side in my living room, and I loved my little Mac clone. OS X didn't immediately win me over because I was just too used to OS 9 and had everything I needed on my offline Mac clone. But I distinctly remember Spotlight being what really sold me on OS X, because from the get-go it worked basically as intended, and man was it magnificent. If the author of Spotlight is on APFS, I have a lot of faith in it then.

      • FooHentai 2412 days ago
        >I think the author wants a bunch of really specific personal workflow ideas/concepts they have to be the standard, which is typically what these rants are.

        This one in particular:

        >Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps?

        I mean, you can. It's called the taskbar.

    • technofiend 2412 days ago
      Yeah, GNU tried and, not to be too harsh, failed with Mach and Hurd. They designed an OS according to their principles and nobody came to the party. Heck, Mach uses Linux device drivers, which tells you how much effort volunteers are willing to put into each project.

      I don't think we'll see a change from Linux and Windows (edit: and iOS) until there's another compelling reason to switch; some feature that can't or won't be available in the other two operating systems and their surrounding ecosystems of software.

      When was the last time we really had a "VisiCalc sold more Apples than Apple sold VisiCalc" moment? I can't think of one after Linux wafflestomped all the proprietary hardware and OS Unix vendors, or, to give Apple their due, when they released the iPhone.

      Edit: duh, of course cloud taking over for bespoke hardware and software defined storage pushing out EMC and the like are two recent examples of industry game changers, but on the other hand both still rely primarily on Linux so my assertion about operating systems still stands.

    • nialv7 2412 days ago
      > 4 guys coding in the dark on the bare metal just can't build an OS anymore, it won't have GPU access, it won't have a solid TCP/IP stack, it won't have good USB support, or caching, or a dependable file system.

      Well, they wouldn't need to any more. They can adopt drivers from Linux or any other free operating system. The inner workings of a driver might be arcane, but the interface to an operating system is generally well defined. Adopting an existing driver is definitely doable.

  • cs702 2412 days ago
    Yes, existing desktop applications and operating systems are hairballs with software layers built atop older software layers built atop even older software layers.

    Yes, if you run the popular editor Atom on Linux, you're running an application built atop Electron, which incorporates an entire web browser with a Javascript runtime, so the application is using browser drawing APIs, which in turn delegate drawing to lower-level APIs, which interact with a window manager that in turn relies on X...

    Yes, it's complexity atop complexity atop complexity all the way down.

    But the solution is NOT to throw out a bunch of those old layers and replace them with new layers!!!

    Quoting Joel Spolsky[1]:

    "There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: It’s harder to read code than to write it. ... The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. ... When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."

    [1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...

    • DonaldFisk 2412 days ago
      I like a lot of what Joel writes, but I profoundly disagree with him on this, and I'm not alone in my dislike of accidental complexity, which I think is now an order of magnitude greater than essential complexity. So there is a "silver bullet". It just needs someone to bite it.

      The author of the article recognizes there's a problem, but is less clear on how to go about solving it. A clue is in this article by Erik Naggum: http://naggum.no/erik/complexity.html

      Dan Ingalls once wrote: "Operating System: An operating system is a collection of things that don't fit into a language. There shouldn't be one." What he meant is we should migrate the functionality of the operating system into the programming language. This is possible if there's a REPL or something similar, so no need for the shell or command line. The language should be image-based, so no need for a file system. So, a bit like Squeak, or a Lisp with a structure editor.

      There's still a gap between the processor and the language, which should be eliminated by making the processor run the language directly. This was done in Burroughs mainframes and Lisp machines.

      Further up the "stack", software such as word processors and web browsers are at present written entirely separately but have much in common and could share much of their code.

      • shalabhc 2412 days ago
        Thanks for the link to Erik's essay - it was a great read.

        I like the idea of an image based system, eliminating the need for the filesystem itself. I think the 'filesystem' and 'executable-process' ideas are so prevalent that they frame our thinking, and any new OSes tend to adopt these right away. But more interesting and powerful systems might emerge if we find a new pattern of operation and composition. Are you aware of any image based full stack systems that are in active development?

        • nailuj 2412 days ago
          I don't think it's developed very actively, but a recent effort is PharoNOS: http://pillarhub.pharocloud.com/hub/mikefilonov/pharonos
        • mafik 2407 days ago
          Android is one such system. Each app gets its own image (called a Bundle) where it can store its state. The OS manages those state bundles to offer multitasking on memory-constrained devices and to persist app state across reboots.
          • DonaldFisk 2402 days ago
            Android has a file system. It's an operating system (a Linux variant). I'm suggesting that neither a file system nor an operating system is necessary, or even desirable. Just run an interactive programming language continuously on the bare metal, with its image periodically backed up to secondary storage.
    • DashRattlesnake 2412 days ago
      > "When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."

      But, sometimes, that's exactly what you should do. It brings to mind OpenSSL after Heartbleed. I remember reading that the LibreSSL people were ripping out all kinds of stuff (like kludges and workarounds to support OpenVMS), and rightly so. You might call it "knowledge [and] collected bug fixes," but sometimes the crap is just crap.

      • gruturo 2412 days ago
        LibreSSL is a special case which doesn't fit your example. They threw away code and didn't rewrite most of it - because it was supporting useless stuff. Features like heartbeat (see Heartbleed), obsolete and insecure ciphers, tons of crap no one should ever use again but is supposed to be there for FIPS 140 or other compliance requirements which in 2017 do far, far more harm than good.
      • lou1306 2412 days ago
        > You might call it "knowledge [and] collected bug fixes," but sometimes the crap is just crap.

        Key word here is "Sometimes".

        Reviewing code means you get to find out the reason these hacks were written in the first place, and then decide whether to keep, rework or delete them.

        Starting from scratch means you get rid of the worthless crap, yes, but you also lose all the valuable crap.

      • sgift 2412 days ago
        > ripping out all kinds of stuff

        So, they weren't starting over. They forked, refactored and removed things no longer needed. Completely different thing.

      • vacri 2412 days ago
        > It brings to mind OpenSSL after Heartbleed.

        It's three years later. What general-purpose OS other than OpenBSD is using LibreSSL?

    • steinuil 2412 days ago
      > But the solution is NOT to throw out a bunch of those old layers and replace them with new layers!!!

      Neither is keeping everything as it is and pretending it's fine!

      > When you throw away code and start from scratch, you are throwing away all that knowledge.

      I disagree with Joel here. There's lots to be learned from throwing everything away and starting from scratch, and if anything those innovations could make their way into the current infrastructure, as has happened with Midori and Windows.

      • mwcampbell 2412 days ago
        Can you give an example of innovations from Midori making it into Windows?
    • makecheck 2412 days ago
      It is vital to have regular cleanup in a code base to avoid the feeling that it should "all" be scrapped. There will always be code worth keeping for all the reasons mentioned (bugs fixed, etc.) but there will always be something that should just go away.
      • Boothroid 2412 days ago
        Absolutely. I suffer when I have to look at my crummy POC code that we don't have the hours to fix.
    • GTP 2412 days ago
      He doesn't suggest throwing away the old layers just because they're old; he suggests a different approach. Anyway, that's a really good quote, I like it very much.
      • quickben 2412 days ago
        It is, but one just doesn't replace a proper B-tree filesystem with a document database. Just because the author saw a document DB and thought it was cool, without looking at why B-trees won.
    • linkmotif 2412 days ago
      Not a big Joel Spolsky guy but this is truth of truths right here.

      Related: "legacy code is code that doesn't have tests" not sure who said this but also very true IMO

      • sgift 2412 days ago
        > Related: "legacy code is code that doesn't have tests" not sure who said this but also very true IMO

        Michael Feathers in "Working with legacy code" (A book I can highly recommend)

      • u02sgb 2412 days ago
        Michael Feathers quote I think.
    • lifthrasiir 2412 days ago
      Joel's assertion applies to the situation where the old code and the new code will reach roughly the same complexity at the end (for example, this is often the case when the requirements are complex enough and cannot be changed). If you have a very good idea to greatly reduce the complexity, ignore him and go ahead.

      In other words, the point is that you always have to do the cost-benefit analysis for any such endeavor, and history tells us that rewriting is intrinsically very expensive.

    • Boothroid 2412 days ago
      It's also more fun to write it yourself! At least until you realise you've bitten off more than you can chew.
  • jcelerier 2412 days ago
    > Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps? There is no technical reason why this shouldn't be possible.

    that's absolutely possible on Linux with i3wm, for instance

    > I'd like to pipe my Skype call to a video analysis service while I'm chatting, but I can't really run a video stream through awk or sed.

    awk and sed, no, but there are many CLI tools that accept video streams through a pipe, e.g. FFmpeg. You wouldn't open your video in a GUI text editor, so why would you in a CLI text editor?

    > Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs.

    Sure they are, on linux: https://linux.die.net/man/1/wmctrl

    Fifteen years ago people were already controlling their WM through dbus: http://wiki.compiz.org/Plugins/Dbus#Combined_with_xdotool
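
    For instance, a trivial Python wrapper around wmctrl (the window title is just an example) is all it takes to script the WM:

       import subprocess

       # List all managed windows (id, desktop, host, title)
       print(subprocess.run(["wmctrl", "-l"], capture_output=True, text=True).stdout)

       # Raise/focus the first window whose title matches "Firefox"
       subprocess.run(["wmctrl", "-a", "Firefox"])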

    The thing is, no one really cares about this in practice.

    • kennywinker 2412 days ago
      Good luck piping your Skype call into ffmpeg
      • kbenson 2412 days ago
        If every window had one or more associated files in /dev which corresponded to its audio and video output, it wouldn't necessarily be that hard. You wouldn't even have to worry about actually having bits there until someone was listening, because the OS knows when someone opens a file, and could send the appropriate message to the window system to start sending the video associated with that window to the pipe as well.

        There's no reason why a desktop window application could not supply audio and video to, or receive audio and video from, ffmpeg or even a chained command that might just include ffmpeg at some step.
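
        To make that concrete (everything here is hypothetical: no such per-window /dev nodes exist today, and the frame size and rate are made up), the consuming side could be as dumb as:

           import subprocess

           WINDOW_VIDEO = "/dev/windows/1234/video"  # hypothetical per-window device exposing raw RGB frames

           # ffmpeg happily consumes raw video from a pipe if you tell it the format
           ffmpeg = subprocess.Popen(
               ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "rgb24",
                "-s", "1280x720", "-r", "30", "-i", "-", "call-recording.mp4"],
               stdin=subprocess.PIPE)

           with open(WINDOW_VIDEO, "rb") as stream:
               while chunk := stream.read(1 << 16):
                   ffmpeg.stdin.write(chunk)

           ffmpeg.stdin.close()
           ffmpeg.wait()

        Everything past the /dev node is plumbing that already exists.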

        • striking 2412 days ago
          I wonder if some sneaky LD_PRELOAD hacks could make this possible.
      • jcelerier 2411 days ago
        Dunno about Skype, but video4linux exposes devices such as /dev/video0 where you can read/write the raw video stream, so...
    • posterboy 2412 days ago
      Tabbing is not implemented by the wm, as far as I know.
      • ori_b 2412 days ago
        Tabbing is not implemented exclusively by the WM, currently -- that's correct. But there exist window managers that handle tabbing. So, it's largely a matter of removing code. Lots of code. And then fixing more code that makes assumptions that it controls the drawing of the tabs.

        And there's already the Xembed protocol for embedding windows in other windows, so it's technically possible to even move tabs from one application into another, with a coordinating dance. None of the changes that he wants really needs changes to X11 (although, as far as I know, it would be totally impossible under Wayland.)

        It just needs someone to change applications to support it. I'd be interested to see an attempt.

        • posterboy 2412 days ago
          I fail to see the use of web tabs in the file explorer. What's the use case?
          • tracker1 2411 days ago
            Considering how easy it is to switch tabs in a browser compared to a buried filesystem window, or multiple windows that are also buried, I can definitely see the use. I'd like to be able to have the file browser and terminal share a window via tabs as well.

            I'm not sure we'll see it though, as for the most part the applications are developed fairly separately, and we almost certainly won't see it working well in an open operating system for even most of the apps that people use. Short of maybe Google releasing a filesystem browser extension and terminal extension, which may be entirely possible.

          • jcelerier 2411 days ago
            it's very useful for keeping folders pinned without them taking up space on your desktop. Just like for terminals, etc.
            • posterboy 2410 days ago
              Folders? We were talking about web-content tabs.

              Sure, many file browsers (Thunar, PCManFM, ...) can tab file-browser views, but I don't see the need for web tabs in a non-web-browser. Firefox likewise can show folder contents for file:///, but it's not feature-complete compared to a file browser.

              • jcelerier 2410 days ago
                Uh? I'm re-reading the whole parent conversation chain and can't see any mention of "web".
      • tadfisher 2412 days ago
        Fluxbox has had WM-level tabbed window groups for ages now: http://fluxbox.org/features/
      • d4l3k 2412 days ago
        As mentioned i3wm does.

        https://i3wm.org/docs/modes.png

        • posterboy 2412 days ago
          That's simply grouping windows.
          • jcelerier 2411 days ago
            Well, that's what tabs are. What's the difference from a Google Chrome tab?
            • posterboy 2410 days ago
              those are associated with ... well, the chrome (the menu border, searchbar, etc.)
  • spankalee 2412 days ago
    This sounds a lot like Fuchsia, which is all IPC-based, has a syncable object store[1], a physically-based renderer[2], and a UI organized into cards and stories[3], where a story is "a set of apps and/or modules that work together for the user to achieve a goal", and can be clustered[4] and arranged in different ways[5].

    [1]: https://fuchsia.googlesource.com/ledger/

    [2]: https://fuchsia.googlesource.com/escher/

    [3]: https://arstechnica.com/gadgets/2017/05/googles-fuchsia-smar...

    [4]: https://fuchsia.googlesource.com/sysui/#important-armadillo-...

    [5]: https://fuchsia.googlesource.com/mondrian/

    • enugu 2412 days ago
      Looking at [1] and the key-value database described there, it would need some key coordination mechanism to make use of the system database. For instance, a way for an app to say that a document it stores is either directly of a given type or implements an interface (e.g. an email or music data), so that other apps can use this document. So the type field would refer to a standardized type, like a UUID associated with some kind of data, or a URI (like in RDF). Also, it could have some mechanism for other types to implement the interface or extend existing types, and for users to create new types, just like we can register URLs today.

      Having a database with standardized interfaces for documents replace the filesystem is a really important feature mentioned in the article. It would allow the development of many useful apps like the iTunes or email examples. Also, this is not specific to any OS; it could be standardized independently and implemented on current OSes by having some extension which stores metadata along with a file.
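
      A toy sketch of that coordination idea, assuming a plain SQLite store and made-up type/interface URIs:

          import sqlite3

          # Documents carry a type URI; a separate table records which shared
          # interfaces a type implements. All of the URIs here are invented.
          db = sqlite3.connect(":memory:")
          db.executescript("""
              CREATE TABLE documents (id INTEGER PRIMARY KEY, type TEXT, body TEXT);
              CREATE TABLE implements (type TEXT, interface TEXT);
          """)
          db.execute("INSERT INTO implements VALUES ('app.example/track', 'org.example/playable')")
          db.execute("INSERT INTO documents (type, body) VALUES ('app.example/track', 'song.flac')")

          # Any app can ask for "everything playable" without knowing who stored it.
          rows = db.execute("""
              SELECT d.body FROM documents d
              JOIN implements i ON i.type = d.type
              WHERE i.interface = 'org.example/playable'
          """).fetchall()
          print(rows)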

    • joshmarinacci 2412 days ago
      OP here: I agree. Fuchsia is very exciting. Google is one of the few companies with resources to actually build a new OS from scratch, and deal with backwards compatibility.
    • lsh 2412 days ago
      I don't trust Google. I couldn't use fuchsia on principle.

      Ignoring the reasons why I don't trust Google, being able to trust your tools, especially your desktop, is the most important thing for me. I would love to have my emails delivered to a global document store so many smaller apps could take advantage of them, but only so long as I could guarantee there is no special 'google play services' app needing to run in the background doing who-knows-what with root.

      • rocho 2412 days ago
        Just out of curiosity, what smartphone and OS do you use, if I may ask? iOS and Windows Phone are closed source, which I'd argue is worse than using Google services. Android seems pretty much unusable without Google integration. So...?
        • Synaesthesia 2412 days ago
          So we are forced to use a major manufacturer for our smartphones. Does that mean we can’t criticize them? They’re not on our side, they’re out to make a profit. I don’t trust them either, and I use a smartphone.

          I do think it’s important that we create open hardware and open software. I’m realizing Richard Stallman was right all along.

        • lsh 2412 days ago
          GNU/Linux (Arch), and a rooted moto-g :) my desktop is notion: http://notion.sourceforge.net/

          I took myself out of the desktop race a long, long time ago. The only thing that has really affected me recently was MATE switching entirely to GTK3; my text editor now does all sorts of things the maintainer of the editor can't change, like smooth-scrolling when using the find dialog.

          I really want to get behind this effort for an improved desktop, even if it means breaking everything. But I have to be able to trust each of the components.

  • zug_zug 2412 days ago
    I really don't understand the negativity here. I sense a very dismissive tone, but most of the complaints are implementation details, or that this has been tried before (so what?).

    I think anybody who really thinks about it would have to agree modern OSes are a disgusting mess.

    -- Why does an 8 core mac have moments that it is so busy I can't even click anything but only see a pinwheel? It's not the hardware. No app should have the capability, even if it tried, to slow down the OS/UI (without root access).

    -- Yes, it should be a database design, with permissions.

    -- Yes, by making it a database design, all applications get the ability to share their content (i.e. make files) in a performant searchable way.

    -- Yes, permissions is a huge issue. If every app were confined to a single directory (docker-like) then backing up an app, deleting an app, or terminating an app would be a million times easier. Our OSes will never be secure until they're rebuilt from the ground up. (Right now Windows lets apps store garbage in the 'registry' and Linux stores your apps' data strewn throughout /var/etc, /var/log, /app/init, .... These should all be materialized views, i.e. sym-links.)

    -- Mac Finder is cancer. If the OS were modularizable it'd be trivial for me, a software engineer, to drop in a replacement (like you can with car parts).

    -- By having an event-driven architecture, this gives me exact tracking of when events happened. I'd like a full record of every time a certain file changes; if file changes can't happen without an event, and all events are indexed in the DB, then I have perfect auditability (see the sketch right after this list).

    -- I could also assign permission events (throttle browser CPU to 20% max, pipe all audio from spotify to removeAds.exe, pipe all UI notifications from javaUpdater to /dev/null)
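
    Here's a rough sketch of the auditability idea above, assuming SQLite as the event store and a single write path (the schema and file names are invented):

        import sqlite3, time

        # Toy event-sourced write path: every mutation goes through one function
        # that also appends to an indexed event log, so "when did this file
        # change?" becomes a query instead of guesswork.
        db = sqlite3.connect("events.db")
        db.execute("CREATE TABLE IF NOT EXISTS events (ts REAL, path TEXT, action TEXT)")
        db.execute("CREATE INDEX IF NOT EXISTS by_path ON events(path)")

        def write_file(path, data):
            with open(path, "wb") as f:
                f.write(data)
            db.execute("INSERT INTO events VALUES (?, ?, 'write')", (time.time(), path))
            db.commit()

        write_file("report.txt", b"hello")
        print(db.execute("SELECT * FROM events WHERE path = 'report.txt'").fetchall())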

    I understand the "Well, who's gonna use it?" question, but it's circular reasoning. "Let's not get excited about this, because nobody will use it, because it won't catch on, because nobody got excited about it." If you get an industry giant behind it (Linus, Google, Carmack) you can absolutely reinvent a better wheel (e.g. Git, Chrome) and displace a huge market share in months.

    • pjc50 2412 days ago
      > -- Why does an 8 core mac have moments that it is so busy I can't even click anything but only see a pinwheel? It's not the hardware. No app should have the capability, even if it tried, to slow down the OS/UI (without root access).

      Back in 1999 I saw a demo of Nemesis at the Cambridge Computer Lab: a multithreaded OS that was designed to resist this kind of thing. Their demo was opening up various applications with a video playing in the corner and pointing out that it never missed a frame.

      Even back then I understood that this was never going to make it to the mainstream.

      > If the OS were modularizable it'd be trivial for me, a software engineer, to drop-in a replacement

      You can do shell replacements and shell extensions on Windows. You can replace whatever you want on Linux. The non-customisability of macOS is a deliberate Jobsian choice.

      > event-driven architecture

      Windows is actually rather good at this.

      > all applications get the ability to share their content

      > every app were confined to a single directory

      Solving this conflict is extremely hard.

      • mnw21cam 2412 days ago
        > Back in 1999 I saw a demo of Nemesis at the Cambridge Computer Lab: a multithreaded OS that was designed to resist this kind of thing. Their demo was opening up various applications with a video playing in the corner and pointing out that it never missed a frame.

        Yes, but Nemesis was a proper real time OS. The video-playing application had asked the OS for a guarantee that it would get X MB/s of disc bandwidth, and that it would have Y ms of CPU time every Z ms. The scheduler then gave that application absolute priority over everything else running while inside those limits, in order to make that happen.

        This isn't hard. However, it conflicts with the notion of fair access to resources for all. The OS can only give a real-time guarantee to a limited number of processes, and it cannot rescind that guarantee. Why should one application get favourable access to resources just because it was the first one to reserve them all? How does the OS tell a genuine video-playing application from an application that wants to slow down the OS/UI?

        This is why applications need special privileges (i.e. usually root) in order to request real-time scheduling on Linux. It's complicated.
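
        For what it's worth, the request itself is tiny on Linux; it's the privilege check that does the gatekeeping. A minimal sketch (the priority value is arbitrary):

            import os

            # Asking Linux for a real-time (SCHED_FIFO) slice for the current
            # process. On a stock system this raises PermissionError unless you
            # hold root / CAP_SYS_NICE.
            try:
                os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
                print("got a real-time scheduling class")
            except PermissionError:
                print("need elevated privileges to reserve real-time scheduling")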

        Nemesis also did some nifty stuff with the GUI - individual applications were given responsibility to manage a set of pixels on the screen, and would update those pixels directly. This was specifically to avoid the problems inherent in the X-windows approach of high-priority tasks being funnelled through a lower-priority X server.

      • redial 2412 days ago
        > Even back then I understood that this was never going to make it to the mainstream.

        Back in 1998[1][2] Apple demoed the then-beta versions of OS X doing exactly that: multiple video streams playing without missing frames, being resized into the dock while still playing [3] (a feature that is not present any more), and even resisting hijacking by a bomb app. It all worked back then and it still does today.

        > Non-customisability of MacOS is a Jobsian deliberate choice.

        Also, there are multiple Finder replacement apps for the Mac; the thing is, nobody cares, because the Finder is good enough for most people.

        [1] https://youtu.be/pz3J-WC0jp0?t=258

        [2] https://youtu.be/GNYIYx7QRdc?t=4188

        [3] I can't find the link but it is in one of Jobs's keynotes from the 2000s

      • digi_owl 2412 days ago
        I fear that on Linux the ability to do drop-in replacements is being heavily curtailed by DEs chasing some kind of holy-grail UX...
      • JoBrad 2412 days ago
        Containerized apps are a step in the right direction. I like the concept that Mac/iOS uses for app installation, although I've seen users lose an app they've just run way too often.
    • dleslie 2412 days ago
      > Why does an 8 core mac have moments that it is so busy I can't even click anything but only see a pinwheel?

      Take a moment to consider your expectations of that operating system, and the expectations upon software used thirty years ago.

      Thirty years ago it was uncommon for users to expect more than one application to be operating at once, and so scheduling resource use wasn't an issue. Now, your PC has all manner of processes doing work while you go about your business; applications are polling for updates from remote servers, media players are piping streams to a central mixer and attempting to play them simultaneously and seamlessly, your display is drawn with all manner of tricks to improve visual appeal and the _feeling_ of responsiveness, and your browser is doing all of this over again in the dozens of tabs you have open simultaneously.

      So once in a while a resource lock is held for a little too long, or an interrupt just happens to cause a cascade of locks that block your system for a period, or you stumble across a corner-case that wasn't accounted for by the developers.

      Frankly, it's nothing short of a miracle that PCs are able to operate as well as they do, despite our best efforts to overload them with unnecessary work.

      And yes, I too hate Electron, but in all my decades of working on PCs I can't really recall a time that was as... Actually, BeOS was pretty f'n great.

      • GuiA 2412 days ago
        Thirty years ago is cheating.

        How about 15 years ago? I was doing everything you describe as part of my daily computer use (a few browser windows open, a text editor running, a multimedia player, an IM and mail client running, etc) and had the same performance and usability frustrations.

        The main difference is that if I render a video now, I'll do it in 4K instead of 640x480, and that if I download a game, it's 50GB instead of 500MB. But scaling in that direction is expected; my machine isn't any more stable, nor has it gained anything along the lines of the examples described in the article.

        If I showed my mobile phone to 2002 me, they'd be extremely impressed. If I showed the form factor and specs of my laptop, they'd be extremely impressed. But if I showed them how I use my desktop OS? The only cool thing would be Dropbox, I think.

        • dleslie 2412 days ago
          We've hit a local maximum with the current desktop metaphor; moving in any other direction means losing value until the next peak can be found.

          This is happening in the mobile space, as you noted; it will happen in AR next.

        • hossbeast 2412 days ago
          That is a very interesting thought exercise. 2002 me might also ask, "where's the screensaver?"
      • pjc50 2412 days ago
        I don't think responsiveness of the system is an unreasonable expectation. App A should not be able to perform a denial-of-service on app B simply by being badly programmed.
        • digi_owl 2412 days ago
          I dunno.

          Deep down the computer is still linear.

          Yes we do all kinds of tricks with context switching to make it seem like it is doing a whole bunch of things at the same time.

          But if we had visualized the activity in human terms, it would be an assembly line that is constantly switching between cars, trucks, scooters and whatsnot at a simply astonishing rate.

          • willglynn 2412 days ago
            That hasn't been true in many many years.

            Multicore processors literally run several things at the same time. Even a single core can literally run several instructions at the same time thanks to instruction level parallelism, in addition to reordering instructions, predicting and speculatively executing branches, etc. The processor also has a cache subsystem which is interacting with the memory subsystem on behalf of the code -- but this all works in parallel with the code. Memory operations are executed as asynchronously as possible in order to maximize performance.

            What's more, outside a processor, what we call "a computer" is actually a collection of many interconnected systems all working in parallel. The northbridge and southbridge chips coordinate with the instructions running on the CPU, but they're not synchronously controlled by the CPU, which means they are legitimately doing other things at the same time as the CPU.

            When you read something off disk, your CPU sends a command to an IO controller, which sends a command to a controller in the disk, which sends a command to a motor or to some flash chips. Eventually the disk controller gets the data you requested and the process goes back the other way. Disks have contained independent coprocessors for ages; "IDE" stands for "Integrated Drive Electronics", and logical block addressing (which requires on-disk smarts) has been standard since about 1996.

            Some part of your graphics card is always busy outputting a video signal at (say) 60 FPS, even while some other part of your graphics card is working through a command queue to draw the next frame. Audio, Ethernet, wifi, Bluetooth, all likewise happen simultaneously, with their own specialized processors, configuration registers, and signaling mechanisms.

            Computers do lots of things simultaneously. It's not an illusion caused by rapid context switching. Frankly, the illusion is that anything in the computer is linear :-)

            • willglynn 2412 days ago
              Following up on that last thought, about how linearity is the illusion: this talk explains in detail the Herculean effort expended by hardware and language designers to create something comprehensible on top of the complexities of today's memory subsystems. It's three hours and focused on how it affects C++, but IMO it's well worth the time and accessible to anyone who has some idea of what happens at the machine code level.

              https://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-20... https://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-20...

              • pjc50 2412 days ago
                Absolutely. You can get systems which don't perform the illusion for you, like e.g. the Tilera manycore processors, and they've not taken off because they're a pain to program.
    • cwyers 2412 days ago
      > I think anybody who really thinks about it would have to agree modern OSes are a disgusting mess.

      And if you think about it just a bit longer, you conclude that if all modern operating systems are a disgusting mess, then being a disgusting mess optimizes for survival somehow. And until you figure that piece out, you're never going to design something that's viable.

      • andrewflnr 2412 days ago
        Haven't we figured that out, though? They're disgusting messes because they got to where they are through incremental additions, without getting rid of the vestigial parts. That's no reason to believe that being a disgusting mess is literally a requirement for survival.
      • madez 2412 days ago
        You are misusing the idea of evolution. Evolution requires a sufficient number of generations and individuals, and I think neither is given in this context.
    • rocho 2412 days ago
      I cannot speak about macOS, but I can about Windows. I've used Windows since I was 5 years old, for a bit more than 10 years. I've used Windows XP, Vista and 7. Then I've switched to Linux.

      Windows is frankly terrible. I've also tried Windows 8 and I have Windows 10 in a virtual machine (with plenty of resources). It's true that it has too many layers, too many services, too much in the way of doing everything. It cannot run for 15 minutes without some service crashing or some window becoming unresponsive.

      Truth be told, the same thing happened, to a lesser degree, when I used Ubuntu (first distribution I tried). The experience was more pleasant overall, but the OS felt still too bloated.

      My journey among Linux distributions led me to Arch Linux. I've been using it for a few years now, and all I can say is that it's been exceptional. 99% of the time the package upgrades just work (and don't take 2 hours like on Ubuntu), I've yet to experience an interface freeze, and I'm extremely productive with the workflow I came up with. My environment is extremely lean: the first thing I did was to replace desktop environments, which just slow you down, with window managers (at the moment I'm using bspwm and it's the best I've tried so far, even better than i3wm). Granted, the downside of this is that you have to be somewhat well-versed in the art of Unix and Linux, but I would say that in most cases it's just one more skill added to the skill set.

      All of this to say that, in my opinion, un-bloated OSes are already here. The messiest component on my system is undoubtedly the kernel, but what can you do? Surely you cannot expect to have a kernel tailored to the computers released in the last year.

      • wutbrodo 2411 days ago
        Agreed. It's always bizarre to me that people make the choice to use bloated, crappy OSes and then complain about the very existence of these options as though they're compelled to use them. I've used Linux as my primary system since the first year of college, and that's taken me all the way from a snazzy, effects-laden setup to my current stripped down Openbox setup with no panels, controlled largely by keyboard shortcuts.

        Not that my system is perfect, obviously, but bloat on your system that's not due to something you're explicitly running is a concern I just can't relate to at all, and frankly, I don't know why people put up with it.

    • Synaesthesia 2412 days ago
      Does your 8-core Mac run on an SSD? I/O performance and consistency are way more important for responsiveness.
    • vacri 2412 days ago
      > but most of the complaints are implementation details

      ... because the answer to a lot of the posed questions are those implementation details.

      It's a bit like saying "There should be peace in the middle east. The details of the politics there are largely irrelevant. They should just make peace there, then it would be better".

  • noen 2412 days ago
    As a current developer, former 10 year UX designer, and developer before that, this kind of article irks me to no end.

    He contradicts his core assertion (OS models are too complex and layered) with his first "new" feature.

    Nearly everything on this manifesto has been done before, done well, and many of his gripes are completely possible in most modern OS's. The article just ignores all of the corner cases and conflicts and trade-offs.

    Truly understanding the technology is required to develop useful and usable interfaces.

    I've witnessed hundreds of times as designers hand off beautiful patterns and workflows that can't ever be implemented as designed. The devil is in the details.

    One of the reasons Windows succeeded for so long is that it enabled people to do a common set of activities with minimal training and maximizing reuse of as few common patterns as possible.

    Having worked in and on Visual Studio, it's a great example of what happens when you build an interface that allows the user to do anything, and the developer to add anything. Immensely powerful, but 95% of the functionality is rarely if ever used, training is difficult because of the breadth and pace of change, and discovery is impossible.

    • pavlov 2412 days ago
      One of the reasons Windows succeeded for so long is that it enabled people to do a common set of activities with minimal training and maximizing reuse of as few common patterns as possible.

      And ironically, one of the reasons why Windows was successful in developing these patterns for office applications is that much of the work was done by IBM.

      The UI in Windows 3 was functionally almost identical to the Presentation Manager interface that had been designed for the IBM-Microsoft collaboration OS/2. The design implemented an IBM standard called CUA [1].

      CUA is not an exciting UI, but it did a good job of consolidating existing desktop software patterns under a consistent set of commands and interactions. The focus on enabling keyboard interaction was crucial for business apps, and a strong contrast to the mouse-centric Mac (which didn't even have arrow keys originally).

      The kind of extensively data-driven UI system development that CUA represented is totally out of fashion nowadays, though. Making office workers' lives easier is terribly boring compared to designing quirky button animations and laying out text in giant type.

      [1] https://en.m.wikipedia.org/wiki/IBM_Common_User_Access

      • BatFastard 2412 days ago
        Good observations. The truth is, it's really hard to develop user interfaces that are easy to use and powerful at the same time. I have been working on one in my passion project for years, and the balance between presenting just enough information with a clear path to more, and filling the screen to the point of overload, is delicate.
    • Animats 2412 days ago
      I'd still like to have QNX-type messaging. The UNIX/Linux world started out with no interprocess communication, and ended up with a large number of incompatible ways to add it. The Windows world started out with interprocess communication with everybody in the same address space, and gradually tightened up. QNX started with messaging as the main OS primitive, and everything on QNX uses it.

      The key to efficient IPC is that the scheduler and the interprocess communications system have to be tightly coupled. Otherwise you have requests going to the end of the line for CPU time on each call, too many trips through the scheduler, and work switching from one CPU to another and causing heavy cache misses. QNX got this right.

      (Then they were bought by a car audio company, Harman, and it was all downhill from there.)

      QNX messaging isn't a "bus" system, and it has terrible discovery. Once communications are set up, it's great, but finding an endpoint to call is not well supported. The designers of QNX were thinking in terms of dedicated high-reliability real-time systems. It needs some kind of endpoint directory service. That doesn't need to be in the kernel, of course.

      QNX is a microkernel, with about 60KB (not MB) of code in the kernel, and it offers a full POSIX interface. (There used to be a whole desktop GUI for it, Photon, good enough to run early versions of Firefox, but Blackberry blew off the real-time market and dropped that.) File systems, networking, and drivers are all in user processes, and optional. L4 is more minimal, probably too minimal - people usually run Linux on top of it, which doesn't result in a simpler system.

      • wogna 2412 days ago
        Small additions here: the networking components of QNX moved to kernel space quite some time ago; I don't even know if io-net is still supported. As far as I know they've reused the NetBSD stack for performance reasons. Also, those 60KB give you a bare-bones system that is far from what people expect a POSIX system to be; you'd have to add plenty of additional processes to get there.

        I still have a soft spot for QNX though; I hope they'll survive RIM.

        • Animats 2411 days ago
          Aw, they put networking in the kernel? Dan Dodge would not have approved.

          60KB was just the kernel, not the additional processes that run in user space. The great thing about such a tiny kernel was that it could be fully debugged. The kernel didn't change much from year to year back in QNX 6.

          Many embedded systems put the kernel in a boot ROM, so the system came up initially in QNX, without running some boot loader first. You built a boot image with the kernel, the essential "proc" process, and whatever user space drivers you absolutely had to have to get started.

          QNX went open source for a while, starting September 2007, and there had been a free version for years. After the RIM acquisition, they went closed source overnight and took all the sources offline before people could copy them. That was the moment when they totally lost the support of the open source community.

    • tomc1985 2412 days ago
      There's a tyranny of designers, they must be stopped. Their "beautiful" designs have infected everything and now everything is all super-low data-density, full-screen interstitials, and hero units!

        > Having worked in and on Visual Studio, it's a great example of what happens when you build an interface that allows the user to do anything, and the developer to add anything. Immensely powerful, but 95% of the functionality is rarely if ever used, training is difficult because of the breadth and pace of change, and discovery is impossible.
      
      I feel like the only power user in the world who liked this design. Yeah VS is big and scary but I disagree with your comments on discoverability. I learned to program on VB6 and then early VS.NET and I discovered features in either just fine. There was a standard protocol for getting to know big hairy beasts like VS or 3DS or FLStudio: set aside a week to play around, click everything in the menus to see what happens, and then come up with a goal and figure out how to achieve it. VS was dense but never stood in my way in this regard. (Though I could say the complete opposite about the documentation, with its dense, verbose style and unique, Microsoft vocabulary)
      • appleflaxen 2412 days ago
        I agree with you; I didn't understand the VS example at all.
    • tylerscott 2412 days ago
      > Truly understanding the technology is required to develop useful and usable interfaces.

      +100. This is something I have advised any designer who would listen. You must have at least a basic understanding of the technology in order to understand the set of affordances you use to design your flows.

      • pjmlp 2412 days ago
        Which is why I always make a point on Web projects to get HTML/CSS designs instead of Photoshop or PowerPoints.
        • _asummers 2412 days ago
          Adobe used to have a product they inherited from Macromedia called Fireworks that I enjoyed getting designs in, as a developer. It no longer exists, to my knowledge, but it spat out CSS and basic HTML, which I liked.
          • iamphilrae 2412 days ago
            Still exists (although discontinued). Open the Adobe CC dialogue, go to the Apps part, tick "View older versions" (or something like that), and you should find its CS6 version.
          • SamuelPB 2412 days ago
            Check out Sketch and a few extensions.
    • titanix2 2412 days ago
      I disagree with your statement on VS discoverability. It is quite the opposite, actually. I learned to use VS 2008 (my first IDE) on my own with very little googling after a year or so of computer science classes. On the other hand, using Eclipse or NetBeans for some class always ended in coding with Vim because the UI and framework integration were non-obvious.

      Finally, one thing that I particularly dislike about VS Code is that this whole ease of discoverability was thrown out the window, and almost no complex task can be completed without looking in the documentation.

      • reitanqild 2412 days ago
        VS Code is REALLY discoverable IMO.

        Ctrl + Shift + P (on Windows, might differ depending on OS) brings up command palette or whatever it is called.

        Start typing.

        When it has narrowed down to the command you need, make a mental note of the shortcut next to it, then hit esc and use it.

    • Qworg 2412 days ago
      That 5% is critical though and part of the reason that Office is so hard to dislodge. For that 5% of users, that feature is critical. Stack up enough features and you can be unassailable.
      • tomc1985 2412 days ago
        I wish that was the mentality.

        "Oh, but why should we allocate resources to something the majority of users won't use?" - everyone on HN

        • tormeh 2412 days ago
          Well, that does make the common stuff excellent. And I would guess that the amount of people that have a weird must-have feature (that isn't just a result of misunderstanding the software) is pretty low.
          • tomc1985 2411 days ago
            It makes the common stuff mediocre. Every product blends in to every other product. Average software for average minds.
    • quickben 2412 days ago
      Like any set of ideas, some of his ideas are brilliant, some are utterly stupid.

      No need to get riled up, just take the good and ignore the rest :)

  • avaer 2412 days ago
    > I suspect the root cause is simply that building a successful operating system is hard.

    It's hard but not that hard; tons of experimental OS-like objects have been made that meet these goals. Nobody uses them.

    What's hard is getting everyone on board enough for critical inertia to drive the project. Otherwise it succumbs to the chicken-and-egg problem, and we continue to use what we have because it's "good enough" for what we're trying to do right now.

    I suspect the next better OS will come out of some big company that has the clout and marketing to encourage adoption.

    • XorNot 2412 days ago
      What's hard is making your backwards compatibility story sane. You need to somehow make your new system provide some obvious advantages even to ported apps, while still plausibly allowing them to work with minimal porting effort.

      But I think this "reinvent the world" concept has a deeper flaw - in all the discussion I didn't see any mention of how you make it performant despite that being an identified problem. If everything's message passing...how much memcpy'ing is going on in the background? What does it mean to pipe a 4gb video file to something if it's going to go onto a message bus as ... 4kB chunks? 1 mb?

      Remember this is a proposal to rebuild the entire personal computing experience, so "good enough" isn't good enough - it needs to absolutely support a lot of use cases which is why we have so many other mechanisms. And it also (due to the porting requirement) should have a sensible way to degrade back to supporting old interfaces.

      Microsoft owns the desktop partly because they absolutely were dedicated to backwards compatibility. You want to make progress - you need to have a plan for the same.

      • SilasX 2412 days ago
        Yep: "The reason God could finish the earth in six days is that He didn't have to worry about backward compatibility."
      • pjmlp 2412 days ago
        Yep, and given the disaster of the WinDev vs DevTools political differences, they are still putting out fires.

        If UWP had been there in Windows 8, with something like .NET Standard already in place, the app story would be much different.

    • Tepix 2412 days ago
      Exactly. You need to have the most critical apps running on your OS (development IDE, modern web browser and mail mostly). That's going to be a significant effort especially if those apps need to be rewritten as modules to take advantage of the paradigms the new OS offers.
    • Damogran6 2412 days ago
      Good enough, plus it runs App 'X' that I need. Be that Word, or Mozilla, or Facebook.app

      So good HTML 5.0 support is key, but there are a lot of layers between that and bare metal.

    • vacri 2412 days ago
      The author doesn't want "an OS", but "an OS that operates like one out of a sci-fi movie, tracking and interpreting my actions and responding to natural language". Toy OS projects aren't this.
  • dcow 2412 days ago
    Android already tried things like a universal message bus and a module-based architecture and while nice it doesn't quite live up to the promise for two reasons:

    1. Application devs aren't trained to architect new software. They will port old shitty software patterns from familiar systems because there's no time to sit down and rewrite photoshop for Android. It's sad but true.

    2. People abuse the hell out of it. Give someone a nice thing and someone else will ruin it, whether they're trying to or not. A universal message bus has security and performance implications. Maybe if Android were a desktop OS not bound by limited resources it wouldn't have pulled out all the useful intents and neutered services, but then again the author's point is that we should remove these complex layers, and clearly having them was too complex/powerful/hungry for Android.

    I do think there's a point to be made that we're very mouse and keyboard centric at the primitive IO level and in UI design. I always wondered what the "command line" would look like if it was more complex than 128 ascii characters in a 1 dimensional array. But it probably wouldn't be as intuitive for humans to interface with unless you could speak and gesture to it as the author suggests.

  • nwah1 2412 days ago
    I agree with a lot of the critics in the comments, but I will say that the author has brought to my attention a number of features that I'm now kind of upset that I don't have.

    I always thought LED keyboards were stupid because they are useless, but if they could map to hotkeys in video players and such, that could be very useful, assuming you can turn off the LEDs.

    His idea for centralized application configs and keybindings isn't bad if we could standardize on something like TOML. The Options Framework for WordPress plugins is an example of this kind of thing, and it does help. It won't be possible to get all the semantics agreed upon, of course, but maybe 80% is enough.
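
    Something like this hypothetical shared keybindings file, readable by any app (just a sketch; all section and key names are invented, and it uses Python 3.11's stdlib tomllib purely for illustration):

        import tomllib  # stdlib since Python 3.11

        # A hypothetical shared keybindings/config file that any app could read
        # instead of inventing its own preferences format. All names invented.
        config = tomllib.loads("""
        [keybindings]
        play-pause = "Space"
        next-track = "Ctrl+Right"

        [appearance]
        theme = "dark"
        """)
        print(config["keybindings"]["play-pause"])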

    Resurrecting WinFS isn't so important, and I feel like there'd be no way to get everyone to agree on a single database unless every app were developed by one team. I actually prefer heterogeneity in the software ecosystem, to promote competition. We mainly need proper journalling filesystems with all the modern features. I liked the vision of Lennart Poettering in his blog post about stateless systems.

    The structured command line linked to a unified message bus, allowing for simple task automation sounds really neat, but has a similar problem as WinFS. But I don't object to either, if you can pull it off.

    Having a homogenous base system with generic apps that all work in this way, with custom apps built by other teams is probably the compromise solution and the way things have trended anyways. As long as the base system doesn't force the semantics on the developers, it is fine.

    • joshmarinacci 2412 days ago
      "I liked the vision of Lennart Poettering in his blog post about stateless systems."

      Do you have a link to that?

    • andrewflnr 2412 days ago
      Why settle for TOML in your config? It should be a database.
      • diegof79 2412 days ago
        The Windows registry is a database. Introduced in Windows 3.1 to handle COM classes, and later extended as the preferred config mechanism over .ini files.

        If you are a Windows user you'll notice the problems that it introduces in terms of security and maintenance. Files are much better in both respects.

        To me the underlying issue is not centralization into a single database, but the usability of advanced configuration. Every OS has had multiple attempts to solve that problem, which ended in more fragmentation for end users (e.g. macOS plists / the registry / rc files / etc.).

      • quazeekotl 2412 days ago
        a filesystem is a database
  • ghinda 2412 days ago
    You have most of these, or at least very similar versions, in Plasma/KDE today:

    > Document Database

    This is what Akonadi was when it came out for 4.x. Nepomuk was the semantic search framework, so you could rate/tag/comment on files and search by them. They had some performance problems and were not very well received.

    Nepomuk has been superseded by Baloo, so you can still tag/rate/comment files now.

    Most KDE apps also use KIO slaves: https://www.maketecheasier.com/quick-easy-guide-to-kde-kio-s...

    > System Side Semantic Keybindings

    > Windows

    Plasma 4 used to have compositor-powered tabs for any apps. Can't say if it will be coming back to Plasma 5. Automatic app-specific colors (and other rules) are possible now.

    > Smart copy and paste

    The clipboard plasmoid in the system tray has multiple items, automatic actions for what to do with different types of content and can be pinned, to remain visible.

    > Working Sets

    These are very similar to how Activities work. Don't seem to be very popular.

    • joshmarinacci 2412 days ago
      Those KIO slaves are really interesting. I've never seen those before. Thanks!
  • diegof79 2412 days ago
    What the author wants is something like Squeak. The idea behind Smalltalk wasn't to create a programming language, but to realize the Dynabook (google for Alan Kay's essay "The Early History of Smalltalk").

    While I agree with the author that more innovation is needed on the desktop, I think that the essay is very ill-informed.

    For example, Squeak can be seen as an OS with very few layers: everything is an object, and syscalls are primitives. As a user you can play with all the layers, and rearrange the UI as you want.

    So why didn't the idea take off? I don't know exactly (but I have my hypotheses). There are many factors to balance, and those many factors are what make design hard.

    One of those factors is that people tend to misjudge where innovation is most needed. A good example is what the author lists as his priorities: none of the items addresses fundamental problems that computer users face today (from my perspective, of course).

  • antoineMoPa 2412 days ago
    I appreciate the article for its coverage of many OSes (including BeOS, wow, I should try that). What about package management, though? Package management really defines the way you live under your flavor of Linux, and there is a lot of room for improvement in current package managers (like decentralizing them, for example).

    Also:

    > I know I said we would get rid of the commandline before, but I take that back. I really like the commandline as an interface sometimes, it's the pure text nature that bothers me. Instead of chaining CLI apps together with text streams we need something richer [...]

    I can't agree with that, it is the plain text nature of the command line that makes it so useful and simple once you know a basic set of commands (ls,cd,find,sed,grep + whatever your specific task needs). Plain text is easy to understand and manipulate to perform whatever task you need to do. The moment you learn to chain commands and save them to a script for future use, the sky is the limit. I do agree with using voice to chain commands, but I would not complain about the plain text nature and try to bring buttons or other forms of unneeded complexity to command-line.

    • taktoa 2412 days ago
      Nix(OS) is the future of package/configuration management IMO. It'd be such a shame if someone built a new OS without learning from NixOS.
    • m_eiman 2412 days ago
      If you want to try BeOS, try Haiku OS instead - it’s an open source clone that’s easier to run on modern machines.

      https://www.haiku-os.org/

  • lake99 2412 days ago
    > Traditional filesystems are hierarchical, slow to search, and don't natively store all of the metadata we need

    I don't know what he means by "traditional", but Linux native filesystems can store all the metadata you'd want.
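
    Extended attributes are one such mechanism; a minimal Linux-only sketch (the attribute names are invented, and the filesystem needs xattr support, which ext4/XFS/Btrfs enable by default):

        import os

        # Extended attributes: arbitrary metadata attached to a file with no
        # sidecar database. The "user." namespace prefix is required for
        # unprivileged processes.
        path = "notes.txt"
        open(path, "a").close()
        os.setxattr(path, "user.rating", b"5")
        os.setxattr(path, "user.tags", b"ideas,os")
        print(os.getxattr(path, "user.tags"))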

    > Why can't I have a file in two places at once on my filesystem?

    POSIX compatible filesystems have supported that for a long time already.

    It seems to me that all the things he wants are achievable through Plan9 with its existing API. The only thing missing is the ton of elbow grease to build such apps.

    • XorNot 2412 days ago
      There's also a reason no-one uses hard links: when you edit a file, you can't tell where else you might be editing it.
      • pjmlp 2412 days ago
        Worse, if you ask around, most people are unsure what happens to the original file when you delete a hard link.
        • contras1970 2412 days ago
          There is no "original file": hard links are just synonymous names for a single blob. rm / unlink essentially just decrement a reference counter, and the storage gets freed when the counter drops to zero.
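
          A quick way to see the reference counting in action (any POSIX filesystem):

              import os

              # Two names, one inode: linking bumps the link count, unlinking one
              # name just decrements it, and the data survives until it hits zero.
              with open("a.txt", "w") as f:
                  f.write("hello")
              os.link("a.txt", "b.txt")
              print(os.stat("a.txt").st_nlink)   # 2
              os.unlink("a.txt")
              print(os.stat("b.txt").st_nlink)   # 1; content still readable via b.txt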
          • pjmlp 2412 days ago
            I know that, most users don't.
      • ahy1 2412 days ago
        Actually, everyone uses hard links. They just don't use multiple hard links to the same file.
  • hackermailman 2412 days ago
    This guy wants GuixSD for 60% of his feature requests, like isolated apps, version control, snapshots, ease of configuration, and the ability to abstract all of it away, and Hurd for his multi-threaded ambitions, modularity, ability to do things like mount a database in a home directory to use as a fileserver, and message passing. This is slowly happening already https://fosdem.org/2017/schedule/event/guixhurd/

    Then he wants to completely redesign a GUI to manage it all, which sounds a lot like Firefox OS with aware desktop apps, but with the added bonus that most things that require privileges on desktop OSes no longer need them with Guix. Software drivers are implemented in user space as servers with GNU Hurd, so you can now access these things and all the functionality that comes with them, exactly what the author wants.

  • jmull 2412 days ago
    This isn't worth reading.

    (It's painfully naive, poorly reasoned, gets facts wrong, is largely incoherent, etc. Even bad articles can serve as a nice prompt for discussion, but I don't think this is even good for that. I don't think we'd ever get past arguing about what it is most wrong about.)

    • alexandercrohde 2412 days ago
      I think this is borderline ad hominem.
      • khedoros1 2412 days ago
        I think that the comment is attacking the essay, rather than attacking the essay by attacking the author. I think it's worth reading, but I also think it would've been better if it didn't repeatedly contradict itself, say that features don't exist that clearly do, and so on.
  • chrisleader 2412 days ago
    "First of all, it’s quite common, especially in enterprise technology, for something to propose a new way to solve an existing problem. It can’t be used to solve the problem in the old way, so ‘it doesn’t work’, and proposes a new way, and so ‘no-one will want that’. This is how generational shifts work - first you try to force the new tool to fit the old workflow, and then the new tool creates a new workflow. Both parts are painful and full of denial, but the new model is ultimately much better than the old. The example I often give here is of a VP of Something or Other in a big company who every month downloads data from an internal system into a CSV, imports that into Excel and makes charts, pastes the charts into PowerPoint and makes slides and bullets, and then emails the PPT to 20 people. Tell this person that they could switch to Google Docs and they’ll laugh at you; tell them that they could do it on an iPad and they’ll fall off their chair laughing. But really, that monthly PowerPoint status report should be a live SaaS dashboard that’s always up-to-date, machine learning should trigger alerts for any unexpected and important changes, and the 10 meg email should be a Slack channel. Now ask them again if they want an iPad." - Benedict Evans
  • xolve 2412 days ago
    Not an ideal article for anything. It looks like it was written with limited research, and by the end of it I could hardly keep focus.

    > Bloated stack. True, but there are options the author hasn't discussed.

    > A new filesystem and a new video encoding format. Apple created a new FS and a new video format. These changes are far too fundamental to be glossed over as trivial in a single line.

    > CMD.exe, the terminal program which essentially still lets you run DOS apps was only replaced in 2016. And the biggest new feature of the latest Windows 10 release? They added a Linux subsystem. More layers piled on top. The Linux subsystem is a great feature of Windows: the ability to run bash on Windows natively. What's the author complaining about?

    > but how about a system wide clipboard that holds more than one item at a time? That hasn't changed since the 80s! Heard of Klipper and similar apps in KDE5/Plasma? It's been there for ages and keeps text, images and file paths in the clipboard.

    > Why can't I have a file in two places at once on my filesystem? Hard links and soft links??

    > Filesystem tags They're there!

    What I feel about the article is: OSes have had these capabilities for a long time, so where are the killer applications written for them?

  • benkuykendall 2412 days ago
    The idea of system wide "document database" is really intriguing. I think the author identified a real pattern that could be addressed by such a change:

    > In fact, many common applications are just text editors combined with data queries. Consider iTunes, Address Book, Calendar, Alarms, Messaging, Evernote, Todo list, Bookmarks, Browser History, Password Database, and Photo manager. All of these are backed by their own unique datastore. Such wasted effort, and a block to interoperability.

    The ability to operate on my browser history or emails as a table would be awesome! And this solves so many issues about losing weird files when trying to back up.

    However, I would worry a lot about schema design. Surely most apps would want custom fields in addition to whatever the OS designer decided constitutes an "email". This would throw interoperability out the window, and keeping it fast becomes a non-trivial DB design problem.
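
    One common compromise is a shared core schema plus an app-defined extras blob; a toy sketch assuming SQLite and invented column names:

        import json, sqlite3

        # Standard columns for the shared "email" interface, plus a JSON extras
        # column for whatever the OS designer didn't anticipate.
        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE email
                      (id INTEGER PRIMARY KEY, sender TEXT, subject TEXT,
                       received REAL, extras TEXT)""")
        db.execute(
            "INSERT INTO email (sender, subject, received, extras) VALUES (?, ?, ?, ?)",
            ("a@example.com", "hi", 1712345678.0,
             json.dumps({"x-mailer": "SomeApp", "labels": ["inbox"]})))
        for sender, subject, extras in db.execute("SELECT sender, subject, extras FROM email"):
            print(sender, subject, json.loads(extras)["labels"])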

    Anyone have more insights on the BeOS database or other attempts since?

    (afterthought: like a lot of ideas in this post, this could be implemented in userspace on top of an existing OS)

  • mwcampbell 2412 days ago
    I'm glad the author thought about screen readers and other accessibility software. Yes, easy support for alternate input methods helps. But for screen readers in particular, the most important thing is a way to access a tree of objects representing the application's UI. Doing this efficiently over IPC is hard, at least with the existing infrastructure we have today.

    Edit: I believe the state of the art in this area is the UI Automation API for Windows. In case the author is reading this thread, that would be a good place to continue your research.

  • dgreensp 2412 days ago
    I love it, especially using structured data instead of text for the CLI and pipes, and replacing the file system with a database.

    Just to rant on file systems for a sec, I learned from working on the Meteor build tool that they are slow, flaky things.

    For example, there's no way on any desktop operating system to read the file tree rooted at a directory and then subscribe to changes to that tree, such that the snapshot combined with the changes gives you an accurate updated snapshot. At best, an API like FSEvents on OS X will reliably (or 99% reliably) tell you when it's time to go and re-read the tree or part of the tree, subject to inefficiency and race conditions.

    "Statting" 10,000 files that you just read a second ago should be fast, right? It'll just hit disk cache in RAM. Sometimes it is. Sometimes it isn't. You might end up waiting a second or two.

    And don't get me started on Windows, where simply deleting or renaming a file, synchronously and atomically, are complex topics you could spend a couple hours reading up on so that you can avoid the common pitfalls.

    Current file systems will make even less sense in the future, when non-volatile RAM is cheap enough to use in consumer devices, meaning that "disk" or flash has the same performance characteristics and addressability as RAM. Then we won't be able to say that persisting data to a disk is hard, so of course we need these hairy file system things.

    Putting aside how my data is physically persisted inside my computer, it's easy to think of better base layers for applications to store, share, and sync data. A service like Dropbox or BackBlaze would be trivial to implement if not for the legacy cruft of file systems. There's no reason my spreadsheets can't be stored in something like a git repo, with real-time sync, provided by the OS, designed to store structured data.

    • lou1306 2412 days ago
      > I love it, especially using structured data instead of text for the CLI and pipes

      Actually, that's a main selling point of PowerShell. Cmdlets take and return objects, which means common operations such as filtering, sorting and formatting are quite easy.
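
      A rough analogue in Python (not PowerShell syntax), just to illustrate why structured records beat re-parsing text; the process list is made up:

          # Records stay structured all the way through the "pipeline", so
          # filtering and sorting never involve pulling columns out of text.
          processes = [
              {"name": "chrome", "cpu": 43.0},
              {"name": "spotify", "cpu": 7.5},
              {"name": "editor", "cpu": 1.2},
          ]
          hogs = sorted((p for p in processes if p["cpu"] > 5.0),
                        key=lambda p: p["cpu"], reverse=True)
          for p in hogs:
              print(f'{p["name"]:<10} {p["cpu"]:5.1f}%')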

    • rdiddly 2412 days ago
      • dgreensp 2412 days ago
        It's not that file-watching APIs (and libraries that abstract over them and try to clean them up) don't exist, it's that they are complex and unreliable, with weak semantics. Typically an "event" is basically a notification that something happened to a file in the recent past. As noted in the remarks on that page, moving a file triggers a cascade of events, which differs depending on interactions with the computer's antivirus software. You aren't making any claims about this API, though, so there is not really anything for me to refute.

        If the file system operated in an event-sourcing model, you'd be able to listen to a stream of events from the OS and reconstruct the state of the file system from them. If it acted like a database, you'd be able to do consistent reads, or consistent writes (transactions! holy cow).
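
        A toy sketch of that reconstruction property, with invented event shapes:

            # A snapshot plus an ordered event stream always reconstructs the
            # current tree; no re-scanning, no races with the watcher.
            def apply(tree, event):
                if event["kind"] in ("create", "modify"):
                    tree[event["path"]] = event["version"]
                elif event["kind"] == "delete":
                    tree.pop(event["path"], None)
                return tree

            snapshot = {"/docs/a.txt": 1}
            events = [
                {"kind": "modify", "path": "/docs/a.txt", "version": 2},
                {"kind": "create", "path": "/docs/b.txt", "version": 1},
                {"kind": "delete", "path": "/docs/a.txt"},
            ]
            state = snapshot.copy()
            for e in events:
                state = apply(state, e)
            print(state)   # {'/docs/b.txt': 1}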

  • jimmaswell 2412 days ago
    It's patently false that Windows hasn't innovated, UX or otherwise: Start menu search, better driver containment and other BSOD reduction, the multi-monitor expanding taskbar, taskbar button reordering, other Explorer improvements, lots of things.
  • microcolonel 2412 days ago
    > Why can't I have a file in two places at once on my filesystem?

    You can! Use hardlinks.

    > Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs.

    There are well established standards for controlling window managers from programs, what on earth are you talking about?

    > Applications would do their drawing by requesting a graphics surface from the compositor. When they finish their drawing and are ready to update they just send a message saying: please repaint me. In practice we'd probably have a few types of surfaces for 2d and 3d graphics, and possibly raw framebuffers. The important thing is that at the end of the day it is the compositor which controls what ends up on the real screen, and when. If one app goes crazy the compositor can throttle it's repaints to ensure the rest of the system stays live.

    Just like Wayland!

    > All applications become small modules that communicate through the message bus for everything. Everything. No more file system access. No hardware access. Everything is a message.

    Just like flatpak!

    > Smart copy and paste

    This is entirely feasible with the current infrastructure.

    > Could we actually build this? I suspect not. No one has done it because, quite honestly, there is no money in it. And without money there simply aren't enough resources to build it.

    Some of this is already built, and most of it is entirely feasible with existing systems. It's probably not even that much work.

  • IamCarbonMan 2412 days ago
    All of this is possible without throwing out any existing technology (at least for Linux and Windows; if Apple doesn't envision a use case for something it's very likely never going to exist on their platform). Linux compositors have the ability to manipulate the window however the hell they want, and while it's not as popular as it used to be, you can change the default shell on Windows and use any window manager you can program. A database filesystem is two parts: a database and a filesystem. Instead of throwing out the filesystem which works just fine, add a database which offers views into the filesystem. The author is really woe-is-me about how an audio player doesn't have a database of mp3s, but that's something that is done all the time. Why do we have to throw out the filesystem just to have database queries? And if it's because every app has to have their own database- no they don't. If you're going to rewrite all the apps anyways, then rewrite them to use the same database. Problem solved. The hardest concept to implement in this article would be the author's idea of modern GUIs, but it can certainly be done.

    On top of this, the trade-off of creating an entirely new OS is enormous. Sure, you can make an OS with no apps because it's not compatible with anything that's been created before, and then you can add your own editor and your own web browser and whatever. And people who only need those things will love it. But if you need something that the OS developer didn't implement, you're screwed. You want to play a game? Sorry. You want to run the software that your school or business requires? Sorry. Seriously, don't throw out every damn thing ever made just to make a better suite of default apps.

  • Animats 2412 days ago
    If you want to study user interfaces, look at programs which solve a hard problem - 3D animation and design programs. Learn Inventor or Maya or Blender.

    Autodesk Inventor and Blender are at opposite ends of the "use the keyboard" range. In Inventor, you can do almost everything with the mouse except enter numbers and filenames. Blender has a 10-page list of "hotkeys". It's worth looking at how Inventor does input. You can change point of view while in the middle of selecting something. This is essential when working on detailed objects.

  • vbezhenar 2412 days ago
    I think that the next reboot will be unifying RAM and disk with a tremendous amount of memory (terabytes) for apps and transparent offloading of huge video and audio files into the cloud. You don't need a filesystem or any persistence layer anymore; all your data structures are persistent. Use immutable structures and you have unlimited undo for the entire life of the device. Reboot doesn't make sense; all you need is to flush processor registers before turning off. This would require rewriting the OS from the ground up, but it would allow for a completely new user experience.
    • mike_hearn 2412 days ago
      It's not even clear to what extent you'd have to rewrite.

      3D X-Point memory is coming. This is about 10x slower than DRAM but persistent and at a fraction of the cost. At 10x slower you can integrate it into NUMA systems and treat it as basically the same as RAM. One of the first features prototyped with it is "run a JVM with the heap made entirely persistent".
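
      You can already get a feel for that programming model with plain old mmap; this is only a crude stand-in (a real persistent-memory heap wouldn't need an explicit backing file or flush call), and the file name is made up:

          # Map a file into the address space and treat it as ordinary memory.
          # Each run of the program finds the previous run's state still there.
          import mmap, os, struct

          PATH = "counter.pmem"   # hypothetical backing store
          SIZE = 8                # one 64-bit counter

          if not os.path.exists(PATH):
              with open(PATH, "wb") as f:
                  f.write(b"\0" * SIZE)

          with open(PATH, "r+b") as f:
              mem = mmap.mmap(f.fileno(), SIZE)
              (count,) = struct.unpack("<Q", mem[:8])   # read the "persistent" value
              mem[:8] = struct.pack("<Q", count + 1)    # mutate it in place
              mem.flush()                               # stand-in for a persistence barrier
              print("run number", count + 1)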

      I agree that there's a lot of scope for innovation in desktop operating systems but it probably won't come from UI design or UI paradigms at this point. To justify a new OS would require a radical step forward in the underlying technology we use to build OSes themselves.

  • zaro 2412 days ago
    > I suspect the root cause is simply that building a successful operating system is hard.

    Well, it is hard, but this is not the main source of issues. The obstacle to having nice things on the desktop is this constant competition and wheel reinvention, the lack of cooperation.

    The article makes some very good points, but just think of this simple fact: it's 2017, and the ONLY filesystem that will seamlessly work with macOS, Windows and Linux at the same time is FAT, a filesystem which is almost 40 years old. And it is not because it is so hard to make such a filesystem. Not at all. Now this is at the core of the reasons why we can't have nice things :)

    • arthurfm 2412 days ago
      > It's 2017, and the ONLY filesystem that will seamlessly work with macOS, Windows and Linux at the same time is FAT, a filesystem which is almost 40 years old.

      Universal Disk Format? [1]

      ExFAT can also be used on all currently supported versions of Windows & macOS and added to Linux very easily via a package manager.

      You could argue there isn't any need for a cross-platform filesystem these days. It's often easier to simply transfer files over Ethernet, Wi-Fi or even the Internet.

      [1] https://tanguy.ortolo.eu/blog/article93/usb-udf

      • eltoozero 2412 days ago
        Not sure why the downvotes, ExFAT mostly doesn't suck these days for random go-between work.

        To your last comment, I will reply with the "old" adage to "never underestimate the raw bandwidth of a stationwagon loaded with tapes/drives barreling down the highway."[0]

        [0]: https://en.m.wikiquote.org/wiki/Andrew_S._Tanenbaum

    • alrs 2412 days ago
      • digi_owl 2412 days ago
        I really wish that UDF would get official blessing for use beyond optical media.

        Yes you can kinda hack it into usage, but programs like gparted would not allow me to make a UDF partition last I checked (Windows sorta can, under the live drive moniker, IIRC).

  • thibran 2412 days ago
    Interesting to read someone else's ideas about this topic, which I have thought about quite a lot myself. The basic building block of a better desktop OS is IMHO – and as the OP wrote – a communication contract between capabilities and the glue (a.k.a. apps). I don't think we would need that many capability-services to be able to build something useful (it doesn't even need to be efficient at first). To start, it might be enough to wrap existing tools, expose them, and see if things work or not.

    Maybe by starting with command-line apps to see how well the idea works (cross-platform would be nice). I guess that the resulting system would have some similarities with RxJava, which lets you compose things together (get A & B asynchronously, then build C and send it to D if it does not contain Foo).
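
    In Python terms, that kind of composition might look roughly like this (asyncio standing in for RxJava; every service name here is invented for illustration):

        import asyncio

        async def fetch_a():           # stand-in for one capability-service
            await asyncio.sleep(0.1)
            return "hello"

        async def fetch_b():           # stand-in for another
            await asyncio.sleep(0.1)
            return "world"

        async def send_to_d(c):        # stand-in for the consuming service
            print("D received:", c)

        async def main():
            a, b = await asyncio.gather(fetch_a(), fetch_b())  # get A & B asynchronously
            c = f"{a} {b}"                                     # build C
            if "Foo" not in c:                                 # send to D unless it contains Foo
                await send_to_d(c)

        asyncio.run(main())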

    If an app talked to a data-service it would no longer have to know where the data is coming from or how it got there. This would allow a whole new kind of abstraction to be built, e.g. data could be stored in the cloud and only downloaded to a local cache when frequently used, then later synced back to the cloud transparently (maybe even ahead of time because a local AI learned your usage patterns). I know that you can have such sync features today; they are just complicated to set up, or cost a lot of money, or work only for specific things/applications, and they are often not accessible to normal users.

    Knowing how to interact with the command-line gives advanced users superpowers. I think it is time to give those superpowers to normal users too. And no, learning how to use the command-line is not the way to go ;-)

    A capability-services based OS could even come with a quite interesting monetization strategy: selling extra capabilities, like storage, async computation or AI services, besides selling applications.

  • Groxx 2412 days ago
    >Consider iTunes. iTunes stores the actual mp3 files on disk, but all metadata in a private database. Having two sources of truth causes endless problems. If you add a new song on disk you must manually tell iTunes to rescan it. If you want to make a program that works with the song database you have to reverse engineer iTunes DB format, and pray that Apple doesn't change it. All of these problems go away with a single system wide database.

    Well. Then you get Spotlight (on OSX, at least) - system-wide file/metadata/content search.

    It's great! It's also quite slow at times. Slow (and costly) to index, slow to query (initial / common / by-name searches are fast, but content searches can take a second or two to find anything - this would be unacceptable in many applications), etc.

    I like databases, but building a single well-performing one for all usages is quite literally impossible. Forcing everyone into a single system doesn't tend to add up to a positive thing.

  • lou1306 2412 days ago
    Windows 10 didn't add any UX feature? What about Task View (Win+Tab) and virtual desktops?

    And why bash the Linux subsystem, which is surely not even developed by the UX team (so no waste of resources) and is a much needed feature for developers?

    BTW, there is a really simple reason why mainstream OSs have a rather conservative design: the vast majority of people just don't care and may even get angry when you change the interaction flow. Many of the ideas exposed in the post are either developer-oriented or require significant training to be used proficiently.

    • khedoros1 2412 days ago
      > What about Task View (Win+Tab) and virtual desktops?

      Virtual desktops have been part of Windows since at least Windows XP. The necessary architecture was already in place, Microsoft just didn't include a virtual desktop manager. There were/are several available.

    • agumonkey 2412 days ago
      My favorite Win7/10 UX features: drag n snap (with the corresponding Win + arrow bindings). Whoever managed to pitch the idea to MS, thanks a lot.
      • digi_owl 2412 days ago
        Could use some refinement though. Only a two window left/right split is able to resize both windows when dragging one. You can't have say one window on the left, and two on the right and expect all of them to be resized to maintain that layout.
        • agumonkey 2412 days ago
          It's probably done on purpose. It's an average feature for simple productivity; most people will only need a two-panel split.

          I would love a few more options, like pinning one and having two windows share the remaining space (like a video player on a corner)

          • digi_owl 2412 days ago
            I guess I explained it poorly.

            You can, at least with Windows 10, have the screen split into 4. But once you go beyond a left/right split, the other windows will not resize to maintain their areas if you resize one of them.

    • nwah1 2412 days ago
      I like the new Night Light feature that removes the need for f.lux

      The network settings menu in the status bar is much better. I can turn wifi on and off easily.

      I like the new notification panel, and setting reminders in Cortana.

      The new Mail app is great. The Money app is great. The News app is great. The Calendar app is great. The Weather app is great. Very simple to use.

      You can set dark color schemes nearly system-wide.

      The lock screen is cool.

      Edge doesn't suck.

    • ThePhysicist 2412 days ago
      I agree, and personally I think it's great that Windows 10 can still (mostly) run applications built for Windows XP, as the original developers of these apps often no longer maintain or update them, but they nevertheless provide good value.
      • digi_owl 2412 days ago
        You can run even older software.

        Heck, you could probably run stuff from the 3.x era, if you installed Win10 as a 32-bit OS.

        This is related to how x86 CPUs do 64-bit btw, not the OS itself.

  • nickpsecurity 2412 days ago
    The author keeps questioning why certain siloing, like the App Store, happens, and then offers technical solutions that won't work. The reason is that the siloing is intentional on the part of the companies developing those applications, to reduce competition and boost profits. They'd rather provide the feature you desire themselves or through an app they get a 30% commission on.

    A lot of other things the author talks about keep the ecosystems going. The ecosystems, esp. key apps, are why many people use these desktop OS's. Those apps and ecosystems take too much labor to rebuild from a clean slate. So, the new OS's tend not to have them at all, or use knock-offs that don't work well enough. Users think they're useless and leave after the demo.

    The market effects usually stomp technical criteria. That's why author's recommendations will fail as a whole. "Worse Really is Better" per Richard Gabriel.

    • sowbug 2412 days ago
      More benign versions of the same idea: app developers want to "provide a seamless experience"; they want their apps to be visually distinctive; they want to be portable across different OSes; they want clear boundaries between what they have to support and what they don't (is a copy-paste idiosyncrasy something we have to document?); they can get performance advantages by implementing functionality themselves; and they're worried that the OS will change a critical subsystem or interaction in a way that isn't straightforward for them to adapt to.
    • cmiles74 2412 days ago
      This shouldn't be an issue on a free and open operating system, like Linux. Profit isn't a driver for LibreOffice or Blender, but these apps are still siloed off from each other. I think the author is right in that if the operating system offered both a richer and simpler set of tools to make it easier to add OS components and to communicate between applications, we could really see some interesting stuff.

      Personally, I do find the idea of an operating system composed of services and applications that all share the same messaging system compelling.

      • tonyedgecombe 2412 days ago
        Profit is definitely a driver for much open-source software, Google pushes Android so they can control mobile advertising, Oracle pushes Java to stop Microsoft having a stranglehold on corporate development, RedHat pushes Linux so it can sell services. There aren't many big open source projects that are purely altruistic.
        • cmiles74 2412 days ago
          For sure there are projects that are driven by dollars, but many that are not... If we're going to get a desktop environment with the level of openness that the OP would like, I do think this would be a job for libre developers as it is fundamentally at odds with the pursuit of dollars via lock-in (that is, every app would be increasing the value of the OS and sacrificing lock-in of the customer's data).
          • tonyedgecombe 2412 days ago
            It's a dilemma isn't it, without funding it probably won't happen, with funding it's corrupted in some way.
            • cmiles74 2411 days ago
              I'm not seeing how products that are funded are "corrupted". I think that products that are funded need to make money and that drives the pressure to cordon off the customer's data and to lock them into the specific application. I'm not saying that this is innately bad (though some people might), but that it runs counter to this idea of building an OS that does more than simply launch software and store data. If you're feeling pressure to own the customer's data, then you won't be all that interested in making your application available to the rest of the OS by providing a suite of services.
    • ThePhysicist 2412 days ago
      Control of quality and auditing is another reason why apps stores exist and are useful: The (on average) high quality of the apps in the iOS store is the result of a rather strict auditing process, which in the end is also beneficial for the user. This is something that usually doesn't happen naturally with completely open systems. Even the Linux distributions (that are usually run mostly by volunteers and not profit-oriented) often have very strict criteria that your package needs to fulfill in order to be included in the official repository.
  • snarfy 2411 days ago
    What we have today grew together organically over time, like a city. To do what is described in the article is akin to demolishing the city and completely rebuilding it from scratch. But it's not just from scratch; it's replacing all of the infrastructure and tooling that went into building the parts of the city, like plumbing and electrical. A state of the art substation requires its own infrastructure to build. It's akin to requiring a whole new compiler toolchain and software development system just to get started with rebooting the OS.

    If this happens it's only going to happen with a top-down design from an industry giant. Android and Fuchsia are examples of how it might happen. Will it? It seems these days nobody cares as long as the browser renders quickly.

    • gldalmaso 2411 days ago
      I think this is a good analogy.

      To complement it a bit. There's the problem of bootstrapping. Once all that new city infrastructure and beautiful planning is complete, who wants to move into that new city that has no markets, stores, bars, restaurants, etc?

      Desktop is full of old cruft because people use old crufty software today. They must be able to continue to use that old crufty software until better alternatives exist, but they use lots of old crufty software and better alternatives come slowly.

      Desktop builds the new city adjacent to the old one and makes the grass greener there, but it takes quite a while for the old city to get empty.

  • Skunkleton 2412 days ago
    In 2017 a modern operating system such as Android, iOS, or Chrome (the browser) exists as a platform. Applications developed for these platforms _must_ conform to the application model set by the platform. There is no supported way to create applications that do not conform to the design of the platform. This is in stark contrast to the "1984" operating systems that the OP is complaining about.

    It is very tempting to see all the complexity of an open system and wish it was more straightforward; more like a closed system. But this is a dangerous thing to advocate. If we all only had access to closed systems, who would we be ceding control to? Do we really want our desktop operating systems to be just another fundamentally closed-off walled garden?

    • TuringTest 2411 days ago
      The idea wouldn't be to lose open systems; it's to build open systems in a different way, incorporating all the lessons learned in the past 30 years about working and organizing information in digital systems connected to the internet.

      Like, for example, the WWW. Why is it that desktops have no native support for the user to organize web applications, and everything is handled through a single app, the browser?

  • bastijn 2412 days ago
    Apart from discussing the content. Can I just express my absolute love for (longer) articles that start with a tl;dr?

    It gives an immediate answer to "do I need to read this?", and if so, what key arguments should I pay attention to?

    Let me finish with expressing my thanks to the author for including a tl;dr.

    Thanks!

  • joshmarinacci 2412 days ago
    OP here. I wasn't quite ready to share this with the world yet, but what are you gonna do.

    I'm happy to answer your questions.

  • jonahss 2412 days ago
    The author mentions they wished Object-based streams/terminals existed. This is the premise of Windows Powershell, which today reminds me of nearly abandoned malls found in the Midwest: full of dreams from a decade ago, but today an empty shell lacking true utility, open to the public for wandering around.
    • ZenoArrow 2412 days ago
      Bit of a stretch. PowerShell is used a ton, especially by Windows sysadmins/devops engineers, and is actively developed. Perhaps you've just got that impression because you haven't been following it with much interest.
  • raintrees 2412 days ago
    I have been conceptualizing what it would take to abstract away the actual physical workstation into a back-end processing system and multiple UI modules physically scattered throughout my home (I work from home) and grounds.

    For example, shifting my workspace from my upstairs office to my downstairs work area just by signing in on a different console setup downstairs. All of my in-process work comes right back up. Right now I do this (kind of) using VMs, but they are limited when addressing hardware, and now I am multiplying that hardware.

    Same thing with my streams - switch my audio or video to the next room/zone where I want to move to. Start researching how to correctly adjust my weed whip's carburetor, then go out to the garage and pull up my console there, where my workbench and the dismantled tool are.

    Eventually my system would track my whereabouts, with the ability (optionally turned on) to automatically shift that IO to the closest hardware setup to me as I move around the structure/property.

    And do something like this for each person? So my wife has her streams? Separate back end instance, same mobility to front-end UI hardware?

    Can this new Desktop Operating System be designed with that hardware abstraction in mind?

  • mherrmann 2412 days ago
    What I hate is the _bloat_. Why is GarageBand forced upon me with macOS? Or iTunes? Similarly for video players etc on all the other OSs. I am perfectly capable of installing the software I need, thank you very much.
    • oneplane 2412 days ago
      It's not forced, it is a part of the product and you are free to not use it or remove it.

      Where some people might not want certain defaults, most people have no clue how to get access to software and will take whatever is already there. This is part of the reason all Windows devices come preinstalled with 50% Windows and 50% OEM bloat; the OEM gets paid, and the customer might 'use what is already there' and, the bloatware vendors hope, purchase a full version or subscription.

      What you want and what other people want most likely doesn't line up and never will. This is because there is no universal configuration for everyone and because the median is not going to work for anyone at all (i.e. install Garageband but not a browser, or install Numbers but not Pages)

  • ksec 2412 days ago
    I hate to say this, but an ideal desktop OS, at least for the majority of consumers, is mostly here, and it is iOS 11.

    Having used the newest iPad Pro 10.5 (along with the iOS 11 beta), the first few hours were pure joy; after that, frustration and anger came flooding in. Because what I realized is that this tiny little tablet, costing only half as much as a MacBook Pro or even an iMac, limited by a fanless design with a lower TDP, 4GB of memory, no dedicated GPU, and a likely much slower SSD, provides a MUCH better user experience than any Mac or Windows PC I have ever used, including the latest MacBook Pro.

    Everything is fast and buttery smooth; even the web browsing experience is better. The only downside is that you are limited to the touch screen and keyboard. I have wondered a number of times if I could attach a separate monitor and use it like the Samsung desktop dock.

    There is far too much backward compatibility to care for with both Windows and Mac. And this is similar to the discussion in the previous Software off Rails thread. People are less likely to spend time optimizing when things work well enough out of the box.

    • make3 2412 days ago
      You don't have access to the file system in iOS. This drives me crazy. You also can't make any change that alters the OS's behavior in any meaningful way. For a dev at least, even on mobile, it feels really limiting to use.
      • pjmlp 2412 days ago
        The new version does provide access.
    • joshmarinacci 2412 days ago
      OP here.

      Quite true. I'm genuinely surprised how much progress Apple has made with iOS 11. The fact that they are giving users a file management app means they are finally ready to handle real work. With a really good Bluetooth keyboard....

      • corn13read 2412 days ago
        Now if only they could realize this on mobile and allow attachments to an email without icloud LMAO
        • eltoozero 2412 days ago
          Wut? MailDrop is what you're talking about and it's optional on top of only kicking in with attachments over 25mb IIRC.

          Maybe you prefer whalemail, yousendit, or one of the other sign-up-free and get-ads-forever services for large attachments, and that's fine, they're not going away.

          Neither is DropBox, for the time being. I'm both worried and excited about DropBox's new offerings and I'm all for it as long as they don't become Evernote and start selling backpacks and rebranded Fujitsu scanners. :(

    • rocky1138 2412 days ago
      Did you read the article? His entire premise is that now that consumers are finished with the desktop, we can get desktops back to being workstations again, unencumbered by the requirements of consumers.

      Talking about how iOS is great for consumers but doesn't have a good keyboard is a bit tone deaf.

      • ksec 2412 days ago
        My bad, sorry, I skim-read it: headlines and the tl;dr. Maybe he should name it a Workstation OS, although I guess all "desktops" are pretty much workstations these days.

        But if a non-consumer, workstation OS is what we want, then I value backward compatibility over everything else. Which means everything he wanted to remove is here to stay.

  • gshrikant 2412 days ago
    While I'm not sure I agree with everything in the article, it does mention a point I've been thinking about for a while - configuration.

    I really do think applications should try to zero in on a few standard configuration file formats - I really don't have a strong preference for one (although avoiding XML would be nice). It makes the system uniform and makes it easier to move between applications. Of course, applications can add extended sections to suit their needs.

    Another related point is the location of configuration files - standard Linux/Unix has a nice hierarchy (/etc/, /usr/local/etc/, and others) for system-wide and user-specific configuration (I'm sure Windows and OS X have a similar hierarchy too), but different applications still end up placing their configuration files in unintuitive places.

    I find this lack of uniformity disturbing - especially because it looks so easy (at least on the surface) to fix and the benefits would be nice - easier to learn and scriptable.

    A last unrelated point - I don't see why Linux distributions cannot standardize around a common repository - Debian and Ubuntu both share several packages but are yet forced to maintain separate package databases and you can't easily mix and match packages between them. This replication of effort seems more ideological than pragmatic (of course, there probably are some practical reasons too). But I can't see why we can't all pool resources and share a common 'universal' application repository - maybe divide it into 'Free', 'Non-Free', 'Contrib/AUR' like granular divisions so users have full freedom to choose the packages they want.

    Like other things, I think these ideas have been implemented before but I'm a little disappointed these haven't made it into 'mainstream' OS userlands yet.

    • khedoros1 2412 days ago
      > A last unrelated point - I don't see why Linux distributions cannot standardize around a common repository

      Because there are enough differences between distros that a lot of the software is packaged with different build options, configurations, file paths, etc. Keeping a separate repository of software that doesn't differ increases complexity of all the distros involved, requires more inter-distro administration decisions, and just generally generates more work than keeping them separate does.

      You've either got more-complex packages with multiple sets of configurations available, or you erase the differences between distros (which exist for some good reasons). The way things are, effort is duplicated, but only accidentally, when the decisions of two distros about a particular package just happen to be the same.

    • thibran 2411 days ago
      > I really do think applications should try to zero-in on a few standard configuration file formats

      This is a non-problem when you use a binary object, which can be represented/manipulated in all kinds of ways. It would even be possible to expose the configuration in an arbitrary format on a mount point in the filesystem using FUSE https://en.wikipedia.org/wiki/Filesystem_in_Userspace
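
      A rough sketch of that, using the third-party fusepy package (the config keys, the read-only design, and the mount point are all invented for illustration):

          # Expose an app's in-memory configuration object as read-only files
          # under a FUSE mount point, so shell tools can inspect it.
          # Assumes `pip install fusepy` and a Linux/macOS host with FUSE.
          import errno, stat, sys
          from fuse import FUSE, FuseOSError, Operations

          CONFIG = {"theme": "dark\n", "font_size": "14\n"}   # the app's source of truth

          class ConfigFS(Operations):
              def getattr(self, path, fh=None):
                  if path == "/":
                      return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
                  key = path.lstrip("/")
                  if key in CONFIG:
                      return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                              "st_size": len(CONFIG[key])}
                  raise FuseOSError(errno.ENOENT)

              def readdir(self, path, fh):
                  return [".", ".."] + list(CONFIG)

              def read(self, path, size, offset, fh):
                  data = CONFIG[path.lstrip("/")].encode()
                  return data[offset:offset + size]

          if __name__ == "__main__":
              FUSE(ConfigFS(), sys.argv[1], foreground=True)   # e.g. python configfs.py /tmp/appconfig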

  • nebulous1 2412 days ago
    I much preferred the second half of this to the first half.

    However, both seemed to end up with the same fundamental flaw: he's either underestimating or understating how absurdly difficult most of what he's suggesting is. It's all well and good saying that we can have a standardized system for email, with everything being passed over messages, but what about everything else? It's extremely difficult to standardize an opinionated system that works for everything, which is exactly why so many operating system constructs are more general than specific. For this to all hang together you would have to standardize everything, which will undoubtedly turn into an insane bureaucratic mess. Not to mention that a lot of software makers actively fight against having their internal formats open.

  • hyperfekt 2412 days ago
    This would be neat, but isn't radical enough yet IMHO. If everything on the system is composed of pure functions operating on data, we can supercharge the OS and make everything both possible AND very simple. The whole notion of 'application' is really kind of outmoded.
  • doggydogs94 2412 days ago
    FYI, most of the author's complaints about the command line were addressed by Microsoft in PowerShell. For example, PowerShell pipes objects, not text.
  • agumonkey 2412 days ago
    I see https://birdhouse.org/beos/refugee/trackerbase.gif for 2 seconds and I feel happy. So cute, clear, useful.
    • romanovcode 2412 days ago
      I still don't understand what it is. An email client? What is this yellow "data" on top?

      Seems rubbish to me.

      • eltoozero 2412 days ago
        You're looking at a file browser from BeOS.

        The file-system in BeOS can operate as a database, so files can have attributes and metadata baked alongside them natively.

        The mail-client operated as a daemon running in the background periodically fetching and writing entries to an OS folder that was a searchable database with to, from, subject, body, and time stamp as "fields" abstracted "magically" to the window view.

  • EdSharkey 2412 days ago
    Hacker News is most interesting when a controversial article like this is written without all the necessary facts or research. I learned a lot about a range of existing OS and application tech just through all the refuting going on here.

    Anyhow, my $0.02 is that all software dies. Either software lands in a niche due to its architecture and doesn't survive industry paradigm shifts or it groans under its own weight of cruft allowing more nimble competitors to enter the market and take marketshare. I'm no fan of full-system rewrites because of the tremendous cost and typical failure, but even so all software does eventually die and replacements will emerge.

    So, it at least makes sense for some well-heeled upstart to begin thinking about the next-gen operating system in case an opportunity presented itself to establish a market. If that upstart were me, my focus would be on productivity, performance, stability/security.

    Especially with regards to UX, I would focus on defining UX Guidelines and a windowing toolkit that would only change very infrequently (like once every 10-20 years.) To me, a tacky-looking, "outdated" UX that billions of people know by heart and can play like a fiddle is infinitely more valuable than one that changes look-and-feel year to year. My devs would be laser focused on fixing bugs and performance enhancements, not feature-itis.

  • al2o3cr 2412 days ago

        Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs.
    
    My copy of Divvy is confused by this statement. :)
  • st3fan 2412 days ago
    > And if you wanted to modify your email client, or at least the one above (Mail.app, the default client for Mac), there is no clean way to extend it. There are no plugins. There is no extension API. This is the result of many layers of cruft and bloat.

    I am going to say that it is probably a product decision in case of Mail.app.

    Whether Mail.app is a big steaming pile of cruft and bloat inside - nobody knows. Since it is closed source.

    • oneplane 2412 days ago
      Mail.app actually does have plug-ins.
      • efficax 2412 days ago
        Yes but the plugin API is undocumented.
  • gumby 2411 days ago
    I really agree that the hermetic siloization of applications and their data over the past 30 years has been a major step backwards. I also wish all apps were composable.

    It seems to require a mental shift few developers are willing to adopt however. Good luck -- you are on the right track on many things (even if I can't imagine life without a command line).

  • casebash 2412 days ago
    I wouldn't say that innovation on the desktop is dead, but most of it seems to be driven by features or design patterns copied from mobile or tablet. Take for example Windows 8 and Windows 10: Windows 8 was all about moving to an OS that could run on a whole host of devices, while Windows 10 was all about fixing up all the errors made in this transition.
  • mcny 2412 days ago
    Hi Josh,

    Thank you for writing this.

    Just noticed a small typo (I think)

    > For a long time Atom couldn't open a file larger than 2 megabytes because scrolling would be to slow.

    to should be too.

    Sincerely,

  • PrimHelios 2412 days ago
    This seems to me to be written by someone who uses MacOS almost exclusively, but has touched Windows just enough to understand it. The complete lack of understanding of IPC, filesystems, scripting, and other OS fundamentals is pretty painful.

    >Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps? There is no technical reason why this shouldn't be possible. Application windows are just bitmaps at the end of the day, but the OS guys haven't built it because it's not a priority.

    I'm an idiot when it comes to operating systems (and sometimes even in general), but even I know why there are issues with that. You need a standardized form of IPC between the two apps, which wouldn't happen because both devs would be convinced their way is the best. On top of that, it's a great way to get an antitrust case against you if you aren't careful [0]

    >Why can't I have a file in two places at once on my filesystem? Why is it fundamentally hierarchical?

    Soft/hard links, fam. Even Windows has them.

    >Why can['t] I sort by tags and metadata?

    You can in Linux, you just need to know a few commands first.
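
    For the curious, here is roughly what those "few commands" boil down to, written against Python's os module rather than the shell (Linux assumed; the file names are made up and must already exist, and xattr support depends on the filesystem):

        import os

        # "A file in two places at once": a hard link and a symlink.
        os.link("report.txt", "archive/report.txt")        # same inode, two paths
        os.symlink("/home/me/report.txt", "desktop-report.txt")

        # Tagging with extended attributes, then sorting by the tag.
        os.setxattr("report.txt", "user.tag", b"work")
        os.setxattr("notes.txt", "user.tag", b"personal")

        files = ["report.txt", "notes.txt"]
        print(sorted(files, key=lambda f: os.getxattr(f, "user.tag")))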

    >Any web app can be zoomed. I can just hit command + and the text grows bigger. Everything inside the window automatically rescales to adapt. Why don't my native apps do that? Why can't I have one window big and another small? Or even scale them automatically as I move between the windows? All of these things are trivial to do with a compositing window manager, which has been commonplace for well over a decade.

    Decent point IMO. There's a lot of native UI I have a hard time reading because it's so small. That said, I think bringing in the ability to zoom native widgets would bring in a lot of issues that HTML apps have.

    >We should start by getting rid of things that don't work very well.

    The author doesn't understand PCs. The entire point of these machines is backwards-compatibility, because we need backwards compatibility. I'm sitting next to a custom gaming PC and I have an actual serial port PCIe card because I need serial ports. Serial ports. In 2017. I'd be screwed if serial wasn't supported anymore.

    I won't touch the rest of the article because there's a lot I disagree with, but he seems to just want to completely reinvent the "modern OS" as just Chromebooks.

    [0]: https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....

    • mikelward 2412 days ago
      Not if tabs are a higher-level concept, e.g. handled by the window manager, as Fluxbox does.

      http://fluxbox.sourceforge.net/features/tabs.php

      • khedoros1 2412 days ago
        That's what I was thinking. There's no reason a window manager can't have the concept of tabs, and display different programs as tabs on the same window.

        I used to use Fluxbox, but I didn't know it was already capable of that. Pretty cool!

      • PrimHelios 2412 days ago
        That's actually a really good point. However, I read that point more as having the new tab in the other application, as opposed to having a different app on each tab.
      • digi_owl 2412 days ago
        One more thing that will be a mess once Wayland gets rammed down our throats...
    • khedoros1 2412 days ago
      >>Any web app can be zoomed.

      >Decent point IMO. There's a lot of native UI I have a hard time reading because it's so small. That said, I think bringing in the ability to zoom native widgets would bring in a lot of issues that HTML apps have.

      Sounds like the Compiz Resize plugin, with the "stretch" option enabled: http://wiki.compiz.org/Plugins/Resize

      Or maybe the Enhanced Zoom Desktop plugin: http://wiki.compiz.org/Plugins/Ezoom

      It just seems like what the author describes could be easily implemented as a Compiz plugin. I mean, when it first came out, people went crazy with all sorts of plugins that were more fun than useful, but nicely showed off what the system was capable of.

  • Zigurd 2412 days ago
    A few years ago I wrote a book about developing big complex networked apps. It had "Enterprise" in the title, based on the idea that mobile device OSs would become dominant - which they did - and that the evolution of tablet devices would continue to where powerful devices like the iPad Pro would overtake the use of Mac and Windows laptops - which they didn't.

    Windows and MacOS are full of compromises but are usable. Chrome OS is a contender for users that need a simpler system. What addressable segment is left? You pretty much have to make the case for replacing Windows. But you can only hope to replace the "voluntary" Windows seats. Many Windows users have no choice.

  • oconnor663 2412 days ago
    > Wayland is supposed to fix everything, but it's been almost a decade in development and still isn't ready for prime time.

    Mutter's Wayland implementation is the default display server for Gnome Shell right now. How much more prime time can you get?

    • danieldk 2412 days ago
      And Fedora has shipped two releases with Wayland as the default display server.
      • dylan-m 2412 days ago
        And Ubuntu is poised to bite the bullet and do the same in a release or two :)

        And it works comfortably with the official drivers for modern ATI graphics cards. (One of the earlier fears about Wayland.)

        I really wish people would cut it out with the "Wayland is doomed" myth. This is just how free software works: you see stuff before it's done. Then (given sufficient backing) it gets finished. Then it works.

  • atemerev 2412 days ago
    "A solution in search of a problem".

    What problem of mine "piping my Skype stream to video analysis service" is supposed to solve? Why would I want to dock and undock different application parts to all places they don't belong? Etc.

  • blueworks 2412 days ago
    The reference to Atom and attributing its performance to the underlying Electron and Node.js runtime is inappropriate, since another popular editor, Microsoft's VS Code, also uses Electron but is very fast and a pleasure to work with.
  • linguae 2412 days ago
    I've been thinking a lot about the problem of modern desktop operating systems myself over the past year. I believe that desktop operating system environments peaked last decade. The Mac's high water mark was Snow Leopard, the Linux desktop appeared to have gained momentum with the increasing refinement of GNOME 2 during the latter half of the 2000's, and for me the finest Windows releases were Windows 2000 and Windows 7. Unfortunately both the Linux desktop and Windows took a step in the wrong direction when smartphones and tablets became popular and the maintainers of those desktops believed that the desktop environments should resemble the environments of these new mobile devices. This led to regressions such as early GNOME 3 and Windows 8. GNOME 3 has improved over the years and Windows 10 is an improvement over Windows 8, but GNOME 2 and Windows 7, in my opinion, are still better than their latest successors. Apple thankfully didn't follow the footsteps of GNOME and Windows, but I feel that the Mac has stagnated since Snow Leopard.

    I agree with the author of this article that desktop operating systems should develop into workstation operating systems. They should be able to facilitate our workflows, and ideally they should be programmable (which I have some more thoughts about in my next paragraph). In my opinion the interface should fully embrace the fact that it is a workstation and not a passive media consumption device. It should, in my opinion, be a "back to basics" one, something like the classic Windows 95 interface or the Platinum Mac OS interface.

    One of the thoughts that I've been thinking about over the years is the lack of programmability in contemporary desktop GUIs. The environments of MS-DOS and early home computers highly encouraged users to write programs and scripts to enhance their work environment. Unix goes a step further with the idea of pipes in order to connect different tools together. Finally, the ultimate form of programmability and interaction would resemble the Smalltalk environment, where objects could send messages to each other. What would be amazing would be some sort of Smalltalk-esque GUI environment, where GUI applications could interact with each other using message passing. Unfortunately Apple and Microsoft didn't copy this from Xerox, instead only focusing on the GUI in the early 1980s and then later in the 1980s focusing on providing an object-oriented API for GUI services (this would be realized with NeXTSTEP/OPENSTEP/Cocoa, which inspired failed copycat efforts such as Microsoft Cairo and Apple/IBM Taligent, but later on inspired successful platforms such as the Java API and Microsoft .NET). The result today is largely unprogrammable GUI applications, though there are some workarounds such as AppleScript and Visual Basic for Applications (though it's far from the Smalltalk-esque idea). The article's suggestion for having some sort of standardized JSON application interface would be an improvement over the status quo.
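
    As a toy illustration of that last point, a "standardized JSON application interface" could be as simple as every app answering JSON commands on a socket; everything below (the socket path, the command names, the wire format) is invented, a sketch rather than any real protocol:

        import json, os, socket

        SOCK = "/tmp/demo-app.sock"     # hypothetical per-app control socket

        def serve():
            # The "app": accepts one JSON command per connection, replies with its state.
            if os.path.exists(SOCK):
                os.remove(SOCK)
            srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            srv.bind(SOCK)
            srv.listen(1)
            state = {"volume": 50}
            while True:
                conn, _ = srv.accept()
                msg = json.loads(conn.recv(4096))
                if msg.get("cmd") == "set_volume":
                    state["volume"] = msg["value"]
                conn.sendall(json.dumps({"ok": True, "state": state}).encode())
                conn.close()

        def send(cmd, **args):
            # Any other program (or script) can drive the app the same way.
            cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            cli.connect(SOCK)
            cli.sendall(json.dumps({"cmd": cmd, **args}).encode())
            reply = json.loads(cli.recv(4096))
            cli.close()
            return reply

        # Run serve() in one process, then from another: print(send("set_volume", value=30))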

    I would love to work on such an operating system: a programmable GUI influenced by the underpinnings of Smalltalk and Symbolics Genera plus the interface and UI guidelines of the classic Mac OS. The result would be a desktop operating system that is unabashedly for desktop computer users. It would be both easy to use and easy to control.

    • TheCowboy 2412 days ago
      A lot of hate in this thread seems against even discussing this, but I think it's worth exploring.

      I usually refer to some of these groups of ideas as "composability of workspaces". People question why you would want to dock or undock a tab from different apps, but we already work like this a lot when we use modern IDEs and web browsers. I'd argue that Emacs and the Linux CLI still have a lot of appeal for this reason of workspace composability.

      Are we better off thinking and debating about how we want computational environments to exist, or simply hoping that the next version of iOS or Windows 'does not suck'? Will we be able to seamlessly compute across multiple devices; will OSes become specialized? What would be optimal?

      There are social, economic (scarcity of programmer time), and institutional limitations to undertaking huge projects. But that doesn't rule out any type of progress toward a long-term goal, or prevent people from booking small wins.

    • pjmlp 2412 days ago
      You forgot to mention PowerShell, F# and C# scripting. :)

      Overall I agree with you.

  • pier25 2412 days ago
    I agree with some of the points stated. For years I've been thinking that a tag based file system would be superior to a folder based one in many aspects.

    macOS has tags, but the UX/UI for interacting with them is really poor.

  • jacinabox 2412 days ago
    In regards to the issue of file systems being non-searchable, it's definitely worth taking a look at compressed full-text indexes: http://pizzachili.dcc.uchile.cl/resources/compressed_indexes...

    Under this scheme each file on disk would be stored as an index with constant factor overhead. The original file is not needed; all of the data can be decoded out of the index.
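
    For a sense of the mechanics, here is the uncompressed baby version of such an index, a plain suffix array with binary search; real compressed indexes (FM-indexes built on the Burrows-Wheeler transform) answer the same queries in roughly the space of the compressed text, which this toy does not attempt (requires Python 3.10+ for bisect's key argument):

        import bisect

        def build_suffix_array(text):
            # Sort suffix start positions by the suffix they begin.
            return sorted(range(len(text)), key=lambda i: text[i:])

        def find(text, sa, pattern):
            # Binary search the sorted suffixes for the range starting with `pattern`.
            key = lambda i: text[i:i + len(pattern)]
            lo = bisect.bisect_left(sa, pattern, key=key)
            hi = bisect.bisect_right(sa, pattern, key=key)
            return sorted(sa[lo:hi])

        text = "the quick brown fox jumps over the lazy dog"
        sa = build_suffix_array(text)
        print(find(text, sa, "the"))   # -> [0, 31]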

  • sddfd 2412 days ago
    I think Electron is a step in the right direction.

    Let's assume for a moment there wasn't the problem of JavaScript performance (because, for example, WebAssembly can replace it).

    Then Electron is a platform everyone can build their applications on. And once that happens, operating systems are free to shed the library cruft.

    This is just one possible migration path, and I am not saying it's going to happen or that it is even a good idea.

    But if you have to write cross-platform apps, it seems that this has clear advantages.

    • contras1970 2412 days ago
      Electron solves nothing, it's a prime example of the Inner Platform[0], it will go through all the growing pains of solving the "library cruft" inside it, and will still require all the "cruft" below it.

      BTW, you say you think Electron is a step in the right direction, and follow it with "I am not saying [...] it is even a good idea." It can't be a step in the right direction and a bad idea at once, so which is it? Is it a good idea or not?

      [0] https://en.wikipedia.org/wiki/Inner-platform_effect

      • sddfd 2411 days ago
        > right direction, vs not even a good idea

        I think it is a step in the right direction because it allows you to build platform independent software today, with a relatively defined interface to the system (dom + JavaScript). This is a great opportunity to gain experience about the requirements for systems like electron. So I think it is good that electron exists, and would like to see it improved to address the issues that are discovered by using it.

        On the other hand, I am not sure if developing application software in JavaScript is a good idea. I am not sure if compiling to WebAssembly just to have it execute in another virtual machine is a much better idea. I don't know if the DOM abstraction is a good idea in the long run (it seems to work for the web though). And I am not sure if there is another, less popular technology around that should be used instead of Electron.

    • Ezhik 2412 days ago
      People were saying this about Java in the 90s
  • dgudkov 2412 days ago
    Many interesting ideas and concepts, no question. However, if it were a startup pitch I would struggle to see a killer application. I can see features (some are very exciting!) here, but I'm failing to see a product. What kind of real-life problem would such an OS solve? Is this problem worth the billions of dollars required to develop a new OS and a toolkit of apps for it?
  • saagarjha 2412 days ago
    > if you wanted to modify your email client, or at least the one above (Mail.app, the default client for Mac), there is no clean way to extend it. There are no plugins.

    Mail.app supports plugins.

    > Why can't I have a file in two places at once on my filesystem?

    So…a hardlink?

    > Why don't my native apps do that?

    Dynamic text lets you do this, but it's mobile-only currently.

    > have started deprecating the Applescript bindings which make it work underneath

    Since when?

  • jonahss 2412 days ago
    Look to mobile OSes for innovation in OS design. Like the author stated, it's currently where the money is. It's the closest we have to "starting over" and a lot of things were rethought, such as security and sandboxed apps. IPC is limited to start, but slowly growing.

    I wouldn't be surprised if the workstation OSs of the future grew out of our current Mobile OSs

    • OOPMan 2412 days ago
      What a hideous idea, given the UI semantics on mobile OSes are pretty sub-optimal for systems that provide input systems other than a touch screen.
    • pjmlp 2412 days ago
      This is what I would like to see on desktops.

      Unsafe code constrained to the bottom layers of the OS, with everything else on memory safe languages.

      • wolfgke 2412 days ago
        > Unsafe code constrained to the bottom layers of the OS, with everything else on memory safe languages.

        Chrome, which is used on Android and is a large code base, is written mostly in C++, which you would probably not call a memory safe language.

        • pjmlp 2412 days ago
          Yeah but writing web apps in HTML/CSS/JavaScript is not the same as using C++.

          I would prefer the lower layers also to be written in a memory safe systems programming language, but having the userspace 100% in such a language, would already be quite an improvement versus the current situation.

  • atmartins 2412 days ago
    I'm surprised at all the negative, pessimistic views about looking forward with operating systems. I welcome conversations about what things could be like in the future. Obviously Google's pondering this with Fuchsia. Maybe it will take a more vertical approach, where only certain hardware could take advantage of some features for a while.
  • coldtea 2412 days ago
    >In fact, in some cases it's worse. It took tremendous effort to get 3D accelerated Doom to work inside of X windows in the mid 2000s, something that was trivial with mid-1990s Microsoft Windows. Below is a screenshot of Processing running for the first time on a Raspberry Pi with hardware acceleration, just a couple of years ago. And it was possible only thanks to a completely custom X windows video driver. This driver is still experimental and unreleased, five years after the Raspberry Pi shipped.

    That's because of open source OSes though, which vendors don't care about, and volunteers aren't enough to match the work needed for everything to work out of the box. Nothing about this particular example has anything to do with OS research or modern OSes being behind.

    >Here's another example. Atom is one of the most popular editors today. Developers love it because it has oodles of plugins, but let us consider how it's written. Atom uses Electron, which is essentially an entire webbrowser married to a NodeJS runtime. That's two Javascript engines bundled up into a single app. Electron apps use browser drawing apis which delegate to native drawing apis, which then delegate to the GPU (if you're luck) for the actual drawing. So many layers.

    Again, nothing related to modern OSes being inadequate. One could use e.g. Cocoa and get 10x what Electron offers, for 10x the speed, but it would be limited in portability.

    >Even fairly simple apps are pretty complex these days. An email app, like the one above is conceptually simple. It should just be a few database queries, a text editor, and a module that knows how to communicate with IMAP and SMTP servers. Yet writing a new email client is very difficult and consumes many megabytes on disk, so few people do it.

    First, I doubt one of the reasons "few people do it" is because it "consumes many megabytes on disk" (what? whatever).

    Second, the author vastly underestimates how hard it is to handle protocols like IMAP, or to write a "text editor" that can handle all the subtleties of email (which include almost a full-blown HTML renderer). Now, if he means 'people should be able to write an emailer easily iff all the constituent parts were available as libraries and widgets', then yeah, duh!

    >Mac OS X was once a shining beacon of new features, with every release showing profound progress and invention. Quartz 2D! Expose! System wide device syncing! Widgets! Today, however Apple puts little effort into their desktop operating system besides changing the theme every now and then and increasing hooks to their mobile devices.

    Yeah, and writing a whole new FS, a whole new 3D graphics stack, memory compression, seamless cloud file storage, handoff, the move to 64-bit everything, bitcode, and tons of other things besides. Just because they are not shiny doesn't mean there are no new features there.

    >A new filesystem and a new video encoding format. Really, that's it?

    Yeah, because a new FS is so trivial -- they should also rewrite the whole kernel at the same time, for extra fun.

    >Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps? There is no technical reason why this shouldn't be possible. Application windows are just bitmaps at the end of the day, but the OS guys haven't built it because it's not a priority.

    There's also no real reason this should be offered. Or that it should be a priority. If every possible feature someone might think was "a priority" got built, OSes would be horrible messes.

    >Why can't I have a file in two places at once on my filesystem? Why is it fundamentally hierarchical? Why can I sort by tags and metadata?

    Note how you can do all those things in OS X (you can have aliases and symlinks and hard links, can add tags and metadata, and can sort by them). And in Windows I'd presume.

    And it's "fundamentally hierarchical" because that's how we think about stuff. But it also offers all kind of non hierarchical views, Spotlight and Tags based views for one.

    >Any web app can be zoomed. I can just hit command + and the text grows bigger. Everything inside the window automatically rescales to adapt. Why don't my native apps do that? Why can't I have one window big and another small? Or even scale them automatically as I move between the windows? All of these things are trivial to do with a compositing window manager, which has been commonplace for well over a decade.

    Because bitmap assets. Suddenly all those things are not so "trivial".

    There are good arguments to be made about our OSes being held back by legacy cruft (POSIX for one) and new avenues to explore, old stuff that worked better than what we have now, etc.

    But TFA is not making them.

  • ageofwant 2412 days ago
    Most of what the author craves can be cobbled together from existing components. On Linux at least. If you don't use Linux you have bigger issues to deal with first.

    He can start by using a tiling window manager, like i3.

    "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so."

    • joshmarinacci 2412 days ago
      As I stated in the article. None of this is new. It just hasn't been put together into a usable package.
  • free_everybody 2412 days ago
    Realistically, how difficult is it to write a brand new operating system like this? Could a few people with full-time jobs write a working model in a year? Maybe 10 people? Is it just too time consuming with too little of a payout? There should be more options; I think a lot of people can agree on that.
    • joshmarinacci 2412 days ago
      While I was unemployed I almost launched a Kickstarter to build it, but I figured not enough people were interested.

      I think a runnable prototype could be done by a couple of people in a year if you focused on a very tight hardware subset, say Raspberry Pi 3 and VirtualBox x86.

  • anc84 2411 days ago
    > It took tremendous effort to get 3D accelerated Doom to work inside of X windows in the mid 2000s, something that was trivial with mid-1990s Microsoft Windows.

    Huh? I am not aware of a 3D accelerated Doom version on Windows in that timeframe nor that it was hard on Linux 10 years later. Any pointers?

  • zvrba 2412 days ago
    This sounds like a rant from a person not really acquainted with operating systems.

    > Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps?

    How would this even be semantically meaningful? What about top-level components like menus which are completely different?

    > Why can't I have a file in two places at once on my filesystem?

    Umm, soft and hard links do exactly that.

    > Why can't I speak commands to my computer

    Cortana takes a shot at that. Personally, I don't even want to try out the feature until it has the level of comprehension corresponding to a human. Otherwise, I'll just be guessing how to spell out my sentences / commands..

    > or have it watch as I draw signs in the air, or better yet watch as I work to tell me when I'm tired and should take a break.

    Because these are hard problems in computer vision, unrelated to operating systems.

    > Each application has its own part of the filesystem

    Yes, I wouldn't want to give up on that. It's orderly.

    > its own config system, and its own preferences, database

    Well, Windows unifies this in the registry. It's somewhat unpopular.

    > Traditional filesystems are hierarchical, slow to search, and don't natively store all of the metadata we need.

    NTFS can store extended metadata + arbitrary data in alternate data streams. Doesn't seem to be used very much.

    > I'd like to pipe my Skype call to a video analysis service while I'm chatting, but I can't really run a video stream through awk or sed.

    The video stream is a stream of bytes. Skype interprets it and constructs a video from that byte stream. Does he suggest that this interpreter should be part of the kernel? That there is one single video streaming protocol that fits all purposes?

    > Native Applications are heavy weight,

    Um? I have yet to see a "non-native" application that is as snappy as a native one.

    > take a long time to develop and very siloed.

    Any application takes a long time to develop. If you care about stability, crash recovery, etc.

    > Wouldn't it be easier to build a new email client if the database was already built for you?

    Exists, integrated in the Windows OS: https://en.wikipedia.org/wiki/Extensible_Storage_Engine

    > The UI would only be a few lines of code.

    It's the logic behind the UI that's complicated, not building the UI itself (heck, you can just draw it if you use C# or VB).

    > If you want to make a program that works with the song database you have to reverse engineer iTunes DB format

    Even if the hypothetical document DB existed, how would one program know about the schema of other programs? Or schema versioning, or...? The problems with proprietary formats won't just disappear, it'll just become easier to do the wrong thing based on misinterpretation of the other program's schema.

    > Message Bus [...] All applications become small modules that communicate through the message bus for everything.

    COM, DCOM, CORBA... The first two are made user-friendly on Windows by C#. Don't know whether it's possible to snoop on COM messages, but given the thickness of the documentation on COM, I'd say the answer is "yes".
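
    For readers who haven't met COM, the underlying idea being proposed is just publish/subscribe, with snooping for free; a toy in-process sketch (the topic names are invented):

        from collections import defaultdict

        class MessageBus:
            def __init__(self):
                self.subscribers = defaultdict(list)

            def subscribe(self, topic, handler):
                self.subscribers[topic].append(handler)

            def publish(self, topic, payload):
                for handler in self.subscribers[topic]:
                    handler(payload)

        bus = MessageBus()
        # A "snooper" watches every message simply by subscribing to the same topic.
        bus.subscribe("email.received", lambda msg: print("notifier:", msg["subject"]))
        bus.subscribe("email.received", lambda msg: print("indexer :", msg["subject"]))
        bus.publish("email.received", {"subject": "Hello", "from": "alice@example.com"})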

    > However, this also means we have to rebuild everything from scratch.

    Yes. Windows already exposes an insane amount of helper objects as COM components.

    > You could build a new email frontend in an afternoon...

    In which alternate universe?

    > I really like the commandline as an interface sometimes, it's the pure text nature that bothers me. Instead of chaining CLI apps together with text streams we need something richer, like serialized object streams (think JSON but more efficient).

    He should read up on Powershell. It's also extensible and can directly invoke COM components (+ all of the .net framework).
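
    Agreed on PowerShell: its pipeline passes objects rather than text. The closest cheap approximation with plain Unix pipes is one JSON object per line; a filter stage might look like this (a sketch, not a proposal for the actual wire format):

        import json
        import sys

        # Read one JSON object per line on stdin, keep the "large" records,
        # and emit them as JSON lines on stdout for the next stage.
        for line in sys.stdin:
            record = json.loads(line)
            if record.get("size", 0) > 1_000_000:
                sys.stdout.write(json.dumps(record) + "\n")

    Chained as producer | filter.py | consumer, each stage sees structured records instead of scraping text.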

    > System Side Semantic Keybindings

    That one may be original, although I think KDE has something like it.

    > The clipboard should be visible on screen as some sort of a shelf that shows the recent items I've copied.

    IIRC, I've seen something like this in KDE. Earlier versions of Windows had some "clipboard manager" too, though it seems to have disappeared in new versions. Plenty of freeware ones though.

    > In the new system all applications are tiny isolated things which only know what the system tells them.

    That's how Windows UWP applications behave. App store ones too. IIRC, some old, then-mainstream OSes tried that kind of separation and it didn't work well with users. Sometimes you want to share data between isolation domains.

    > None of this is New

    No, and it seems that, feature-wise, Windows is closest to his dream OS. Now he just needs to convince programmers to use the features that are already there :-)

    • mikelward 2412 days ago
      Tabs should just be a window manager concept. You wouldn't be docking one app into another, you'd be merging two windows into a shared window (or: multiple panes into a single frame, if you want to extend the metaphor).

      See Fluxbox for an example. http://fluxbox.sourceforge.net/features/tabs.php

  • meesterdude 2412 days ago
    > I could take a snapshot of a screen. This would store the current state of everything, even my keybindings. I can continue working, but if I want I could rollback to that snapshot.

    We can already do this with virtualization (and I make use of it extensively).

    • egypturnash 2412 days ago
      The fun part: making this something a non-technical user would ever even realize is a possibility, much less consider doing.
  • jokoon 2412 days ago
    There are really millions of small things that I would make in a new desktop OS.

    First would be to forget the whole idea of resizable windows. Windows should only tile automatically. Tabbed interfaces have shown that a simple task bar is enough.

    File explorers would resize their columns automatically... I can't believe that both OS X and Windows 10 still get this wrong.

    Ultimately I would let applications use hardware directly instead of relying on how the OS does things. This would increase cross-compatibility and developer freedom. Goodbye Qt and all those horrors of the past.

    Not to mention the millions of small utilities and features, like WinDirStat and foobar2000, that would be ideal for making the OS a little more useful.

  • michaelmrose 2412 days ago
    "Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs."

    What does this mean? Some can be controlled by other programs via IPC.

  • OOPMan 2412 days ago
    Ah, the age-old assumption among developers:

    Everything is terrible and broken, the only way to fix it is to throw everything in the bin and start from scratch.

    Some things never change...

  • untangle 2412 days ago
    Perhaps the new OS prototypes could be built on top of a hypervisor. Yes, it's a layer. But building from the hypervisor up would be a nice jump-start.
  • jrs95 2412 days ago
    I'm not really sure if a system wide document database is an improvement over Core Data or not...
  • djhworld 2412 days ago
    I like the optimism in this post; there are a lot of dismissive comments on here.

    However, I just don't think any of the ideas would ever really function. The idea of letting you pipe your Skype video feed to some video analysis tool would never happen.

    Similarly you'd never get application developers to open up their apps in such a way where you can extract/import content.

  • osteele 2412 days ago
    This could be the first half of a good article. It's a list of things the author cares about that variously aren't (yet) possible, aren't a good idea, that nobody cares enough to make happen, or that (maybe) indicate a failure of market forces.

    What would make this interesting (to me) is a discussion not of the fact that these features don't exist, but of why they don't.

    • TheOtherHobbes 2412 days ago
      They don't exist because your OS will only succeed if you're a successful monopoly (in at least a couple of market segments) with the market leverage to force adoption.

      And if you're a monopolist with the market leverage to force adoption, you're very unlikely indeed to also be a leader in OS R&D.

      A more fundamental problem is that this wish list is only really of interest to developers. The average user doesn't care about OS configurability or the kind of OS-level task programming that's being talked about here, and they're unlikely to use these features unless there's a super-simple UI to make them accessible.

      Personally I'd love to see more debate about OS design, and more movement and improvement. IMO all the modern OS options pretty much suck in many ways.

      But realistically, I know you don't get much bottom-up invention in a market-driven economy when niches have already been filled with okay-I-guess solutions.

      You only get commoditisation and tinkering, and those are a long way from streamlined genius-level excellence.

  • consultSKI 2412 days ago
    >> think JSON but more efficient

    Amen. Seriously though, a lot of great insight.

  • d4r114 2412 days ago
    PJON could fit as the message bus the author describes.
  • SomeHacker44 2412 days ago
    Please write it from the ground up in Common Lisp. :)
  • jackcosgrove 2412 days ago
    How does this new OS handle backwards compatibility?

    I've always thought the next evolution of the OS was to be a hypervisor for application containers that can communicate via a common message bus.

    • yberreby 2412 days ago
      Sounds like what you want is a microkernel with excellent IPC. There are still the problems of the window manager, incompatible protocols, and backwards compatibility, but I think a very small, robust core with discrete components that communicate in an isolated way instead of sharing resources is the way forward.

      If you start viewing the filesystem as a huge global variable, it becomes obvious that something is wrong with modern OS / app design. One wouldn't tolerate this kind of uncontrolled sharing in a regular program; why should we tolerate it in an OS? Permissions help, but don't solve the issue, as they are still an access control measure on top of an already flawed model.
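
      To make the global-variable analogy concrete, here's a rough sketch of the difference in ordinary code (the names are purely illustrative):

          from pathlib import Path

          # "Global variable" style: any code can reach anywhere on disk.
          def save_note_global(text):
              Path("/home/user/notes/today.txt").write_text(text)

          # Capability style: the caller hands over only the directory this
          # code is allowed to touch; nothing else is reachable from inside.
          def save_note_scoped(notes_dir: Path, text: str):
              (notes_dir / "today.txt").write_text(text)

          save_note_scoped(Path("/home/user/notes"), "hello")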

  • maxekman 2412 days ago
    The suggested OS sounds a lot like Plan 9 to me.
  • rado 2412 days ago
    macOS' recent Metal feature is great and extended my MacBook Air's life by 2 years. People keep forgetting it.
  • nkristoffersen 2412 days ago
    Sounds like he should be using iOS honestly.
  • Xorlev 2412 days ago
    I believe I understand the vision that the author is trying to paint. I don't think he's alone, but the reality is that building a full OS is a pretty massive undertaking. Additionally, his idea of simplicity may be complexity for others. I want to explore this a bit from a point of optimism, because it's very, very easy to find flaws in a manifesto that sets out to redesign an operating system.

    There's a lot of interesting experimentation in OS-land. BeOS (and its successor, Haiku [1]) is called out explicitly by the author. BeOS/Haiku use the idea of apps as modules that expose functionality across a message bus. Redox OS [2] (a Rust OS) is built on the microkernel concept. These are both kind of on the fringe at the moment, so let me bring up one more platform that many of us use daily: the modern web browser.

    Chrome (and Firefox, ChromeOS, etc.) actually do take many of these concepts to heart. Now, I know more about Chrome than Firefox or ChromeOS, so let me set those aside for a moment.

    - "Everything done via a message bus." This is Chrome extensions in a nutshell. - "Dockable tabs in any window." - "A CLI with structured data." Sorta Chrome debugger+JS. With some effort, this could be a lot more powerful. The author's desire to pipe a video call to an analysis service is a fairly tough requirement here and obviously wouldn't fly in Chrome either, but that isn't to say that it'd be impossible. - "A built-in document database." (IndexedDB) - "Working sets." Chrome profiles -- try them! - "Apps become Modules." This is more of a miss, but if you squint enough through a powerful enough lens, the APIs exposed by Chrome to extensions/webpages are a lot like this. That said, given that everything on Chrome is more site-centric vs. computer-centric, things are namespaced vs. Spotify being able to execute arbitrary queries for MP3s.

    Now, I'm not going to say that Chrome is IdealOS. There is much from that vision that's missing. And I'd also say that webapps just aren't always an acceptable substitute for native applications. Through massive wastes of computing power, we are getting closer (see: Slack, Atom, all things Electron). We aren't there yet. I'll always take a native app if it's written decently.

    It seems to me like in general much of this vision is being expressed through disparate efforts, but only a few are tackling the idea of replacing the full OS. Chrome seems best poised in many ways because it's already on your existing OS. Yes, it's having to use the underlying OS' APIs and such, but you can argue it's just one more layer. ChromeOS seems to do a pretty good job of eliminating even that.

    In general, I'm excited to see discussion on operating systems. The OSes used by the general public are already here: Android and iOS. It's up to us to build a better future for those of us using workstations.

    Disclaimer: I do not work on Chrome, but I do work for its parent organization. My views in no way reflect those of my employer.

    [1] https://en.wikipedia.org/wiki/Haiku_(operating_system) [2] https://en.wikipedia.org/wiki/Redox_(operating_system)

  • tomc1985 2412 days ago
    I'm getting to the point where Medium articles with stock imagery are instantly ignored.
  • pvdebbe 2412 days ago
    Complecting a GUI into an OS doesn't sound very ideal to me.
  • kyberias 2412 days ago
    So much incorrect stuff in the text, I stopped reading.
  • halo 2412 days ago
    This aligns a lot with my personal thoughts about desktop operating systems, especially the document database (a la BeFS on steroids), which is something I've thought about for years and which would be a huge improvement over the current situation in a lot of use cases.

    I've long felt that applications and "package management" is still extremely poor. Applications should be self-contained to a single file (software.app) and have no shared dependencies outside the OS. The OS could have built-in support for compression (e.g. software.app.compressed) for software that needs it. Each application has one settings file per user. Sending, moving and backing up software then becomes a breeze. Uninstalling becomes a matter of deleting a file.

    No shared libraries. It's 2017 and bandwidth and disk space are not major problems. An OS will be able to figure out when it doesn't need to load more than one copy of a dynamic library to avoid using excess memory. The OS should be 'batteries included' so truly native applications will be tiny. You want to proactively discourage ports to encourage native software, and need developers to think twice before using bloated or interconnected dependencies.

    This greatly simplifies creating any "app store"/"package manager". It will largely download and update individual files.

    All software is sandboxed, with permissions required to do anything interesting, a la iOS.

    Title bars should be stackable to turn into tabs, a la Haiku's stack-and-tile (https://www.haiku-os.org/docs/userguide/en/gui.html#stack-ti...). Everyone uses tabs in web browsers; it is overdue to bring them into desktop OSes as first-class features, where they would greatly improve multitasking. The title bar goes from being redundant to being a core feature.

    I agree that creating a good new operating system requires starting from scratch and that is really, really hard. Broadly speaking, you want to discourage ports, as it's a short-cut that will remove a lot of the advantages of the OS and will discourage people from creating native software. Any new OS needs to have the outright aim of making commercial software viable to be successful, which is something Linux has struggled with.

    Any new OS needs to be very polished and slick visually, which should be one of the lessons from Mac OS X and its relative success over BeOS and NeXTSTEP, which were much less visually appealing.

    Practically, I've wondered if you could focus on a single low-cost piece of hardware - a Raspberry Pi-in-a-box, perhaps, coupled with a VM version. This could limit the scope of the task and you might get a good amount of enthusiasts beyond free software evangelists.

    I also wonder if thin translation layers over established libraries would vastly speed up development by allowing a working version to be produced much faster, even though you would want to replace them with something better in the long run.

  • Ezhik 2412 days ago
    I was throwing out hypotheticals with a couple of friends a few days back. One problem that I felt things like Samsung DeX and Windows Continuum were trying to solve was the fact that all your devices are ultimately separate computers.

    Your currently open apps, your configuration, even things like your wallpaper are still ultimately different across your devices. Each device has its own state, and while with things like cloud file syncing, Pushbullet, etc. you can make your devices at the very least aware of each other, in the end they still have separate states.

    The endgame would be to just have a single state, period. Your computer would be every device you have. You would be able to drag a window from your phone to your desktop to your HoloLens. Every file you have in your life is always with you.

    But that's the faraway future.

    Something possible with today's hardware (but not software), however, would be to have phones with smart docks. Instead of just being hubs to connect the phone to a screen and a keyboard, they would also have processing power, and be proper computers in their own right to which the phone would be able to offload complex computations. But I'm thinking it should be less like an external GPU dock, and more like a server for remote compilation or video rendering. This way, for example, you'd even be able to do things like starting to render a video while your phone is docked, then undocking while the video is still rendering on the dock, or you could launch a game that runs on the dock and is controlled from your phone - something like AirPlay, but the processing takes place on the dock. So ultimately, while you still have multiple computers, there is only a single state, which is on your phone.

    The software is the hard part here. We can build a smartphone and a smart dock, and have a fast enough data protocol to transfer content between each other through USB-C. But who will write the OS? Where do you get the apps? Why would Adobe bother porting After Effects to run on a phone of all things, and then also restructure it to be aware of the whole smart dock concept, when After Effects can do something like this today as it is? Why would game developers bother writing their games in a way that specifically supports this dock paradigm when they can get the same general idea on the Nintendo Switch for free? And so on.

    This, just like OP's idea, would take a reboot. The problem with a reboot is that it's a reboot. You cannot do that. Microsoft cannot do it, which is why Windows 10 still runs very old software. Apple can't do it, which is why Carbon was a thing. Linux can't do that, because Red Hat and Canonical will not throw their customers under a bus.

    But still, it's fun to daydream. Being told to stop even imagining the impossible is not exactly going to help innovation.

  • ZenPsycho 2412 days ago
    This runs parallel to a lot of my thoughts. One thing that you don't quite address, and which I believe has derailed all efforts to do stuff like this, is the challenge of getting a large group of developers to agree on a single set of data formats. It is only once you nail that that many of the composition/copy/paste things become possible. Some of these formats are easy: JPEG, PNG, UTF-8. When it comes to something like the metadata schema for a song, or a recipe? That's a can of worms and flamewars.

    To some extent you've got the DBFS thing that everything shares, but that's only of use for sharing insofar as you can get easy agreement about what field names should be available for a kind of thing.
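
    Even a "simple" shared song record shows how much there is to agree on; the field names here are purely hypothetical:

        song = {
            "type": "song",
            "title": "Example Track",
            "artist": "Example Artist",   # or a list of artists?
            "album": "Example Album",
            "duration_seconds": 215,      # seconds, or milliseconds?
            "rating": 4,                  # 1-5, or 0-100? another flamewar
            "genre": "ambient",           # free text, or a controlled vocabulary?
        }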

    You've also got security concerns. If everything shares the same database, any random bit of code can ship that data off to a Russian data-mining op, or corrupt your song database, or encrypt everything and ransom it. You kind of address this by putting a layer of indirection here and having security and access managed via the message bus, but this needs a UI, and I don't think Apple, Android, or Facebook has really mastered the UI for permissions.

  • jonahss 2412 days ago
    Look to mobile OSes for innovation in OS design. Like the author stated, it's currently where the money is. It's the closest we have to "starting over", and a lot of things were rethought, such as security and sandboxed apps. IPC is limited to start with, but slowly growing.

    I wouldn't be surprised if the workstation OSes of the future grew out of our current mobile OSes.

  • SiempreZeus 2411 days ago
    Most of the stuff he wants is either trivial or an implementation hell that will never happen. Hard and soft links solve the hierarchical filesystem problem, KWin has window tabbing (nobody uses it), and rewriting every single app from scratch for a theoretical gain? Sure, that's happening.

    The only good idea is the system document database, and it isn't really that useful.

  • casebash 2412 days ago
    I wouldn't say that innovation on the desktop is dead, but most of it seems to be driven by features or design patterns copied from mobile or tablet. Take Windows 8 and Windows 10, for example: Windows 8 was all about moving to an OS that could run on a whole host of devices, while Windows 10 was all about fixing the errors made in that transition.
  • fundabulousrIII 2411 days ago
    I used to think a system bus concentrator for disparate communications was the way forward, but it always ends in tears. You have created an interrupt-handling system in userspace with n * x permutations and specifications. This is a literal Tower of Babel and is very easily abused.
  • zanedb 2412 days ago
    I think the ideas proposed here are excellent, but again there is the problem of monetization. Where will a product like this get funded, and how will it be monetized?
  • romanovcode 2412 days ago
    > In the screenshot below the user is lifting the edge of a window up to see what's underneath. That's super cool!

    > https://joshondesign.com//images2/lift-window.png

    Is this sarcasm? Because it is complete garbage.

    • rocky1138 2412 days ago
      Why is it complete garbage? There have been a ton of times I've been typing something in but needed to see what's behind it. Reaching over for the mouse is slow.

      In fact, do you know what the keybinding to roll up windows in KDE or XFCE is? I could use that.

      • userbinator 2412 days ago
        > There have been a ton of times I've been typing something in but needed to see what's behind it. Reaching over for the mouse is slow.

        I use Windows, where there is this utility which lets you adjust the transparency of windows from the keyboard: http://www.vanmiddlesworth.org/vitrite/

        Thus you can see the contents of both the upper and lower windows at once, and not just briefly. Very useful on a laptop with limited screen space. I know adjustable window transparency is definitely possible on Linux too.

        On the other hand, this looks like it's just effects for the sake of effects.

      • lscotte 2412 days ago
        That's why I use a tiling wm (i3). All windows are always on top.
        • rocky1138 2412 days ago
          Is there an easy way to avoid horizontal scrolling on webpages in these windows? That always bugs the crap out of me. I wish there was "fit to window" zoom in Chrome/Firefox.
      • ageofwant 2412 days ago
        Why is it behind the window in the first place? Get and use a WM that solves all your problems. I use i3, and so should you ;-)
    • digi_owl 2412 days ago
      The example may not be doing the idea any service, but the idea itself seems to be about being able to peek at the contents of a window without having to actually bring it to the front.

      That said, I wonder if this is why some people love using "focus follows mouse", or whatever it is called. Just drag the mouse pointer to the window you want to peek at, then back to the main window once done. No need to look for a safe place to click or to use keyboard commands (though I guess something similar could be achieved with a WM that brings each window to the front as you alt-tab around).

    • rrdharan 2412 days ago
    • baybal2 2412 days ago
      Indeed, and it appears that the guy has some relation to Mozilla.