35 comments

  • rbanffy 32 days ago
    It's a lot like the Burroughs B-25 (AKA the Convergent NGEN).

    https://archive.org/details/bitsavers_convergent5Brochure198...

    Convergent extended that idea into their Megaframe, which could be expanded by adding more enclosures, each with a number of separate processors.

    http://bitsavers.org/magazines/Mini-Micro_Systems/198304_Meg...

    This last one lists our familiar Steve Blank as one of its authors.

    It must have been a sight to behold:

    https://archive.org/details/bitsavers_MiniMicroSrDigest_8947...

    • Manfred 29 days ago
      I got to borrow one of those Burroughs machines from a collector when I was 16, and they were really well built, but not great for a teen who wanted color screens and games. I did learn a tiny bit of Fortran.
      • rbanffy 29 days ago
        Lots of envy here... I never even saw one in person. Would love to play with it.
    • bombcar 29 days ago
      It involves the 186, the rarest of x86 processors.

      An 11 megabyte bus in 1983 is pretty impressive.

      • rbanffy 29 days ago
        They were Sun's "the network is the computer" well before Sun had that idea.
    • hsnewman 29 days ago
      I worked on those in the mid-'80s. Very easy to program (we used PDS-ADEPT), but the Burroughs Megaframe would always have issues.
  • jhbadger 30 days ago
    >Pleased with his concept, Fitch named it Jonathan (after himself)

    While it may not have been coincidental that Fitch's name was Jonathan, another reason is that the Jonathan is a strain of apple, as is the McIntosh (spelled Macintosh for the Apple product).

    • philwelch 29 days ago
      Yes, this point is made in the footnote to the exact section you’ve quoted.
    • mattl 29 days ago
      There’s a McIntosh Hi Fi company which may have had some influence there too.
  • AntiRush 29 days ago
    There’s someone on Reddit who is building some real Jonathan modules:

    https://www.reddit.com/r/VintageApple/comments/1at3bjb/jonat...

  • Findecanor 30 days ago
    That reminds me of the TI-99's "sidecars". Those were flat on the desk, so the computer could get quite wide.

    http://www.mainbyte.com/ti99/hardware/sidecar.html

    The Amiga 1000 supported sidecars but later big-box Amigas had internal expansion slots like PCs. The wedge-shaped home computer Amiga 500 supported sidecars. I had an Amiga 500 with SCSI HD sidecar, and a video grabber which was two sidecars (one for the RGB splitter), but the HD and grabber didn't work together.

    • SoftTalker 29 days ago
      The TI-99 also had an expansion box that accepted cards for more RAM, RS-232, disk controllers, and other functions.

      https://www.arcadeshopper.com/wp/ti-99-4a-faq-peripheral-exp...

      The cable to connect the box to the computer console was ridiculously thick and heavy, and rarely shown in marketing photos.

      • mrkstu 29 days ago
        I had this as a kid, hooked up alongside the voice synthesizer.

        I never had a reason to actually buy an expansion card, but having the disk drive was nice, and I was the only kid I knew with a disk drive instead of tape - the local library even let you check out TI shareware-type software on cassette.

        Great platform, other than some clunky decisions up front that crippled the hardware so it wouldn't compete with their business lines, and the attempt to keep out third-party developers.

    • spc476 29 days ago
      The IBM PCjr also had sidecars. I had a PCjr as a kid with two memory sidecars and a parallel port sidecar. And yes, it did make the computer wide.
    • musicale 29 days ago
      Amusing but not entirely practical.

      That being said, Apple's modular design gives me a Eurorack vibe that TI's lacks.

  • leptons 29 days ago
    As a 15 year old in 1985, I had dreams of a very similar concept, except I was fascinated by the Transputer CPU as the heart of the system I was dreaming about.

    https://en.wikipedia.org/wiki/Transputer

    The "transputer" was a CPU that had high speed serial interconnects that could connect to other transputer chips to support parallel processing. I was so enamored by this chip that I contacted the company to try to get datasheets, and the guy at the company couldn't believe I was a 15 year old. He sent me some brochures anyway.

    My dream was a modular system, with each module catering to different computing needs. I/O modules, compute modules, storage modules, even a printer module - and each module would contain at least 1 transputer chip so it could talk to all the other CPUs in the system.

    Want a faster computer? Just add some compute-only modules that contained 4 or 8 transputer chips, all working in parallel. Each peripheral added like a hard drive module or I/O module would have at least 1 transputer CPU so you would end up with a faster computer simply by plugging in any kind of module. I had so many drawings in my high-school notebook about my dream computer system.

    I had never heard of the "Jonathan" computer idea until today on HN, but back in 1985 I would have been very excited about it. It seems very similar to what I thought I wanted back then, though without the parallel processing aspect.

    • zackmorris 29 days ago
      Ya, you're just a few years older than me. I remember thinking the same thing in the '80s, but I didn't learn what a transputer was until just a few years ago.

      I truly believe that we could have been on an alternate timeline completely different from what we have today. Basically nothing now is how I would do it. Not hardware, not programming languages, not video cards, not networking, none of it. It's all... runner-up solutions. Nothing really revolutionary, just evolutionary over enough decades that it approximates what might have been.

      Had I had any early wins at all in the critical era from 1995 to 2001 before the Dot Bomb and 9/11 sent us down this alternate timeline where one guy has all the money and resources in the world while everyone else makes rent, I would have designed infinitely scalable hardware and written languages to recruit it. Basically we'd all be sharing our neighbors' processing power and bandwidth in a free and secure way, which would feel like BitTorrent compared to dialup. Instead I spent most of those years in college and struggling to survive after everything fell apart, no better off today than I was 25 years ago.

      I perceive computers as having run about 1000 times slower than they should have in 2010, and about a million times slower today, coming up on a billion by 2030. GPUs are sort of a stopgap to hide the fact that Moore's law died around 2007 when smartphones arrived.

      I've lost hope that any help will come from the top, just like with political parties. If we want real performance and to get to a Star Trek future with stuff like UBI in our lifetimes, we're going to have to self-organize and do it ourselves. Which basically means that someone who won the internet lottery or actually bought Bitcoin when it was $10 (like I didn't) will have to choose to pay it forward instead of doubling down on whatever all this is. I imagine that people like that exist out there, I've just never met one. Then we get an army of prolific young people and experienced veterans getting real work done outside the profit motive for the benefit of all humankind.

  • jefurii 30 days ago
    An S-100 bus computer as done by Apple! I love this form-factor even though I can understand why it didn't catch on. The early personal computer era was exciting in the same way as the early 1900s were for aeroplane design.

    Maybe it wouldn't have been the nightmare the author imagines - Apple figured out how to connect all sorts of gear using AppleTalk, and somebody else here pointed out that the chaos of IBM PC compatibles is probably what helped PCs really take off.

    This would've used more desk space tho.

    • shrubble 29 days ago
      I didn't see anything about S-100 mentioned in the article - did I miss it?
      • leejoramo 29 days ago
        It wasn't mentioned. I think they are referring to how the Jonathan's bus is more similar to S-100 than to a PC's motherboard card bus. There was no primary motherboard that ran the bus like in the Apple II or IBM PC.

        I ran TRS-80 systems and never had an S-100 machine, but I was always fascinated by how it could have multiple motherboards on it with different CPUs. I think S-100 was more similar to SCSI or Ethernet than to a PCI bus.

        • rbanffy 29 days ago
          IIRC, it's entirely possible to build an Apple II on an Apple II card and let it drive a completely passive backplane with slots 0 through 7. I think it'd even be possible to drive the bus from any slot.

          The Jonathan bus would probably be a lot more robust, however, as power delivery was an issue on loaded Apple II's (with too much current flowing through too few VCC and GND pins).

        • convolvatron 29 days ago
          Not really like SCSI - that's more like USB, with a central controller. More like VME - a bus that looked a lot like a 68k bus, but supported multiple masters with a protocol to negotiate temporary ownership to assert a transaction.

          There was a single address space, so every board had a set of DIP switches to give it its address, which was always a big source of pain.

          I can't imagine Apple shipping a computer product like that, so maybe you get the high bits based on what slot you're in and a config EPROM to do discovery?
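
          Something like this, maybe - a purely hypothetical sketch where the slot number supplies the top address bits and a small config ROM at a fixed offset in each slot's window handles discovery (the numbers and the ROM layout below are entirely made up):

            #include <stdio.h>
            #include <stdint.h>

            #define SLOT_WINDOW   0x01000000u  /* 16 MB of address space per slot (invented) */
            #define CONFIG_OFFSET 0x00FF0000u  /* config ROM near the top of the window (invented) */

            struct config_rom {                /* hypothetical record a card would expose */
                uint16_t vendor_id;
                uint16_t device_id;
                uint32_t mem_needed;           /* how much of its window the card really uses */
            };

            static unsigned slot_base(unsigned slot) {
                return slot * SLOT_WINDOW;     /* the "high bits" come from the slot itself */
            }

            int main(void) {
                for (unsigned slot = 0; slot < 8; slot++) {
                    printf("slot %u: window 0x%08X, config ROM at 0x%08X\n",
                           slot, slot_base(slot), slot_base(slot) + CONFIG_OFFSET);
                    /* a real host would read a struct config_rom from that address here */
                }
                return 0;
            }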

          • leejoramo 29 days ago
            Interesting.

            I suspect if Apple had done this they would have had some way for the cards to auto-negotiate.

            A few years after the Jonathan prototype, I remember fighting with the DIP switches on PC ISA cards and setting the correct IRQs, and then seeing a friend drop a NuBus card into their Macintosh II and have the hardware magically configured. I wonder if NuBus could have multiple masters?

            • rbanffy 29 days ago
              Apple II cards knew which slot they were in and could have small ROMs with simple IO functions.
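
              Roughly like this, if I remember the scheme right (treat the exact addresses as approximate) - each card's ROM window and I/O block are derived from the slot number, so the card effectively knows where it is:

                #include <stdio.h>

                /* Apple II slot-relative addressing, from memory:
                 * slot n's 256-byte ROM appears at $Cn00, and its 16
                 * "device select" I/O locations at $C080 + n*$10. */
                int main(void) {
                    for (int slot = 1; slot <= 7; slot++) {
                        unsigned rom = 0xC000 + 0x100 * slot;
                        unsigned io  = 0xC080 + 0x010 * slot;
                        printf("slot %d: ROM $%04X, I/O $%04X\n", slot, rom, io);
                    }
                    return 0;
                }
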
          • flenserboy 29 days ago
            I could see them using something like what they did with NuBus — something similar to the declaration ROMs could make this sort of configuration work nicely, as long as part makers played according to the specs.
    • azinman2 29 days ago
      Well, FireWire was developed in the late '80s, and of course later USB made connecting different hardware trivial.
      • queuebert 29 days ago
        From a software perspective, yes, but give me the old serial and parallel cables any day over that damn USB-A connector that never fits in either orientation.
  • CharlesW 30 days ago
    That FrogDesign-era aesthetic is everything. I'm as affected by it as I am by my favorite music of that era.
    • Cockbrand 29 days ago
      The book mentioned in the article, AppleDesign, is full of great photos (of prototypes and shipped products) and interesting stories from that era. It covers the entire Frog Design era in great detail. Highly recommended, but unfortunately quite expensive these days.
    • BugsJustFindMe 29 days ago
      Agreed. I looked at the photos and my immediate visceral reaction was to think "Wow, that's beautiful."
    • azulster 29 days ago
      For such an influential design firm, I'm disappointed in how uninteresting their website design is.
    • helpfulContrib 29 days ago
      I've kept every single computer I've used since 1978, a veritable collection of over 40 machines.

      I still lust for an Apple IIc. Never had one; my eyes go boink whenever I see one in the wild.

      • bombcar 29 days ago
        After the (only) Mac in the computer lab was grabbed, the IIc was the next favorite. Everyone else had to be satisfied with green-screen older ones.
      • azulster 29 days ago
        My dream is to have a lamp-style iMac with modern internals.

        I wish Apple would have fun with the chassis design again.

      • lisper 29 days ago
        You can get them on eBay for a few hundred dollars.
  • trhway 29 days ago
    I think it is informative to look at the 1981 Macintosh business plan (https://archive.computerhistory.org/resources/text/2009/1027...). Page 6, "Clustering of Retail System Prices", shows Apple with a machine in the top 3 of the 4 price bands. (Note the sub-Macintosh VLC, "Very Low Cost", targeting the home market, envisioned by Jobs; in another document of his from that time with 5 bands, which I couldn't find today, it sat in its own sub-$1K band - something that ultimately arrived 30 years later as the iPad and iPhone.) Two pages further there is the slogan "The Advantage of a Product Line is that Each Individual Product Does not have to Do Everything". As far as I can see, a thing like the Jonathan just didn't fit there.
  • pjdesno 30 days ago
    There were lots of projects at Apple that never made it off the ground.

    When I was there in '88-'90 I remember seeing a few prototypes of their Mobius computer - an ARM-based system that emulated an Apple II much faster than the real thing, and was (unfortunately) also faster than a Mac II in native mode. It got canned before they ever got around to designing a case for it, but folks who had the prototypes kept them for quite a while.

    • musicale 29 days ago
      Apple fixed the "faster than a Mac" problem by moving ARM into their first handheld, the Newton.
  • phtrivier 29 days ago
    > This meant that every user could have their own unique Jonathan setup, pulling together various software platforms, storage devices, and hardware capabilities into their own personalized system. Imagining what would have been required for all this to work together gives me a headache.

    Whoa. On the contrary, I could imagine plenty of ways in which it would _simplify_ stuff. Imagine if, instead of having a single computer juggling many software processes contending for the same resources, each application were its own mini-computer, with physical separation of memory and processing, and a single, hardware-defined way of sharing data?

    At this point, our computers are dumb terminals for computation happening on someone else's software, so we're forced to develop distributed systems anyway. But having hardware separation forces you to do the only sane thing (the old "share memory by communicating values", as opposed to "communicate commands and share memory").

    Sort of OTP/Erlang on multiple chips in the same desktop box?
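
    Something like this, as a minimal sketch in plain C rather than Erlang: two processes with completely separate address spaces passing a value over a pipe, i.e. sharing memory only by communicating values.

      #include <stdio.h>
      #include <unistd.h>
      #include <sys/wait.h>

      int main(void) {
          int fds[2];                     /* fds[0] = read end, fds[1] = write end */
          if (pipe(fds) != 0) { perror("pipe"); return 1; }

          pid_t pid = fork();
          if (pid < 0) { perror("fork"); return 1; }

          if (pid == 0) {                 /* "compute module": its own memory, nothing shared */
              close(fds[0]);
              int result = 6 * 7;         /* stand-in for real work */
              write(fds[1], &result, sizeof result);
              close(fds[1]);
              return 0;
          }

          close(fds[1]);                  /* "I/O module": only ever sees the message */
          int answer = 0;
          read(fds[0], &answer, sizeof answer);
          close(fds[0]);
          waitpid(pid, NULL, 0);
          printf("received: %d\n", answer);
          return 0;
      }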

  • sergius 29 days ago
    How about a bus that carries a high-speed network and power... and everything running Plan 9 to glue it all together :-)
    • kevindamm 29 days ago
      mmm tastes like beowulf
      • floren 29 days ago
        In what way? "Beowulf clusters" were built out of off-the-shelf machines connected through relatively normal (if high-speed) networking, running MPI programs on Linux. That's what distinguished them from the more expensive, more custom HPC systems of the day.
        • CodeWriter23 29 days ago
          In the way of it being a pipe dream relentlessly pursued…in words only…by many nerd types. So much so that it turned into a meme during the '90s. Like someone would mention a hamburger and someone else would say imagine a Beowulf cluster of those.

          I kinda doubt ggp’s Plan 9-linked idea has enough legs to take it that far though.

  • gumby 30 days ago
    > In addition to the shared backbone interface, there would need to be software written to make an almost-endless number of configurations work smoothly for the most demanding of users.

    This describes modern devices connected via USB-C or Thunderbolt, which in my experience work fine (eGPUs are a bit specific, but perhaps that will shake out). I don't think we're smarter today than back in the 68030 days; they probably would have worked it out. After all, there have been any number of coprocessor cards for Apple computers over the years.

    • lloeki 29 days ago
      That, or really, closer in timeframe, PCI cards, except the shelf is internal.

      Also, Framework laptop. Or RPis with stackable HATs.

      I think the concept holds water, e.g. thinking of the music realm, from pedalboards to a ton of USB devices (soundcards like Scarlett or Volt, DAS...) to standard rackable items like DACs/ADCs with a ton of I/O.

  • ilaksh 29 days ago
    Just because an idea wasn't implemented or didn't become popular doesn't mean it's not a good idea. I have always strongly felt that expansion cards in desktop computers should work more like this rather than requiring the case to come off.
    • crispyambulance 29 days ago
      Yeah, it has DEFINITELY worked in other contexts.

      NIM modules and CAMAC modules have been popular instrumentation platforms for experimental physics since the 70's and 80's. Of course that's very far from consumer adoption but the concept kinda works. No idea if NIM modules are still in production though!

      https://en.wikipedia.org/wiki/Nuclear_Instrumentation_Module

    • timw4mail 29 days ago
      IBM had at least two computers that used this concept: the PCjr and the IBM Convertible "laptop".

      One of the many problems with sidecars is that they make the computer footprint larger. While it hypothetically allows for unlimited expansion compared to a limited number of internal slots, this would be practically limited by physical space and electrical signaling issues.

    • eloisant 29 days ago
      I'm not sure, I'd rather open the case the rare times I need to add an expansion card than have the whole thing take more space.
      • jasaldivara 29 days ago
        Well, if you don't populate all expansion slots, it takes more space than it needs.
  • bombcar 29 days ago
    Conceptually the Framework laptop is getting moderately close to this.

    Unfortunately, signaling over USB-C or Thunderbolt is now so good that everything is connected by wires instead of being integrated into a "standard case design", so whilst we can (somewhat) have the expandability, we don't have the resulting neatness.

    • marci 29 days ago
      Somebody somewhere will probably see this and build the framework to build something similar out of Framework parts. That is, if it hasn't been done already.
    • fundad 29 days ago
      It’s great. Almost 40 years of miniaturization, energy efficiency and price/performance improvements made it so you can achieve the Jonathan over wires that run outside of the case.

      Modularity like this inside the case is cool and necessary for high-bandwidth needs (gaming and hyperscalers), where you get good price/performance, but those aren't business computers like the Jonathan.

  • imglorp 29 days ago
    They went another way to wall the garden.

    The super-open Apple II family had expansion slots and published circuit diagrams that third parties could use to build all sorts of cards. And build they did.

    Instead of Jonathan, Apple drove towards the 1984 closed Mac ecosystem. You needed a special wrench to gain access: https://www.micromac.com/products/macopener.html

    • dreamcompiler 29 days ago
      It's just a long Torx-15 screwdriver. You can buy them at any hardware store today, but in 1984 they were not very common.
      • quesera 29 days ago
        A suitable replacement could be made from an iron coat hanger of the proper gauge.

        Cut the hanger, square off the cut end with a file, maybe taper it a little, and lightly tap it into the Torx screw teeth.

        The coat hanger was a much softer metal than the screw, so the Torx teeth would bite into the hanger wire, producing a wrench which was good enough for a few uses at least -- more if you were careful on replacement and future removal.

  • MeteorMarc 29 days ago
    Note that Jonathan also is the name of a fruit apple variety: https://minnetonkaorchards.com/jonathan-apples/
  • nntwozz 29 days ago
    Very nice design; that PowerPod 500 on the screen was new to me. Found a pic of it and the larger version 3 here:

    https://newsletter.shifthappens.site/archive/the-cursed-univ...

    • scrumper 29 days ago
      This deserves a submission of its own. Great find.
  • datavirtue 30 days ago
    Surprisingly sexy. Way ahead of its time, and beyond the scope of ideas that a commercial entity would deem even remotely acceptable.
    • flohofwoe 30 days ago
      This sort of open and stackable expansion module system wasn't uncommon at all for 80's home computer systems though.
      • CharlesW 30 days ago
        The TI-99 "sidecars" example is new to me, and I don't remember other "stackable expansion" examples from the time. What do you have in mind?
      • saagarjha 29 days ago
        Note that this sort of open and stackable expansion is illegal to build in most American cities.
  • ThePowerOfFuet 30 days ago
    >This meant that every user could have their own unique Jonathan setup, pulling together various software platforms, storage devices, and hardware capabilities into their own personalized system. Imagining what would have been required for all this to work together gives me a headache. In addition to the shared backbone interface, there would need to be software written to make an almost-endless number of configurations work smoothly for the most demanding of users. It was all very ambitions, but perhaps a little too far-fetched.

    Sounds an awful lot like the mess which is Windows.

    • wildzzz 30 days ago
      There are modern systems that still use this sort of modular backplane design. MicroTCA and PXIe are the two major ones, both providing at least power and a PCIe bus on the backplane. MicroTCA supports carrying Ethernet, SATA, and PCIe on the backplane. However, both of these are intended more for industrial computing.

      I could see a consumer variation based on MicroTCA where the chassis has a pre-installed controller and backplane. The backplane would be configurable based on the CPU you are running: a higher-end CPU would allow for more PCIe lanes, meaning more slots, while a lower-end one would only allow for a few. The computer part is provided by a heavily packaged SBC containing a processor and memory. Mass storage either sits on the SBC as M.2 or on a chassis card connected via SATA or PCIe. You'd have a GPU card connected to an x16 slot (up to two of these for a large chassis), a high-end sound card on another slot, and various interface cards on others. You could even drop in an FPGA accelerator card.

      This would all probably be too expensive for your average consumer though.

    • flohofwoe 30 days ago
      But it was exactly this "mess" of the accidentally opened PC platform that gave us the hardware Cambrian explosion of the late '90s. Without it, Nvidia probably wouldn't exist, and everything that's not Apple would have an IBM logo slapped on ;)
    • boomlinde 29 days ago
      This can be solved through reasonable abstraction. Windows does it, OS X does it, earlier Mac OS and pre-Mac Apple computers did it.

      Your application software doesn't need to know what exact disks you have in your system so long as it can interact with the disks through a standardized interface. Your application software doesn't need to know what other application software is running on your system so long as you can facilitate interoperability through shared resources like storage, clipboards and sockets using standardized formats and protocols. That's how systems have been designed to host arbitrary software and hardware for more than half a century.

      In OS X, `ls` doesn't know that `cat` exists, and neither knows anything about your SSD, yet they have absolutely no problems interoperating.

    • TheOtherHobbes 30 days ago
      It's a great idea from a consumer & aesthetic POV, but the technical challenges would have been interesting.

      It's also a reinvention of one-bus-for-different-boxes computing from the 70s - Unibus, Massbus, and so on.

      It could have been made to work, but it would have been speed-limited, and mechanically unreliable if it wasn't done exactly right.

    • vidarh 30 days ago
      My Amiga 2000 had a PC on a card in it that used a window on my Amiga workbench as output. It's not like making weird stuff work together was unusual.
  • bogantech 30 days ago
    > This meant that every user could have their own unique Jonathan setup, pulling together various software platforms, storage devices, and hardware capabilities into their own personalized system. Imagining what would have been required for all this to work together gives me a headache. In addition to the shared backbone interface, there would need to be software written to make an almost-endless number of configurations work smoothly for the most demanding of users. It was all very ambitions, but perhaps a little too far-fetched.

    gestures at the Amigas and PCs of the time

  • doubloon 29 days ago
    Every time you add a component to a system you double the complexity. If you build automated regression testing it becomes easy to see the impact, as test runtime grows exponentially. So the definition of the interface between components must be able to isolate them from one another. We saw the consequences in PC land with cards and DIP switches and IRQ settings and plug-and-play and even pcpartsbuilder. Versus, say, USB-C.
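
    Back-of-envelope for that first point: each optional module can be present or absent, so the number of configurations you'd have to regression-test doubles with every module you add.

      #include <stdio.h>

      int main(void) {
          unsigned long configs = 1;    /* the bare system, no optional modules */
          for (int modules = 0; modules <= 10; modules++) {
              printf("%2d modules -> %lu configurations to test\n", modules, configs);
              configs *= 2;             /* each new module: present or absent */
          }
          return 0;
      }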

    Like there is a spectrum between pluggable sidecars and Bluetooth. In between are open ISA, SCSI, ADB, USB, Ethernet, and wireless. Each one has tradeoffs in cost per module, cost of manufacture, cost of testing, physical form (space), and user experience. The market decided where the tradeoffs mattered most, and it basically rejected every single sidecar variation ever produced. It's a niche thing that always remains niche due to suboptimal tradeoffs for most use cases.

    And I bet it is mainly a version of the premature optimization issue. A sidecar spends enormous manufacturing resources to optimize something people will do extremely rarely: plug parts of a CPU box together. If you need to frequently plug and unplug, you are better off with some interface that uses cables, and - like sequential I/O vs. random I/O - you wouldn't want to have to dig the module out of the middle of your stack. If you don't need to plug very often, then you wind up paying for an enormous amount of plastic molding work and PCB design and testing for something you do maybe once a year or less. Better off to just buy bare boards like in PC land. But even if you have fancy plastic cases on modules, that doesn't solve the software side. So ironically the thing that makes USB great isn't just the physical design but the software isolation between subsystems.

    • vidarh 29 days ago
      A sidecar-type design is literally just exposing the bus, and yes, the market rejected exposing the bus externally in favor of "just" providing a variety of cases that take cards internally. On the Amiga, it was literally exposing the raw CPU bus.

      Most of the 1980s sidecar designs existed to provide expansion options at price points and in market segments where internal expansion was not viable, and at the time the price of peripherals made the extra plastic a rounding error. I think most of these designs failed largely because the moment you got people thinking about how many extra things they might assemble to get what they want, they'd look for a machine that provides more of that out of the box instead.

      Having a sidecar felt like an "escape hatch" so you didn't feel locked in when buying a fairly basic machine. If you think there's even a possibility you might need 3+, you'd be looking at a big box machine from the start.

      • doubloon 27 days ago
        Then the question becomes: why was the original machine not simply expandable with extra sidecars? Why couldn't they upgrade their TI-99/4A or PCjr with sidecars so it could do the same as a machine that provided "more out of the box"?

        Is it because CPU frequency kept doubling every 18 months or whatever? But I would argue that when the doubling stopped and we reached the 5 GHz limit, we didn't see sidecars make a comeback in computer land.

        The closest thing we have is the external GPU, which is still, after all, cabled, not a sidecar.

        I am again making the hypothesis that the cost of upgrading a machine with sidecars is always going to be higher than just buying a machine that allows lower-cost upgrades via slots or cables.

        Or, in the Apple worldview, it's better 'overall' for the user experience to just buy the next generation, which has very little or no expandability, and guess what people actually need/want based on careful research and connection with the customer base.

        Now, I don't have good data on the cost of sidecars vs. other interconnects, but it seems inherently higher to me. Thanks for the response.

        • vidarh 27 days ago
          It's very simple: because these machines were for the most part cost-cut to the bone so that people could afford to buy them at all.

          When I bought my first Amiga, an Amiga 500 was the most I could afford. When I could afford to add a (sidecar) harddrive, that was all I could afford. If I were to wait until I could afford an Amiga 2000 and an internal harddrive, that might have been cheaper than the combination but it would have meant waiting a couple of years to get one at all.

          These were the market realities these machines were built for. And, yes, that made it a lot less attractive for machines targeting business customers.

          But I'll note that it was not so much because of the cost of the boxes - my first harddrive cost almost as much as the Amiga had cost. Used. It'd have been almost as expensive as an expansion card for a big box, because the controller was pretty much a full computer on its own, and the drive itself was ridiculously expensive.

    • boomlinde 29 days ago
      > Every time you add a component to a system you double the complexity. [...] Versus, say, USB-C.

      Then you concede that it is not necessarily so.

  • userbinator 29 days ago
    Apple might've taken over if they had actually got this to production - their competitor at the time was the IBM PC/AT with its 8 ISA slots, and Apple's bus could've become the "ISA" instead. Then again, given that these modules would cost a lot more for the housings compared to just a PCB with a card-edge connector and mounting bracket, maybe not.
  • brk 29 days ago
    My IBM PCjr utilized this concept with its sidecars. I recall having:

      - 1x memory expansion sidecar
      - 1x parallel port sidecar
      - 1x sound module sidecar
      - 1x extra power sidecar (just power insertion to make up for the anemic default power supply)

    I also had -4 Deskspace - the sidecars made the footprint almost double, so it took up a lot of room!

  • InvisibleUp 29 days ago
    You can see a somewhat similar concept today in the test and measurement field with the PXI standard. It's an open standard of plug-in instrument or computer modules that slot into a chassis. The only real sore spot is the drivers, which are often proprietary Windows DLLs.
  • rgovostes 28 days ago
    Panic, longtime publisher of developer tools for the Mac, have an homage to the Jonathan on the product page of their terminal app, Prompt: https://panic.com/prompt/
  • gigatexal 29 days ago
    This is the nerdiest, coolest thing I've ever seen. I wonder if the next, next Apple Mac Pro will be expandable similar to this.
    • JKCalhoun 29 days ago
      I'm thinking rack-mount, but on its side.

      You know some kids would have been posting GIFs of their rigs to the local BBS, showing a 20-foot-wide desk barely containing the machine.

  • sneak 29 days ago
    I would still like a backplane like this with a bunch of USB3/thunderbolt ports with lots of power available. I’m so tired of USB cables and hubs. It would be nice for our computers to be more like modular synthesizers.
    • dhosek 29 days ago
      You could have USB jacks on the front to set up routes through the hardware and use cables to configure options in your software!
  • lobochrome 29 days ago
    This is what I want the MacPro to be.
    • Findecanor 29 days ago
      In an alternative universe, Apple could have made the Mac Studio the new "Mac Pro", with a slot on the underside for vertical stacking accessories.
      • sneak 29 days ago
        Instead we get peripherals with the same finish and footprint and little U-loops of thunderbolt cables mucking up the top-down view. Even the Raspberry Pi figured this out with their HAT system.
  • titzer 29 days ago
    > Imagining what would have been required for all this to work together gives me a headache. In addition to the shared backbone interface, there would need to be software written to make an almost-endless number of configurations work smoothly for the most demanding of users.

    Uh, what? It's not like there haven't been several standards for busses, I/O, and DMA inside a PC case for decades. Somehow wrapping plastic around a PCI connector makes this harder? I just don't get it. Seems like a great idea to me.

    • jandrese 29 days ago
      In some ways the design is a throwback to the Altair, where the "computer" was just a bus that you slotted whatever functionality you wanted into. Sure, the form factor is a bit nicer (no screws!) but fundamentally it's built the same way.

      He is right that if you're serious about running half a dozen different OSes, the driver situation would have been a nightmare.

    • azulster 29 days ago
      USB solves this, but before USB every serial connection had to be programmed for individually, both in the OS and in the firmware of the attachment.

      We've solved it now with USB for peripherals, but we only just managed to solve the bandwidth problem for PCI, and we are still unable to attach RAM or CPUs via USB with any sort of performance.

  • vincnetas 29 days ago
    Was expecting to find some modern examples of such a modular computer approach in the comments, but am disappointed so far. The Framework laptop is the closest thing I can think of in this regard, and it is basically just a USB hub for extra ports.

    Another example, from mobile phones, was Google's modular phone prototype, which also failed to reach consumers.

    Project Ara : https://en.wikipedia.org/wiki/Project_Ara

  • luxuryballs 29 days ago
    Steve realized this would surely lead to the discovery of his estranged son.
  • causi 29 days ago
    Computers used to look so damn cool.
  • joshu 29 days ago
    I had a meeting at Frog once and saw one of these (or a similar design) off in the corner.
  • itsTyrion 28 days ago
    ok but why does it look so cool