22 comments

  • orev 12 days ago
    I’m glad they explained why RAM has become soldered to the board recently. It’s easy to be cynical and assume they were doing it purely for profit (which might be a nice side effect), but it’s good to know that there’s also a technical reason to solder it. Even better to know that it’s been recognized and a solution is being worked on.
    • OJFord 12 days ago
      I didn't find that a particularly complete explanation - and the slot can't be closer to the CPU because? - I think it must be more about parasitic properties of the card edge connector on DIMMs being problematic at lower voltage (and higher frequencies) or something. Note the solution is a ball grid connection and the whole thing's shielded.

      I suppose in fairness and to the explanation it does give, the other thing that footprint allows is a shorter path for the pins that would otherwise be near the ends of the daughter board (e.g. on a DIMM), since they can all go roughly straight across (on multiple layers) instead of a longer diagonal according to how far off centre they are. But even if that's it, that's what I mean by it seeming incomplete. :)

      • Tuna-Fish 11 days ago
        > and the slot can't be closer to the CPU because?

        All the traces going into the slot need to be length-matched to obscene precision, and the physical width of the slot and the room required by the "wiggles" made in the middle traces to length-match them restrict how close you can put the slot. Most modern boards are designed to place it as close as possible.

        LPCAMM2 fixes this by having a lot of the length-matching done in the connector.
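
        As a rough illustration of how tight that is (a back-of-the-envelope sketch in Python; the skew budget and propagation delay are assumed, typical round figures, not from any spec):

          # Rough feel for DDR5-6400 trace length matching (illustrative numbers).
          DATA_RATE_GBPS = 6.4            # 6.4 Gb/s per data pin at DDR5-6400
          UI_PS = 1e3 / DATA_RATE_GBPS    # one unit interval: 156.25 ps
          SKEW_BUDGET_PS = 0.05 * UI_PS   # assume ~5% of the UI is left for routing skew
          PROP_PS_PER_MM = 6.7            # assumed FR-4 stripline propagation delay

          max_mismatch_mm = SKEW_BUDGET_PS / PROP_PS_PER_MM
          print(f"unit interval: {UI_PS:.2f} ps")
          print(f"allowed length mismatch: ~{max_mismatch_mm:.1f} mm")

        Every trace in a group has to land inside a window on that order, which is why the middle traces get their "wiggles" and why the slot can only sit so close.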

        • ansible 11 days ago
          Generally speaking, layout for modern DRAM (LPDDRx, etc.) is a giant pain. Trace width, differential trace length matching, spacing, number of vias, and more.

          And all this is needed even though the DRAM signaling standard has extensive measurement and analysis of the traces built right into the hardware of the DRAM and the memory controller on the processor. They negotiate the speed and latency at runtime.

          Giant pain.

      • throwaway48476 11 days ago
        Competes with space for VRMs.
      • smolder 11 days ago
        Yeah, the furthest RAM chip on a DIMM can only be so close to the CPU given the form factor, and the other traces need to match that length. Distance is critical, and edge connectors sure don't help.
    • klysm 11 days ago
      I didn’t really appreciate the insanity of the electrical engineering involved in high frequency stuff till I tried to design some PCBs. A simplistic mental model of wires and interconnects rapidly falls apart as frequencies increase
    • drivingmenuts 12 days ago
      The problem is getting manufacturers to implement the new RAM standard. While the justifications given are great for the consumer, I didn't see any reason for a manufacturer to sign on.

      They are going to lose money when people buy new RAM, rather than a whole new laptop. While processor speeds and size haven't plateaued yet, it's going to take a while to develop significant new speed upgrades and in the meantime, the only other upgrade is disk size/long-term storage, which, aside from Apple, they don't totally control.

      So, why should they relinquish that to the user?

      • cesarb 11 days ago
        > While the justifications given are great for the consumer, I didn't see any reason for a manufacturer to sign on. [...] So, why should they relinquish that to the user?

        It makes sense that the first ones to use this new standard would be Dell and Lenovo. They both have "business" lines of computers, which usually offer on-site repairs (they send the parts and a technician to your office) for a somewhat long time (often 3 or 5 years). To them, it's a cost advantage to make these computers easier to repair. Having the memory (a part that fails fairly often) in a separate module means they don't have to replace and refurbish the whole logic board, and having it easy to remove and replace means less time used by the on-site technician (replacing the main logic board or the chassis often means dismantling nearly everything until it can be removed).

        • masklinn 11 days ago
          > To them, it's a cost advantage to make these computers easier to repair.

          Alternatively, it allows them to use more efficient RAM in computer lines they can't make non-repairable so they can boast of higher battery life.

        • babypuncher 11 days ago
          They also charge a lot more for these "business-class" machines. That higher margin captures the revenue lost to DIY repairs and upgrades.
      • AnthonyMouse 11 days ago
        > They are going to lose money when people buy new RAM, rather than a whole new laptop.

        You're thinking about this the wrong way around.

        Suppose the user has $800 to buy a new laptop. That's enough to get one with a faster processor than they have right now or more memory, but not both. If they buy one and it's not upgradable, that's not worth it. Wait another year, save up another $200, then buy the one that has both.

        Whereas if it can be upgraded, you buy the new one with the faster CPU right away and upgrade the memory in a year. Manufacturer gets your money now instead of later, meanwhile the manufacturer who didn't offer this not only doesn't sell to you in a year, they just lost your business to the competition.

        • petemir 11 days ago
          I doubt the mass of consumers that actually matters to manufacturers' earnings understands the value of RAM, or whether the computer they are buying is RAM-upgradable or not.

          They are going to buy the $800 one, either of the two, complain when it inevitably "works slower" in a couple of years (if they are lucky), and then buy a new $800 one all over again. I don't see the manufacturer's motivation to offer upgradable RAM.

          • AnthonyMouse 11 days ago
            They don't have $800 to buy another one so soon. So they take the one that "works slower" to some tech who knows the deal and tells them this machine sucks because you can't upgrade it, and now they think your brand is crap (because it is), curse you for the next however many years until they have the money and then buy the next one from someone else.
      • makeitdouble 11 days ago
        I'd see two angles:

        - the manufacturers themselves benefit from easier-to-repair machines. If Dell can replace the RAM and send back the laptop in a matter of minutes, instead of replacing the whole motherboard and having it salvaged somewhere else, it's a clear win.

        - prosumers will be willing to invest more in a laptop that has a better chance of surviving a few years. Right now we're all expecting parts to fail within 2 to 3 years on the higher end, and budget accordingly. You need a serious reason to buy a 3000$/€ laptop that might be dead in 2 years. Knowing it could weather a RAM failure without manufacturer repair is a plus.

      • bugfix 12 days ago
        Even if it's just Lenovo using these new modules, I still think it's a win for the consumer (if the modules aren't crazy expensive).
      • rock_artist 11 days ago
        Unlike Apple, which only competes indirectly on computer hardware, with PCs, if Lenovo starts doing it, it becomes a marketing point. Then Asus, HP, and Dell would try to get it too.

        So it's a chicken-and-egg situation: if it ends up being important to consumers, it might end with everyone catching up.

      • 7speter 11 days ago
        These companies did plenty well 12+ years ago when users could upgrade their systems' memory.
    • kjkjadksj 11 days ago
      They can have their technical fig leaf to hide behind, but in practice, how many watts are we really saving between LPDDR5 and DDR5? Is it worth the e-waste tradeoff to have a laptop we can't modularly upgrade to meet our needs? I would guess not.
      • masklinn 11 days ago
        > how many watts are we really saving between LPDDR5 and DDR5?

        From what I gathered, it's around a watt per module when idling (which is when it matters most): the sources I found seem to indicate that DDR5 always runs at 1.1V (or more, but probably not in laptops), while LPDDR5 can be downvolted. That's an extra ~10% idle power consumption per module.
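
        Back of the envelope (a sketch with assumed round numbers, not measurements):

          # What ~1 W of extra idle draw could mean for runtime (assumed numbers).
          BATTERY_WH = 57.0       # assumed battery capacity
          IDLE_W_LPDDR5 = 10.0    # assumed whole-system idle draw with LPDDR5
          EXTRA_DDR5_W = 1.0      # the extra watt attributed to DDR5 above

          h_lpddr5 = BATTERY_WH / IDLE_W_LPDDR5
          h_ddr5 = BATTERY_WH / (IDLE_W_LPDDR5 + EXTRA_DDR5_W)
          print(f"idle runtime: {h_lpddr5:.1f} h vs {h_ddr5:.1f} h")  # ~5.7 vs ~5.2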

        • kjkjadksj 5 days ago
          So rather than your battery lasting 15 hours, it only lasts 14ish. Probably a rounding error considering how much a poorly coded website can impact your battery life estimates even on Apple silicon, and the fact that your battery degrades over its life anyhow and never gets what it says on the tin (especially Apple's tin).
    • yread 11 days ago
      If they soldered a decent amount, enough that you can be sure you won't ever need to upgrade, it would be fine (seriously, 64GB of RAM costs like 100eur, a non-issue in a 1000eur laptop). 8 is not enough already and 16 will soon be limiting too.
      • brookst 11 days ago
        Is the goal to not have any computers that are limited to a single task? Tons of corporate IT purchases go to someone only using e.g. Word all day. Do we really care if they are provisioned with “enough” memory for you or me?
        • pathartl 11 days ago
          The baseline 14" MacBook Pro that costs $1600 has 8GB of shared RAM. That's not enough. I don't believe OP is talking about machines better suited for your task, i.e. machines in the $1k range.
      • nuancebydefault 11 days ago
        10 percent is not negligible. Also, 64GB is a lot _today_ but most probably not 5 years from now. The alternative of buying a new laptop feels like a big waste.
      • orev 11 days ago
        No matter how much the specs increase, developers find a way to use it all up. This approach would just accelerate that process.
    • tombert 11 days ago
      Yeah, I was actually surprised to learn there was a reason other than "Apple wants you to buy a new MacBook or overspec your current one". It's annoying, but at least there's a plausible reason why they do it.
      • seanp2k2 11 days ago
        "...and they charge 4x what the retail of premium RAM would otherwise be per GB"

        do storage next.

      • klausa 11 days ago
        Apple's RAM is not soldered to the _motherboard_, it's part of the SoC package.
        • Vogtinator 11 days ago
          Only recently. It started out as soldered to the main board.
          • klausa 10 days ago
            M1 Macs started shipping in late 2020, so, for some definitions of "recently", sure.

            It's true for any laptops that can be reasonably described as having a "SoC" and not CPU, anyway.

            (I guess you could be extremely pedantic and try to argue that T2 counted as SoC? But clearly not what I meant.)

          • brookst 11 days ago
            No, it started out as chips in sockets. I (dimly) remember upgrading my II+, I think from 32KB to 48KB?

            A lot has changed.

            • lazide 10 days ago
              EEPROM-like DIP packaging where it was damn near impossible to pull without bending a pin and/or smacking your hand on something?

              God forbid someone steps on it too, I think I might still have some scars on my feet.

              • brookst 9 days ago
                And remember how the pins were a little too wide, so as to ensure tension against the socket, so you had to put one side in and apply pressure to get the other side? How many chips did I just fold over or bend even just inserting? Many.
                • lazide 9 days ago
                  Nightmare material.
  • mmastrac 12 days ago
    Ugh, finally. And it's not just a repurposed desktop memory standard either! The overall space requirements look to be similar to the BGA that you'd normally solder on (perhaps 2-3x as thick?). I'm sure they can reduce that overhead going forward.

    I love the disclosure at the bottom:

    Full Disclosure: iFixit has prior business relationships with both Micron and Lenovo, and we are hopelessly biased in favor of repairable products.

    • Aurornis 12 days ago
      > Ugh, finally.

      FYI, the '2' at the end is because this isn't the first time this has been done. :)

      LPCAMM spec has been out for a while. LPCAMM2 is the spec for next-generation parts.

      Don't expect either to become mainstream. It's relatively more expensive and space-consuming to build an LPCAMM motherboard versus dropping the RAM chips directly onto the motherboard.

      • nrp 11 days ago
        My recollection of this is that LPCAMM was a proposal from Dell that they put into the JEDEC standardization process, and LPCAMM2 is the resulting standard, named that way to avoid confusion with the non-standard LPCAMM that Dell trialed on a small number of commercial systems.
        • Tuna-Fish 11 days ago
          Almost. The Dell proposal is called CAMM, which was slightly modified during the JEDEC process and standardized as CAMM2, which is then combined with the memory type the same way DIMM was: for example, LPDDR5X CAMM2 or DDR5 CAMM2. LPCAMM2 is not a name used in any JEDEC standard, or even referred to anywhere on their site, but it seems to be used by both the memory manufacturers and the users because it's less of a mouthful, and they feel there needs to be something to distinguish LPDDR5 CAMM2 from DDR5 CAMM2, because they are not electrically compatible.
      • audunw 11 days ago
        Not to mention putting the RAM directly on a System-in-Package chip like Apple does now. That's going to be unbeatable in terms of space and possibly have an edge when it comes to power consumption too. I wouldn't be surprised if future standards will require on-package RAM.

        I kind of wish we could establish a new level in the memory hierarchy. Like, just make a slot where you can add slower more power hungry DDR RAM that acts as a big cache for the NVM storage, or that the OS can offload some of the stuff in main memory if it's not used much. It could be unpopulated in base models, and then you can buy an upgrade to stick in there to get some extra performance later if needed.

        • burutthrow1234 11 days ago
          This is kind of what Optane was in some incarnations (it's really terrible branding that conflates multiple technologies).
    • cjk2 12 days ago
      Yeah, they even gloss over Lenovo's crappy soldered-to-the-motherboard USB-C connectors, which are always the weak point on modern ThinkPads. Well, that and Digital River (Lenovo's distributor) carrying almost no spare parts for any Lenovos in Europe, and the ones they do list only rarely turn up, so you can't replace any of the replaceable bits because you can't get any.
      • sspiff 11 days ago
        Digital River is shit at everything. From spare parts, to delivery and tracking, to customer communications, to warranty claims. Every single interaction with them is a nightmare. It is the single reason I prefer to buy Lenovo from resellers rather than directly.
  • baby_souffle 12 days ago
    This is fantastic news. Hopefully the cost to manufacturers is only marginal and they find a suitable replacement for their current "each tier in RAM comes with a 5-20% price bump" pricing scheme.

    Too bad Apple is almost guaranteed not to adopt the standard. I miss being able to upgrade the RAM in MacBooks.

    • Aurornis 12 days ago
      > Too bad apple is almost guaranteed to not adopt the standard.

      Apple would require multiple LPCAMM2 modules to provide the bus width necessary for their chips. Up to 4 x LPCAMM2 modules depending on the processor.

      Each LPCAMM2 module is almost as big as the entire Apple CPU package combined with the unified RAM chips, so putting 2-4 LPCAMM2 modules on the board is completely infeasible without significantly increasing the size of the laptop.

      Remember, the Apple architecture is a combined CPU/GPU architecture and has memory bandwidth to match. It's closer to your GPU than to the CPU in your non-Mac machine. Asking for upgradeable RAM on Apple laptops is almost like asking for upgradeable RAM on your GPU (which would not be cheap or easy).

      For every 1 person who thinks they'd want a bigger MacBook Pro if it enabled memory upgrades, there are many, many more people who would gladly take the smaller size of the integrated solution we have today.

      • coolspot 12 days ago
        > like asking for upgradeable RAM on your GPU

        Can I please have upgradeable RAM on GPU? Pwetty pwease?

        • thfuran 12 days ago
          Sure, as long as you're willing to pay in cost, size, and performance.
      • kokada 12 days ago
        > Up to 4 x LPCAMM2 modules depending on the processor.

        The non-Pro/Max versions (e.g. M3) use 128 bits, and arguably go into the kind of notebook that most needs to be upgraded later, since they commonly come with only 8GB of RAM.

        Even the Pro versions (e.g. M3 Pro) use up to 256 bits; that would be 2 x LPCAMM2 modules, which seems plausible.

        For the M3 Max in the MacBook Pro, yes, 4 x LPCAMM2 would (probably) be impossible. But I think something like the Mac Studio could have them; that is arguably also the kind of device where you'd want to increase memory in the future.

        • throwaway48476 11 days ago
          It would only need to be 2x per board side.
    • sliken 12 days ago
      Apple ships 128 bit, 256 bit, and 512 bit wide memory interfaces on laptops (up to 1024 bit wide on desktops).

      Is it feasible to fit memory bandwidth like the M3 Max (512 bits wide LPDDR5-6400) with LPCAMM2 in a thin/light laptop?

      • pja 12 days ago
        This PDF[1] suggests that an LPCAMM2 module has a 128-bit-wide memory interface, so the epic memory bandwidth of the M3 Max won’t be achievable with one of these memory modules. High-end devices could potentially have two or more of them arranged around the CPU though?

        [1] https://investors.micron.com/node/47186/pdf
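
        Back of the envelope (a sketch: the 128-bit module width is from the Micron brief above, the bus widths are those quoted elsewhere in this thread, and LPDDR5-6400 is the assumed transfer rate):

          # How many 128-bit LPCAMM2 modules to match Apple's buses (illustrative).
          MODULE_WIDTH_BITS = 128   # per the Micron brief linked above

          def peak_gbs(bus_bits: int, mtps: int) -> float:
              """Peak bandwidth in GB/s: bus width in bytes times transfer rate."""
              return bus_bits / 8 * mtps / 1000

          for name, bus_bits in [("M3", 128), ("M3 Pro", 256), ("M3 Max", 512)]:
              modules = bus_bits // MODULE_WIDTH_BITS
              print(f"{name}: {modules} module(s), ~{peak_gbs(bus_bits, 6400):.0f} GB/s")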

        • 7speter 11 days ago
          Apple could just make lower-tier MacBooks, but then Mac fanboys wouldn't be able to ask "but what about Apple's quarterly profits?"

          Most MacBooks don't need high memory bandwidth; most users are using their Macs for word processing, Excel, and VS Code.

          • pmontra 11 days ago
            As a non-Mac reference, I work on an HP laptop from 2014. It was a high-end laptop back then. It goes for between 300 and 600 Euro refurbished now.

            I expanded it to 32 GB RAM and a 3 TB SSD, but it's still an i7 4xxx with 1666 MHz RAM. And yet it's OK for Ruby, Python, Node, PostgreSQL, Docker. I don't feel the need to upgrade. I will when I get a major failure and no spare parts to fix it.

            So yes, low end Macs are probably good for nearly everything.

          • sliken 11 days ago
            Even low-end gaming, simulations, and even fun WebGL toys can require a fair amount of memory bandwidth with an iGPU like Apple's M series. It also helps quite a bit for inference. A MBP with an M3 Max can run models that would require multiple GPUs on a desktop and still get decent perf for single users.
            • consp 11 days ago
              > A MBP with an M3 Max can run models that would require multiple GPUs on a desktop and still get decent perf for single users.

              Good for your niche case; the other 99.8% still only do web and low-performance desktop applications (which includes IDEs).

          • teaearlgraycold 11 days ago
            Yes, but Apple’s trying to build an ecosystem where users get high-quality, offline, low-latency AI computed on their device. Today there’s not much of that. And I don’t think they even really know what’s going to justify all of that silicon in the neural engine and the memory bandwidth.

            Imagine 5 years from now people have built whole stacks on that foundation. And then competing laptops need to ship that compute to the cloud, with all of the unsolvable problems that come with that. Privacy, service costs (ads?), latency, reliability.

            • jwells89 11 days ago
              Apple is also deliberately avoiding having “celeron” type products in their lineup because those ultimately mar the brand’s image due to being kinda crap, even if they’re technically adequate for the tasks they’re used for.

              They instead position midrange products from 1-2 gens ago as their entry level which isn’t quite as cheap but is usually also much more pleasant to use than the usual bargain basement stuff.

      • wmf 12 days ago
        For 512 bits you would need four LPCAMM2s. I could imagine putting two on opposite sides of the SoC but four might require a huge motherboard.
        • kristianp 11 days ago
          Perhaps future LPCAMM generations will offer more bits? I still can't imagine Apple using them unless required by right-to-repair laws. But those laws probably don't extend to making RAM upgradeable.
      • AnthonyMouse 11 days ago
        Apple does this because their CPU and GPU use the same memory, and it's generally the GPU that benefits from more memory bandwidth. Whereas in a PC optimized for GPU work you'd have a discrete GPU that has its own memory which is even faster than that.
      • jauntywundrkind 12 days ago
        Hoping we see AMD Strix Halo with its 256-bit interface crammed into an aggressively cooled, fairly-thin, fairly-light laptop. But it's going to require heavy cooling to make full use of it.

        Heck, make it only run full tilt when on an active cooling dock. Let it run half power when unassisted.

        • seanp2k2 11 days ago
          Kinda hilarious to see gamers buying laptops that can't actually leave the house in any practical, meaningful way. I feel like some of them would be better off with SFF PCs and the external monitors they already use. I guess the biggest appeal I've seen is the ability to fold up the gaming laptop and put the dock away to get it off the desk, but an SFF on the ground, plus the wireless gaming keyboard and mouse they already use with the laptop and one of those compact "portable" monitors, seems like it'd solve the same problem.
          • kristianp 11 days ago
            My wife can get an hour of gaming out of her gaming laptop. They're good for being able to game in an area of the house where the rest of the family is, even if that means being plugged in at the dining table. Our home office isn't close enough.

            Also a gaming laptop is handy if you want to travel and game at your hotel.

          • jwells89 11 days ago
            I’ve been wondering for a while now why ASUS or some other gaming laptop manufacturer doesn’t take one of their flagship gaming laptop motherboards, put some beefy but quiet cooling on it, put it in a pizza-box/console enclosure, and sell it as a silent compact gaming desktop.

            A machine like that could still be relatively small but still be dramatically better cooled than even the thickest laptop due to not having to make space for a battery, keyboard, etc.

            • antonkochubey 11 days ago
              ZOTAC does these - there are ZBOX Magnus models with laptop-grade RTX 4000 series GPUs in 2-3 liter chassis. However, their performance and acoustics are rather... compromised compared to a proper SFF desktop (which can be built in ~3x the volume).
              • jwells89 11 days ago
                Yeah, those look like they’re too small to be reasonably cooled. What I had in mind is shaped like the main body of a laptop but maybe 2-3x as thick (to be able to fit plenty of heatsink and proper 120/140mm fans), stood up on its side.
    • j16sdiz 11 days ago
      Unified memory is basically L3-cache speed with zero copy between CPU and GPU.

      There are engineering tradeoffs. Depending on who you ask, it may or may not be worth it.

    • redeeman 11 days ago
      and they won't, so long as people buy regardless
    • cjk2 12 days ago
      Given enough pressure ...
      • armarr 12 days ago
        You mean pressure from regulators, surely. Because 99% of consumers will not notice or know the difference in a spec sheet.
      • colinng 12 days ago
        They will maliciously comply. They might even have 4 sockets for the 512-bit wide systems. But then they’ll keep the SSD devices soldered - just like they’ve done for a long time. Or cover them with epoxy, or rig it with explosives. That’ll show you for trying to upgrade! How dare you ruin the beautiful fat profit margin that our MBAs worked so hard to design in?!?
        • 7speter 11 days ago
          Apple lines the perimeter of the NAND chips on modern Mac minis with an array of tiny capacitors, so even the crazy people with heater boards can't desolder the NAND and replace it with higher-density NAND.
          • wtallis 11 days ago
            Have you not looked at the NAND packages on any regular SSDs? Tiny decoupling caps alongside the NAND is pretty standard practice.
          • cjk2 11 days ago
            This is normal. They are called decoupling capacitors and are there to provide energy when the SSD requires short bursts of it. If you put them any further away, the bit of wire between them and the gate turns into an inductor, with somewhat undesirable characteristics.

            Also replacing them is not rocket science. I reckon I could do one fine (used to do rework). The software side is the bugbear.

        • cjk2 11 days ago
          This is hyperbole. They are replaceable. It's just more difficult.
  • zxcvgm 12 days ago
    I remember when Dell was the first to introduce [1] these Compression Attached Memory Modules in their laptops in an attempt to move away from soldered-on RAM. Glad this is now being more widely adopted and standardized.

    [1] https://www.pcworld.com/article/693366/dell-defends-its-cont...

    • AlexDragusin 12 days ago
      > The first iteration, known as CAMM, was an in-house project at Dell, with the first DDR5-equipped CAMM modules installed in Dell Precision 7000 series laptops. And thankfully, after doing the initial R&D to make the tech a reality, Dell didn’t gatekeep. Their engineers believed that the project had such a good chance at becoming the next widespread memory standard that instead of keeping it proprietary, they went the other way and opened it up for standardization.
      • jimbobthrowawy 11 days ago
        Trying to make it a standard is one of the least surprising things about it. You want accessories/components in your product to be as commodity as possible to drive costs down.
  • doublextremevil 12 days ago
    Can't wait to see this in a Framework laptop
    • OJFord 12 days ago
      For the presumed improvement to battery life? Because Fw already uses SO-DIMMs.
      • universa1 11 days ago
        That's also nice, but the memory speed is also higher, DDR5-7266 vs 5600 IIRC. The resulting higher bandwidth translates more or less directly into more performance for the iGPU.
      • wmf 11 days ago
        It's also faster (7500 vs. 5600).
  • userbinator 11 days ago
    A bit of a disingenuous argument intended to sell this as being more revolutionary than it really is --- BGA sockets already exist for LPDDR as well as other things like CPUs/SoCs, but they're very expensive due to low volumes. If the volume went up, they'd go down in price significantly, just like LGA sockets for CPUs have.

    https://www.ironwoodelectronics.com/products/lpddr/

  • zokier 12 days ago
    I wonder if this will bring a new widely available high-performance connector to the wider market. SO-DIMM connectors have been occasionally repurposed for other uses, most notably by the Raspberry Pi Compute Modules 1-3 among other similar SOM/COM boards. The RPi CM4 switched to 2x 100-pin mezzanine connectors; maybe some future module could use CAMM connectors, I'd imagine they are capable enough.
    • wmf 11 days ago
      The compression connector looks flimsier than a mezzanine so it should probably be a last resort for multi-gigahertz single-ended signaling.
  • kristianp 11 days ago
    So this is going into the ThinkPad P1 (Gen 7), which is too expensive and power hungry for my use cases. How long until it filters down into less expensive SKUs? Are we talking next year's generation?

    Ifixit also links to a repair guide:

    https://www.ifixit.com/Device/Lenovo_ThinkPad_P1_Gen_7

    • CoolCold 11 days ago
      My personal understanding: for ThinkPads, it's next year. I guess Lenovo is doing real-life tests with the P1 here, gathering feedback before addressing other families like the T14/T14s.
  • farmdve 12 days ago
    Remember that Haswell laptops were the last to feature socketed CPUs.

    RAM is nice to upgrade, for sure, as is the SSD, but upgradeable CPUs are still a must. I would even suggest upgradeable GPUs, but I don't think the money is there for the manufacturers. Why allow you to upgrade when they can sell you a whole new laptop?

    • zamadatix 12 days ago
      I'm not sure I really get much value out of a socketed CPU, particularly in a laptop, vs something like a swappable MB+CPU combo where the CPU is not socketed.

      RAM/Storage are great upgrades because 5 years from now you can pop in 4x the capacity at a bargain since it's the "old slow type". CPUs don't really get the same growth in a socket's lifespan.

      • immibis 12 days ago
        Socket AM4 had a really good run. Maybe we just have to pressure manufacturers to make old-socket variations of modern processors.

        The technical differences between sockets aren't usually huge. Upgrade the memory standard here, add or remove PCIe lanes there. Using new cores with an older memory controller may or may not be doable, but it's quite simple to not connect all the PCIe lanes the die supports.

        • seanp2k2 11 days ago
          But then what excuse would you have to throw another $500 at Asus for their latest board, which, while being the best chance the platform has, still feels like it runs a beta BIOS for the first 9 months of ownership?
      • farmdve 12 days ago
        As I said to the comment above, it makes perfect sense. In 2014 we purchased a dual-core Haswell. Almost a decade later I revived the laptop by installing more RAM, an SSD, and the best possible quad-core CPU for that laptop. The gain in processing power was massive and made the laptop usable again.
        • zamadatix 12 days ago
          I'm sure it's all subjective (e.g. I'm sure someone here even considers the original dual core Haswell more than fine without upgrade in 2024) but going from a dual core Haswell to a quad core Haswell (or even a generation or two beyond, had it been supported) as an upgrade a decade after the fact just doesn't seem worth it to me.

          The RAM/SSD sure - a 2 TB consumer SSD wasn't even a possible thing to buy until a year after that laptop would have come out and you can get that for <$100 new now. It won't be the highest performing modern drive but it'll still max out the bus and be many times larger than the original drive. Swap equipment 3 years from now and that's also still a great usable drive rather than a museum piece. Upgrading to a CPU that you could have gotten around the time the laptop came out? Sure, it has twice as many cores... but it still has pretty bad multi core performance and a god awful perf/wattage ratio to be investing new money on a laptop for. It's also a bit of a dead end, in 3 years you'll now have 2 CPUs so ancient you can't really do much with them.

          • pavon 12 days ago
            This matches my experience. Every PC I've built over the last 30 years has benefited from memory and storage upgrades through its life, and I've upgraded the GPU a few times. However, every time I've looked at upgrading to another CPU with the same socket, it is either not a big enough step up, or too much of a power hog relative to the midrange CPU I originally built with. The only time I've replaced CPUs is when I've fried them :)
            • seanp2k2 11 days ago
              Yup, so I've adopted a strategy for my past few desktop builds like this:

                - Every time a new ToTL GPU comes out for a new family, buy it at retail price as soon as it launches (so, the first-available ToTL models that were big gains in perf: GTX 1080 Ti, RTX 2080 Ti, RTX 3090, RTX 4090)
              
                - Every other release cycle, upgrade CPU to the ToTL consumer chip (eg on a 12900KS right now, HEDT like ThreadRipper is super expensive and not usually better for gaming or normal dev stuff). I was with Ryzen since 1800x -> 3950x -> 5950x but Intel is better for the particular game I play 90% of the time.
              
                - Every time you upgrade, sell the stuff you've upgraded ASAP. If you do this right and never pay above MSRP for parts, you can usually keep running very high-end hardware for minimal TCO.
              
                - Buy a great case, ToTL >1000w PSU (Seasonic or be quiet!), and ToTL cooling system (currently on half a dozen 140mm Noctua fans and a Corsair 420mm AIO). This should last at least 3 generations of upgrading the other stuff.
              
                - Storage moves more slowly than the rest, and I've had cycles where I've re-used RAM as well, so again here go for the good stuff to maximize perf, but older SSDs work great for home servers or whatever else.
              
                - Monitor and other peripherals are outside of the scope of this but should hopefully last at least 3 upgrade generations. I bit when OLED TVs supported 4K 120hz G-Sync, so I've got a 55" LG G1 that I'm still quite happy with and not wanting to immediately upgrade, though I do wish they made it in a 42" size, and 16:10 would be just perfect.
          • farmdve 11 days ago
            Maybe it is subjective. For me it made perfect sense. I could not afford a new laptop but could afford rejuvenating an old one.
    • Night_Thastus 12 days ago
      On a laptop it's not very practical.

      Because you can't swap the motherboard, your options for CPUs are going to be quite limited. Generally, only higher-tier CPUs of that same generation - which draw more power and require more cooling.

      Generally a laptop is designed to provide a specific power budget to the CPU and has a limited amount of cooling.

      Even if you could swap out the CPU, it wouldn't work properly if the laptop couldn't provide the necessary power or cooling.

      • farmdve 12 days ago
        I can't say I agree. Back in 2014 a laptop was purchased with a dual-core Haswell CPU. 8 years later I revived the laptop by upgrading to almost the best possible CPU, a 4-core 8-thread part (or 4-core 4-thread, I am unsure which it was), and the speed boost was massive. This is how you keep old tech alive.

        And the good thing about mobile CPUs is that they have almost the same TDP across the various dual/quad-core versions (or whatever the norm is today).

        • Rohansi 11 days ago
          How old was the new CPU though? Probably the same or similar generation to what it originally came with since the socket needs to be the same.

          IMO the switch to an SSD would have been the biggest boost.

          • farmdve 11 days ago
            Same gen but with 2 more cores + Hyperthreading
      • yencabulator 11 days ago
        > On a laptop it's not very practical.

        > Because you can't swap the motherboard,

        https://frame.work/ has entered the chat.

    • leduyquang753 12 days ago
      The Framework Laptop 16 features a replaceable GPU.
      • freedomben 11 days ago
        I'm writing this from my Framework 16 with GPU and it is the best laptop I've ever known. It's heavy and big and not the most portable, but I knew that would be the case going into it and I have no regrets
      • FloatArtifact 11 days ago
        > The Framework Laptop 16 features a replaceable GPU.

        In a way I don't mind having non-replaceable RAM as an option in the Framework ecosystem, simply because the motherboard itself is modular and needs to be swapped to upgrade the CPU anyway. At that point, though, I would prefer RAM integrated with the CPU/GPU.

      • farmdve 12 days ago
        These are very obscure, or perhaps I should say niche, laptop manufacturers. We need this standard from all of them: HP, Lenovo, Acer, etc.
        • nwah1 12 days ago
          Framework open sources most of their schematics, if I understand correctly. So it should be possible for others to use the same standard, if they wanted to. (they don't want to)
          • Dylan16807 11 days ago
            The form factor isn't great for being a vendor-neutral thing.

            If we can convince the companies to actually try for compatibility, then a revival of MXM is probably a significantly better option.

            • Manabu-eo 11 days ago
              MXM was problematic because of the inflexibility of the form factor when upgrading a given system. If your laptop's size, power, and cooling were designed for a GTX 1030, you couldn't replace it with a GTX 1080 module.

              In Framework's case, the cooling is integrated into the GPU module, and its size, cooling, and power delivery can all be adjusted depending on the GPU's power.

              • Dylan16807 10 days ago
                I don't mind having a wattage limit on the slot. That's easy to factor into purchasing decisions. The much bigger issues are how custom each kind was, with very limited competition on individual modules and a big conflict of interest in wanting to sell you a new laptop.

                A friend of mine was betrayed on this by MSI, where laptops with GTX 900 series GPUs were promised upgrades and then when the 1000 series came out they didn't offer any. I think they did make weak excuses about power use, but a 1060 would have fit within the power budget fine and been an enormous upgrade. A few people have even gotten 1060 modules to work with BIOS edits, so it wasn't some other incompatibility. It seems like they saw they couldn't offer a 1080 and threw out the entire project and promise, and then offered a mild discount on a brand new laptop, no other recourse.

    • seanp2k2 11 days ago
      They've done upgradeable laptop GPUs before with MXM: https://en.wikipedia.org/wiki/Mobile_PCI_Express_Module

      Looks like the best card they have out with MXM right now is a Quadro RTX 5000 Mobile, which seems to be going for ~$1000 on eBay.

    • sojuz151 12 days ago
      I would say it would make the most sense to have the entire RAM+CPU+GPU assembly be replaceable. Just have some standard form factors and connectors for the external I/O.

      This way, you could keep power consumption low and still be able to upgrade the CPU to a new generation.

    • immibis 12 days ago
      Laptops have always traded size against upgradeability and other factors, and soldering everything is the way to make them tiny. If you ask me, they've taken the size reduction too far. The first laptops were way too bulky, but they hit a sweet spot around 2005-2010, being just thick enough to hold all those D-Sub connectors (VGA, serial, etc).

      And soldering stuff to the board is the default way to make something when upgradeability isn't a feature.

  • PTOB 11 days ago
    The current Dell version of this: an upgrade to 64GB is $1200. Found this out the hard way when trying to get my engineering team what I thought would be a $200-per-machine upgrade from their stock 32GB Precision laptop workstations.
  • Dwedit 11 days ago
    Can it become loose and then suddenly not have all the pins attached properly? That's unlikely to happen with SODIMM slots, but I've seen plenty of cases where screw receptacles fail.
  • hinkley 10 days ago
    > LPDDR operates at lower voltages compared to DDR, giving it the edge in power efficiency. But, the lower voltage makes signal integrity between the memory and processor challenging,

    Why can't the signaling channels use a higher voltage, with control circuitry on the memory stick stepping the levels up and down to access the memory chips?

  • snvzz 11 days ago
    I see no mention of ECC.

    It worries me.

  • sharpshadow 11 days ago
    Is it possible to have both LPDDR and LPCAMM2 in use at the same time?
    • wtallis 11 days ago
      LPCAMM2 is a connector and form factor standard for modules carrying LPDDR type memory chips.
      • masklinn 11 days ago
        I assume they mean having some memory soldered and an expansion slot.

        I've seen laptops like that, with e.g. 8GB soldered and a sodimm slot.

        • sharpshadow 11 days ago
          That would be nice, since there's a rise of CPU+RAM (and even GPU, I think) all on one chip. It would be interesting to be able to upgrade RAM on machines like that.
  • sharpshadow 11 days ago
    Would it be possible to have LPCAMM2 as an external device through Thunderbolt?
    • noodlesUK 11 days ago
      No, RAM is not something that is exposed on the PCIe bus (which is what Thunderbolt is based on). RAM has a different protocol (DDR5 in this case) and, as the article says, is very sensitive to the distance between the CPU and the RAM. External RAM isn't really viable in the modern era of computers, as far as I know.
      • simcop2387 11 days ago
        Surprisingly, this is starting to show up in the server market lately with a new protocol/tech called CXL. The latency issue is still there over distance, but it'll let more remote-memory type stuff start to happen. I doubt you'll ever do more than a few meters (i.e. within the same rack), but it'll likely end up getting used by so-called "hyperscaler" companies to allocate resources more flexibly, similar to how they're doing PCIe over Ethernet with DPU devices right now. It's unlikely to end up at the consumer level any time soon, even medium term, because that kind of flexibility is still just so niche, but we might eventually see some CXL connectivity for things like GPUs or other accelerators, to have more memory or share it better between host and accelerator.

        EDIT: article about a tech demo of it on a laptop actually, hadn't seen this before: https://www.techradar.com/pro/even-a-laptop-can-run-ram-exte...

    • 6SixTy 11 days ago
      Only CXL has the potential to be carried over Thunderbolt, as it works off PCIe and system RAM does not. CXL (Compute eXpress Link) is a server-grade technology that's really aimed at solving some problems in the high-performance compute area, like cache coherency. If you don't get it, I don't either, tbh.
  • dvh 12 days ago
    What's wrong with DIMM?
    • magicalhippo 12 days ago
      The physical size of the socket and having the connections on the edge mean you're forced to have much longer traces. Longer traces mean slower signalling and more power loss due to higher resistance and parasitics.

      This[1] Anandtech article from last year has a better look at how the LPCAMM module works. Especially note how the connectors are now densely packed directly under the memory chips, significantly reducing the trace length needed. Not just on the memory module itself but also on the motherboard due to the more compact memory module. It also allows for more pins to be connected, thus higher bandwidth (more bits per cycle).

      [1]: https://www.anandtech.com/show/21069/modular-lpddr-becomes-a...

      • kjkjadksj 11 days ago
        I'd wager for most consumers capacity is more important than bandwidth and the power losses are going to be small compared to the rest of the stack.
        • magicalhippo 11 days ago
          > power losses are going to be small compared to the rest of the stack

          While certainly not the largest losses, they do not appear insignificant. In LPDDR4 they introduced[1] a new low-voltage signalling standard, which I doubt they could have gotten working with SODIMMs due to the extra parasitics.

          If you look at this[2] presentation you can see that at 3200 MT/s a DDR4 SODIMM would consume around 2 x 16 x 4 x 6.5 mW/Gbps x 3.2 Gbps ≈ 2.6W for signalling going full tilt. Thanks to the new signalling, LPDDR4 reduces this by 40% to around 1.6W.

          Compare that to a low-power CPU with a TDP of 10W or less: a full 1W reduction per SODIMM just from signalling isn't insignificant.

          To further put it into perspective, the recent Lenovo ThinkPad X1[3] uses around 4.15W average during normal usage, and that includes the screen.

          Obviously the memory isn't going full tilt under normal load, but say an average of 0.25W x 2 sticks: that would reduce the X1's battery lifetime by ~10%.

          edit: yes, I'm aware the presentation is about LPDDR4 while the X1 uses LPDDR5; I'm just trying to add context using available sources.

          [1]: https://www.jedec.org/news/pressreleases/jedec-releases-lpdd...

          [2]: https://www.jedec.org/sites/default/files/JY_Choi_Mobile_For...

          [3]: https://www.tomshardware.com/reviews/lenovo-thinkpad-x1-carb...
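
          For anyone checking the arithmetic, here it is as a sketch (reading the 6.5 figure as mW per Gb/s per pin, i.e. pJ/bit, and using the pin grouping from the formula above):

            # DDR4 SODIMM signalling power at 3200 MT/s, per the figures above.
            PINS = 2 * 16 * 4       # pin grouping as written in the formula above
            MW_PER_GBPS = 6.5       # per-pin I/O cost (mW per Gb/s, i.e. pJ per bit)
            RATE_GBPS = 3.2         # DDR4-3200: 3.2 Gb/s per pin

            ddr4_w = PINS * MW_PER_GBPS * RATE_GBPS / 1000
            lpddr4_w = ddr4_w * (1 - 0.40)  # new low-swing signalling: ~40% less
            print(f"DDR4: ~{ddr4_w:.2f} W, LPDDR4: ~{lpddr4_w:.2f} W signalling")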

          • kjkjadksj 9 days ago
            1 watt is exactly what I am saying is almost inconsequential. People leave charger bricks plugged in all day and the lights on.
            • 76SlashDolphin 9 days ago
              And this line of thinking is exactly why we can't have M1 MacBook levels of battery life on Windows laptops. Believe it or not, a lot of people like to have a light device they can just take without a charger and use for a solid day or two of work.
              • kjkjadksj 5 days ago
                There are Windows laptops that can do this already, though, with a SODIMM slot no less.
          • CoolCold 11 days ago
            useful, thank you!
        • bmicraft 11 days ago
          Bandwidth translates directly into better (iGPU) performance.
    • linsomniac 12 days ago
      It requires too much power, according to the article. This allows "LP" (low power) parts to be removable; they normally have to be soldered on the board close to the CPU because of the low voltage tolerances.
    • adgjlsfhk1 12 days ago
      One of the biggest problems is that edge connections don't give you enough density. Edge connections are great for servers, where you stack 16 channels next to each other, but in a laptop form factor space is already limited, so you can get more wires coming out of the RAM by connecting to the face rather than the edge.
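
      A toy comparison of the two geometries (hypothetical dimensions and pitch, just to show the scaling):

        # Contacts along an edge vs. spread over a face (toy numbers only).
        EDGE_MM, DEPTH_MM, PITCH_MM = 60.0, 20.0, 0.6   # hypothetical module, pitch

        edge_contacts = int(EDGE_MM / PITCH_MM)    # a single row: scales with length
        face_contacts = int(EDGE_MM / PITCH_MM) * int(DEPTH_MM / PITCH_MM)  # area grid
        print(f"edge: {edge_contacts} contacts, face: {face_contacts} contacts")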
    • 0x457 12 days ago
      There is literally an entire section explaining why LPDDR needs to be soldered down as close as possible to the memory controller.
    • armarr 12 days ago
      Larger footprint, taller, longer traces and signal degradation in the connectors.
    • rangerelf 12 days ago
      There's nothing _wrong_ with it, it performs according to spec, but it has limitations: trace length, power requirements, signal limitations, heat, etc.
    • mmastrac 12 days ago
      The size, the sockets, the heat distribution, etc, etc, etc.
  • cryptonector 11 days ago
    Yes please. Also, can we haz ECC?
    • seanp2k2 11 days ago
      Why are you trying to bankrupt Intel??? Without being able to charge 5x as much for Xeons for ECC support, why would anyone ever pony up for one?
  • Tran84jfj 12 days ago
    I would welcome something like the Raspberry Pi Compute Module, which contains CPU+RAM and communicates with the other parts via PCIe. Such a standard could last decades!

    Yet another standard for memory will just fail.

  • ThinkBeat 12 days ago
    Meanwhile Apple bakes the RAM, CPU, and GPU all into the same "chip". Good luck with that.
    • 0x457 12 days ago
      Meanwhile, Apple ships machines with a 1024-bit-wide memory bus, while this solution offers just 128 bits per "stick".
    • colinng 12 days ago
      Don’t forget - they solder in the flash too even though there is no technical reason to do so.

      Unless “impossibly fat profit margin” is a technical requirement.

      • mschuster91 12 days ago
        > Don’t forget - they solder in the flash too even though there is no technical reason to do so.

        There is: Apple uses flash memory as swap to get away with low RAM specs, and the latency and speed required for that purpose all but necessitate putting the flash memory directly next to the SoC.

        • wmf 12 days ago
          This is not really true; Apple's SSDs are no faster than off-the-shelf premium NVMe SSDs.
          • wtallis 11 days ago
            And the latency of flash memory is several orders of magnitude higher than even the slowest interconnect used for internal SSDs.
          • Rohansi 11 days ago
            Yeah, but some people need to justify their $1,800 USD purchase of a laptop that comes with only 8 GB of RAM. Even though most laptops manufactured today also come with NVMe flash storage (PCIe connected directly to the CPU, usually), which is used by all operating systems as swap.
            • mschuster91 11 days ago
              NVMe is by no means directly connected to the CPU; usually it's connected through at least one PCIe switch.
              • Rohansi 11 days ago
                It's harder to confirm for laptops, but you can refer to motherboard manuals to see whether any of your PCIe-related slots go through a switch. For example, my current PC has a PCIe x16 slot, an x1 slot, and two M.2 NVMe slots. It says everything is integrated into the CPU except the x1 slot, which goes through the motherboard chipset. I don't see why any laptop would make NVMe go through a PCIe switch unless the CPU doesn't provide enough lanes to support everything on the motherboard. Even at the lowest end, a dual-core Intel Core i3-10110U (a laptop processor from 2019) has 16 lanes from the CPU, which could support at least one NVMe drive without going through a switch.
  • p0w3n3d 12 days ago
    Apple hates it
  • oneplane 12 days ago
    On the other hand, with a reflow station everything becomes modular and repairable.

    I do hope that more widespread usage of compression attachment gives us some development in the areas where projects promising modular devices failed (remember those 'modular' phone concepts? available physical interconnects were one of the failures...). Sockets for BGAs have existed for a while, but were not really end-user friendly (not that LGA or PGA are that amazing), so maybe my hope is misplaced and many-contact connections will always be worse than direct attachment (be it PCB or SiP/SoC/CPU shared substrate).

    • RetroTechie 12 days ago
      > maybe my hope is misplaced and many-contact connections will always be worse than direct attachment

      As much as I like socketed / user-replaceable parts, fact is that soldering down a BGA is a very reliable way to make those many connections.

      On devices like smartphones & tablets RAM would hardly ever be upgraded even if possible. On laptops most users don't bother. On Raspberry Pi style SBCs it's not doable.

      Desktops, workstations & servers are the exception here.

      Basically the high-speed parts of a system need to be as close together as physically possible. Especially if low power consumption is important.

      Want easy upgrades? Then compute module + carrier board setups might be the way to go. Keep your I/O connectors / display / SSD etc, swap out the CPU/GPU/RAM part.

    • jcotton42 12 days ago
      > On the other hand, with a reflow station everything becomes modular and repairable.

      Not for the average person.

      • redeeman 11 days ago
        true, but can the average person replace the inner tube on a bicycle wheel? :)
        • pezezin 11 days ago
          Yes? I did it many, many times as a kid, it is not that difficult.
          • lazide 10 days ago
            I suspect the poster would argue you’re not average - possibly even because you’re on HN to say so.
    • zokier 12 days ago
      > On the other hand, with a reflow station everything becomes modular and repairable.

      until you hit custom undocumented unobtainium proprietary chips. good luck repairing anything with those.

  • quailfarmer 11 days ago
    I'm sure this will find use in Business-Class "Mobile workstations", but having integrated DDR4 in my own hardware, I have a hard time seeing this as the mainstream path forward for mobile computing.

    There's lots of value in tight integration. Improved signal integrity (ie, faster), improved reliability, better thermal flow, smaller packaging, and lower cost. Do I really want to compromise all of those things just to make RAM upgrades easier?

    And how many times do I need to upgrade the RAM in a laptop, really? Twice? Why make all those sacrifices to use a connector, instead of just reworking the DRAM parts? A robotic reflow machine is not so complex that a small repair shop couldn't afford one, which is what you see if you go to parts of the world where repair is taken seriously. Why do I need to be able to do it at home? I can't re-machine my engine at home. It's the most advanced nanotechnology humanity can produce; why is a $5k repair setup unreasonable?

    This is not to mention the direction things are really going, DRAM on Package/Die. The signaling speed and bus widths possible with co-packaged memory and HBM are impossible to avoid, and I'm not going to complain about the fact that I can't upgrade the RAM separately from the CPU, any more than I complain about not being able to upgrade my L2 cache today. The memory is part of the compute, in the same way the GPU memory is part of the GPU.

    I hope players like iFixit and Framework aren't too stubborn in opposing the tight integration of modern platforms. "Repairable" doesn't need to mean the same thing it did 10 years ago, and there are so many repairability battles that are actually worth fighting, that being stubborn about the SOTA isn't productive.

    • Timshel 11 days ago
      > I'm sure this will find use in Business-Class "Mobile workstations", but having integrated DDR4 in my own hardware, I have a hard time seeing this as the mainstream path forward for mobile computing.

      I don't know, I would say the reverse: workstations might need the performance of DRAM on package/die, but I don't believe that's the case for the mainstream user.

      > A robotic reflow machine

      Same: maybe viable for servicing enterprise customers, but probably way too expensive for the mainstream.

      I certainly hope that players continue to oppose tight integration, and I'll try to support them. I value the ability for anyone to swap RAM and disks to easily upgrade or repair their device more than an increase in performance or even battery life.

      I recently cobbled together a computer for a friend's child from components of three different computers; any additional cost would have made the exercise worthless.