Details on AMD's Quirky Chipset Solutions for AM5

(angstronomics.substack.com)

142 points | by walterbell 697 days ago

12 comments

  • AaronFriel 697 days ago
    What an unusual departure from their previous generation. The prosumer and just-below HEDT market loved the Ryzen chips for their generous amount of PCIe lanes. The x670 looks to - confusingly - reduce CPU-available lanes/slots and risk contention on the chipset for peripherals and storage. No one will want to debug a daisy chained M.2 or USB 4.0 connection suddenly slowing down.

    If true, and if it's also true that the non-Pro Threadripper line is gone, this leaves a large gap for Intel to exploit with Alder Lake X. The Threadripper line's 60+ lanes of PCIe effectively took over the HEDT market. Intel not only couldn't compete on performance; for storage arrays, ML workstations, and other important niches, Threadripper was the only viable option.

    I'm hoping the rumors of Threadripper's demise are wrong, and both Intel & AMD are able to compete in the space. That's best for consumers, not this alternating pendulum. (See: Threadripper price hikes.)

    • toast0 697 days ago
      I don't see the departure?

      AM4 CPUs have an x16 for GPU, an x4 for NVMe storage, and an x4 for the chipset. According to the article, AM5 CPUs will have an x16 for GPU, an x4 for NVMe storage, an added x4 for USB4 or more NVMe storage, and then an x4 for the chipset, plus whatever the USB stuff means.

      The rumored chipset takes that x4 and offers two x4s, four SATA/PCIe 3.0 x1 ports, six USB 3 ports, and six USB 2 ports.

      There were a bunch of different chipsets for AM4, most of which didn't have more connectivity than that. If you build the X670 by linking two of the new chipsets, and consider the added x4 from the CPU, you still have pretty similar connectivity to the X570, but you only have 12 lanes squeezing into the x4 for the chipset instead of 16. There's certainly a discussion to be had about the CPU lanes being 5.0 and the chipset lanes (including the uplink to the CPU, apparently) being 4.0, but while there's not much in the way of PCIe 5.0 devices, that discussion is mostly hypothetical. If PCIe multiplexers weren't immensely expensive, the 24 CPU lanes at PCIe 5.0 would be equal to 48 lanes of PCIe 4.0 (plus 4 more for the chipset), and pretty close to the Zen 2 Threadripper's 56 + 8. Lack of cheap multiplexers means that's not really how the cookie crumbles, though.
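
      As a back-of-the-envelope check on that equivalence, a minimal sketch (lane rates from the PCIe specs; encoding overhead simplified to 128b/130b):

        # Effective per-lane PCIe bandwidth, one direction, in GB/s.
        GT_PER_LANE = {"3.0": 8, "4.0": 16, "5.0": 32}  # raw GT/s per lane

        def gb_per_lane(gen):
            """GT/s -> GB/s after 128b/130b encoding (gen 3 and later)."""
            return GT_PER_LANE[gen] * (128 / 130) / 8

        print(24 * gb_per_lane("5.0"))  # 24 lanes of PCIe 5.0: ~94.5 GB/s
        print(48 * gb_per_lane("4.0"))  # 48 lanes of PCIe 4.0: ~94.5 GB/s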

      • tedunangst 697 days ago
        The three x4 links aren't sharing equally. One gets half the bandwidth and two get a quarter each when contended. If you're trying to read data off three drives at once, it's going to appear like one is twice as fast. But then you try to reproduce it by reading off one drive at a time, and they all work at the same speed.
        • toast0 697 days ago
          Ok, but if you had 5 x4 drives before, one got full bandwidth from the CPU x4, and the other four would share (equally, I guess) the chipset x4. Now you'd get two on the uncontended CPU x4 lanes, and three sharing (unequally, as you mention). That seems slightly better than last time, but not an 'unusual departure' from the previous design; the CPU-to-chipset link is still very oversubscribed, but now you can add a second oversubscribed chipset if you want, instead of having a choice between chipsets with different levels of oversubscription (or no chipset, only available on boards that don't expose most of the lanes).
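
          To make the sharing concrete, a toy fair-share model of both layouts, assuming round-robin arbitration at every hop (real chipsets may weight ports differently):

            # Toy model: each switch splits its capacity equally among its
            # children; a (name, children) tuple is a downstream chipset.
            def fair_share(capacity, children):
                shares = {}
                per_child = capacity / len(children)
                for child in children:
                    if isinstance(child, tuple):
                        shares.update(fair_share(per_child, child[1]))
                    else:
                        shares[child] = per_child
                return shares

            UPLINK = 7.88  # GB/s, PCIe 4.0 x4 after encoding overhead

            # AM4-style: four drives behind the single chipset.
            print(fair_share(UPLINK, ["d1", "d2", "d3", "d4"]))
            # AM5-style: one drive on chipset 1, two behind chipset 2.
            print(fair_share(UPLINK, ["d1", ("chipset2", ["d2", "d3"])]))
            # -> {'d1': 3.94, 'd2': 1.97, 'd3': 1.97}: the unequal split.
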
    • adrian_b 697 days ago
      According to the article, there is no departure from their previous generation.

      On the contrary, compared with the previous generation, the AM5 socket has more PCIe lanes: it adds 4 PCIe lanes for attaching a Thunderbolt/USB4 controller directly to the CPU, for maximum performance.

      The changes are strictly in the chipset, which will now be able to provide more PCIe lanes than before on more expensive motherboards with 2 southbridge chips, but at the price that the extra lanes pass through an additional PCIe switch, so they have greater latency and their throughput can be diminished by congestion when all the peripherals are active simultaneously.

      Nevertheless, with 64 Gb/s links between chips (4 x PCIe 4.0), few peripherals will be fast enough to saturate the links and cause congestion, i.e. typically only the extra SSDs or extra GPUs, besides the primary GPU and the primary SSD, which are attached directly to the CPU.

      So the extra SSDs and extra GPUs should preferentially be attached to the first southbridge chip, not the second. This is the change that users of the new AM5 motherboards must be aware of.

    • zamadatix 697 days ago
      Threadripper/HEDT is a different line, socket, and chipset. This is for the consumer stuff and looks to have the same number of CPU direct lanes as last generation, just higher speed.
      • AaronFriel 697 days ago
        I am aware, and I believe I accurately described, first, the rumored changes to the Ryzen line and, second, the rumored cutting of the consumer Threadripper line.

        Is there some way I could edit my comment to make it clearer?

        > This is for the consumer stuff and looks to have the same number of CPU direct lanes as last generation, just higher speed.

        I think I disagree, as the PCIe 4.0 x4 slot off the chipset is now an additional step removed (and subject to greater contention). But I digress; my point is that AMD risks leaving a gap in their market coverage by doing two things:

        1. There are rumors they are no longer competing in consumer-accessible high lane count CPUs/chipsets.

        2. They have an opportunity with the next-gen Ryzen to close the gap by offering more lanes at 4.0 speed (they could offer 40 CPU-attached lanes!), but by moving to 5.0 exclusively, they limit expansion options.

        If both leaks/rumors are true, it suggests a missed opportunity.

        • zamadatix 697 days ago
          I think it'd be a little clearer to avoid referring to any of the Threadripper family as "below HEDT" or the like. That's the Ryzen 9 family; even the lowest-end plain Threadripper was always HEDT. Nothing here has changed for the consumer market.

          I won't comment much on the rumour side, but overall I'm not as worried about whether things are named "Threadripper" vs "Threadripper PRO", as this never set the price floor anyway. E.g. the 16-core Ryzen Threadripper PRO 3955WX was $350 MSRP higher than the 16-core Ryzen 9 3950X.

          Whether or not they ever release any more low-price, high-lane-count options is an interesting question, but "AM5 was supposed to answer this, and because it didn't, AM5 is a departure from the past" rests on too many layers of assumption, rumour, and leaks to make for useful discussion.

          • AaronFriel 697 days ago
            I think you're still misreading me. The first graf is largely about Ryzen, and their 16-core model is just below HEDT. If anything, that SKU raised the ceiling for mainstream/consumer parts well into the HEDT space, except by lane count/expansion options.

            Threadripper changed the definition of HEDT, but rumor is that there won't be a Zen 3/4 option for it, and those users will only have Intel as an option. That's the unusual departure.

            The disappointing thing is that this chip on paper has the bandwidth - some pro users would likely be happy to get Zen 4 with the option for 0-2 PCIe 4.0 x16 slots and 2-6 PCIe 4.0 x8 slots. That's the amount of bandwidth this CPU has.

            • zamadatix 697 days ago
              Again, there is no reduction in CPU lanes compared to AM4. There are 16 for the GPU, 4 for storage, 4 for secondary storage or USB4 external PCIe, and 4 for the chipset to share out as desired. That's 28 total lanes, just like AM4.

              Chagall/Zen 3 did drop the TRX40 chipset, but again, the sWRX8 class was only a small uplift from prosumer anyway, so I'm not sure the loss of sTR4 is all that interesting or best compared to AM5. The question is whether they decide to continue releasing Threadripper Pros at an attractive price point, not lamenting why AM5 didn't absorb sTR4.

    • __alexs 697 days ago
      PCI-E 5.0 has a ridiculous amount of bandwidth for consumer purposes. They just don't need as many lanes.
      • Laforet 697 days ago
        More lanes, more wires, higher complexity, higher cost.

        Not surprising at all, given that the same vendor thought it a fine stratagem to market a midrange dedicated GPU with only 4 PCIe 4.0 lanes. They are banking heavily on the trend of turning PCIe into a serial bus of sorts.

        • formerly_proven 697 days ago
          > on the trend of turning PCIe into a serial bus of sorts.

          PCIe switches used to be far more common before certain things happened in the market and they became uber-expensive. I have a few NICs where, instead of using multi-port controllers, they just used PCIe switches... arguably I/O hubs have probably been the cheapest PCIe switches around for some time now...

          • 3np 697 days ago
            Still haven't found anything compelling for PCIe 4, but for PCIe 3 I've had no issues so far with 10Gtek, which are reasonably priced.
        • colejohnson66 697 days ago
          > They are banking heavily on the trend of turning PCIe into a serial bus of sorts.

          It was always a serial bus? PCIe 1.0 was just too slow to handle everything without x16 links. In fact, x32 slots/cards existed in the server market to make up for the missing bandwidth.

      • amarshall 697 days ago
        This only works if all devices are PCIe 5, though. There are still a lot of PCIe 3 or even PCIe 2 devices that could make do with very few PCIe 5 lanes but need a lot of lanes regardless.
        • mikepurvis 697 days ago
          Does an older device still occupy a whole lane right to the processor though? Or does the motherboard/chipset multiplex that onto a shared 5.0 lane into the CPU?

          My read of it is the second, but I'd be happy to be shown wrong on this.

          • zamadatix 697 days ago
            Only the CPU direct lanes are PCIe 5.0 and, being direct, none of that gets multiplexed. You either use all of a direct slot's assigned lanes at 5.0 speed or you miss out on that bandwidth.

            Everything else that's not CPU-direct (wired and wireless networking, SATA storage, non-primary USB ports, and 3 PCIe 4.0 addon cards) is connected via the chipset(s), which connect back to the CPU via a single shared PCIe 4.0 x4 link. These are multiplexed, but nothing here is using PCIe 5.0 bandwidth, and even just 1 busy slot off this collection can consume the entire uplink bandwidth.

    • 7speter 697 days ago
      >this leaves a large gap for Intel to exploit with Alder Lake X.

      Maybe the pricing you expect will be at a level that I am going to ignore anyway, but I think AMD may be reducing features and PCIe lanes because Intel has been doing just fine not giving away the kitchen sink in that area, at least from what I observe. Isn't Intel kind of dragging their feet with any new HEDT lines?

    • freemint 697 days ago
      If the number of pins in the socket doesn't change significantly (as it does between Ryzen and Threadripper), I doubt there will ever be room for more PCIe lanes.
  • formerly_proven 697 days ago
    Interesting to note that, just from memory, this means most of the increased I/O possibilities actually come from the CPU itself, not the chipset. 2x7 W also means that chipset power remains pretty much the same compared to X570, though two chips will lend themselves better to effective passive cooling.
    • ridgered4 696 days ago
      Yeah, there's a lot to be said about the daisy chain, but the real news for me here was that 4 CPU lanes are added from last generation. I personally would rather they be used for an M.2 slot than a Thunderbolt controller, and hopefully they will be in some boards. (I don't see the point of Thunderbolt on a desktop.)

      While the daisy chain is interesting, it doesn't sound like it gets you much at the end of the day, and it will double power usage (which was a real sore spot with X570 vs the ASMedia chipsets). The dual chipset sort of seems like a luxury item without a purpose in most cases. Maybe we'll see some interesting designs with server-oriented boards, but doubling the power usage doesn't seem worth what it gets you, to me.

  • zamadatix 697 days ago
    x16 + x4 has worked fine for consumers at PCIe 4.0, since GPUs aren't benefiting from the bandwidth, so it should work fine with PCIe 5.0. Even if you bifurcate to x8 + x8 + x4 so you can have another high-speed addon card of some sort, that's still ~250 Gbps of bandwidth per card, which is more than any home or gaming setup will be bottlenecked by (workstation and server, sure, but those will be different boards).

    I'm not quite sure the dual-linked chipset option for the high end really gets anyone anything. I mean sure, double all your ports... but half of them now have 2x the latency penalty to the CPU, and all in all they're sharing less bandwidth than your primary storage slot has alone.

    • wtallis 697 days ago
      > where half now have 2x the latency penalty to the CPU

      Are any of the typical uses for those IO lanes latency-sensitive enough for that to matter at all? Consumer-grade networking, SATA storage, and most USB use cases all involve latency high enough that one more PCIe switch wouldn't be noticed.

      • zamadatix 697 days ago
        The NVMe off that second slot in particular is what's going to be painful for that market segment. If it weren't the topology used specifically for the high-performance market segment, I'd say it doesn't matter, but I'd be willing to bet most in that segment would actually want that 2nd x4 link direct to the 1st chipset like last gen, not to a second chipset just so they can have even more built-in USB and SATA ports.

        But who knows, maybe "I have a great gaming PC, it's got 25 USB ports!" (yes, that's the actual number) is more marketable than actually tuning for performance.

        • wtallis 697 days ago
          > The NVMe off that second slot in particular is what's going to be painful for that market segment.

          Definitely not. Consumer Optane is dead, so the only devices that will get used in those slots have inherent latencies that are multiple orders of magnitude higher than those typical for a PCIe switch. The extra switch latency gets entirely lost in the noise, which I've observed on multiple occasions while reviewing "NVMe RAID" products.
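
          Order-of-magnitude arithmetic backing that up, with assumed rather than measured numbers:

            # Assumed figures (not measurements): ~1 us per extra switch hop
            # vs tens of us of inherent flash latency per QD1 read.
            FLASH_US  = 60.0   # typical TLC NAND random-read latency
            STACK_US  = 5.0    # controller + driver overhead
            SWITCH_US = 1.0    # one extra PCIe switch, round trip

            direct = FLASH_US + STACK_US
            behind = direct + SWITCH_US
            print(f"{direct} us direct, {behind} us behind the switch "
                  f"({SWITCH_US / direct:.1%} overhead)")  # ~1.5%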

      • justsomehnguy 697 days ago
        I would say that in a typical consumer setting even 2x latency for the NVMe drives wouldn't be noticeable, especially considering there is one directly on the CPU.
      • Strom 697 days ago
        How much extra latency are we talking here? Anyone have any actual numbers?
        • jleahy 697 days ago
          I would guesstimate low single digit microseconds (and guesstimating latency is pretty much my job).

          Typical USB poll rates are 10-100 milliseconds. AMD made a perfectly sensible choice.

          • NavinF 697 days ago
            Wat. All but the cheapest, crappiest mice have been using 1 ms polling for the last decade. IIRC USB 3.0 is effectively 8 kHz and doesn't use polling, which does make a difference for real-time video (webcam and capture card) latency and lip syncing.

            But nobody cares about USB. A lot of applications do 10000 reads from your SSD in series (qd1) and that’s where latency matters. I agree that it should be on the order of 1us. And since most people only have one SSD, chipset latency is not that important.
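
            Rough numbers for that serial-read case, assuming ~70 us per random read and ~1 us per extra hop:

              READS  = 10_000   # serial reads, queue depth 1
              SSD_US = 70.0     # assumed per-read latency of the SSD
              HOP_US = 1.0      # assumed extra switch hop, round trip

              base  = READS * SSD_US / 1e6   # seconds
              extra = READS * HOP_US / 1e6
              print(f"{base:.2f} s baseline, +{extra * 1e3:.0f} ms "
                    f"through the extra switch")  # 0.70 s, +10 ms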

  • shantara 697 days ago
    How much of this information was confirmed, and how much are rumors and speculations? With such a radical change from the previous generation, it seems more prudent to wait the remaining couple of days until Computex to be absolutely sure.
  • glowingly 697 days ago
    Sounds like a decent approach. On the current AM4 setup, the chipset had an x8 link allocated (at least in DMI), but only x4 was ever hooked up. Based on the information here, too bad they weren't able to build on that and avoid the daisy chain setup altogether (x4 direct to each "southbridge").
    • _carbyau_ 697 days ago
      I thought the length of board traces was part of the reasoning against that idea.

      If you could have your CPU placed symmetrically in the middle of the board, with slots, ports, etc. both above and below so the traces don't get too long, then maybe? But that would go against the ATX spec.

      I also wonder how often communication happens from one device directly to another without touching the CPU? Is that a thing? DMA has been a thing for many years, but with the memory controller on the CPU, communications would still have to get there.

      • bpye 697 days ago
        Peer to peer DMA is a thing, though I don't know enough about PCI to say how it works on this sort of topology.
  • potiuper 697 days ago
    Amazing that no vendor has come out with an X300/Knoll mini-ITX board with a PCIe x16 slot along with an NVMe slot.
    • wmf 697 days ago
      There's some kind of market segmentation at play where nobody will make a high-quality board with a low-end "chipset" even though it would be perfect for many customers.
      • walterbell 697 days ago
        The segmentation goes the other way too. At one point, there were no Ryzen boards that could be rack-mounted for enterprise use.
        • JonChesterfield 697 days ago
          Memory slot direction doesn't seem that important. At least, I've got a desktop board in a rack that seems happy enough. Or is there something other than memory direction that puts you off, IPMI perhaps?
          • walterbell 697 days ago
            Yes, the ASRock X470 was the first Ryzen board with BMC/IPMI.
          • formerly_proven 697 days ago
            If you put it in a 1U case I would be very surprised if it didn't matter - I'm doing that with a very low power platform and the clearance between the memory and the case only leaves a 2-3 mm gap, which I doubt would permit sufficient airflow for a normal passive CPU heatsink with a normal CPU power.
            • JonChesterfield 683 days ago
              That's definitely true, 1U is mostly filled by memory height. My boxes all have GPUs in so I don't get to go that short.
    • numpad0 697 days ago
      I suppose this Knoll chip is intended as a license key for motherboards. Else there will be tons of ... interesting stuff.
      • baybal2 696 days ago
        Indeed, it's basically an expensive ROM chip with BIOS.
  • walterbell 697 days ago
    AMD Computex 2022 keynote with announcement: https://www.youtube.com/watch?v=BRtBB2VnF8M&t=145s
  • cebert 697 days ago
    I was under the impression that AM5 would support PCIe 5.0 instead of 4.0. I’m becoming less excited about this next generation from AMD than I was a few months ago.
    • wmf 697 days ago
      It will support PCIe 5.0 on the slots directly connected to the CPU, but probably only on X670E motherboards. Note that there will be very few 5.0 devices on the market and you won't be able to notice the performance difference.
      • trashtester 696 days ago
        My interpretation is that the chipsets will have PCIe 5 connections as follows:

        X670E: "Everywhere" according to the talk, which I interpret as:
        - 16 lanes of PCIe 5 to the main PCIe slots
        - 4 lanes of PCIe 5 to the primary NVMe slot (I would not be surprised if some boards expose this as a physical x16 slot with 4 lanes, though)
        - 4 lanes of PCIe 5 to the chipset (speculative)

        For the regular X670:
        - 16+4 PCIe 5 lanes to the "GPU" and "NVMe" slots, as on the X670E; perhaps only 1 NVMe slot, perhaps with an additional NVMe x8 slot when running 8/8/4 instead of 16/0/4
        - "only" PCIe 4 to the chipset

        B650: only the "NVMe" slot will have version 5.

        If I'm right, the Extreme chipset will offer twice the performance over the chipset link compared to AM4, which could allow it to fill some of the use cases currently covered by the non-Pro Threadrippers. If I'm wrong, even the Extreme X670E chipset will not be a good choice for those who depend on a lot of IO, compared to Threadripper or Intel.

        Hopefully I'm right, and in that case I'm not sure we need an intermediate step between the 7950X and lower-end Threadripper Pro setups, at least not for connectivity and I/O. And those who actually need more compute than the 7950X can offer will probably not care that much about the price premium of Threadripper Pro vs regular Threadripper.

      • mahkeiro 697 days ago
        Saying there are few devices on the market yet is not helpful. You don't buy a PC for just the next 6 months. In 1-3 years, when your PC will still be OK, it is very likely that more PCIe 5.0 devices will be out, and probably not compatible with your motherboard.
        • NavinF 697 days ago
          I upgrade every couple of years to the best hardware on the market, but even for me PCIe 5 is just “nice to have”. No GPUs support it yet and rumors suggest that RTX40 will also be PCIe 4.
    • paulmd 696 days ago
      This has been clarified by the actual presentation: X670E gets PCIe 5 on all CPU links, X670 gets PCIe 5 on the first slot and storage, B650 gets PCIe 5 only on storage.

      Of course, the chipset itself has nothing to do with what connectivity is provided to the PEG slots, since those run straight to the CPU - this is market segmentation in action: the CPU simply won't turn on those feature bits if you don't pay for the premium chipset.

      Also, as noted, on AMD the chipset has nothing to do with anything anyway. It is just an IO expander; it doesn't boot the CPU or do anything like that, and it is entirely out of the loop when the CPU is talking to devices attached to the direct PEG lanes.

    • cududa 697 days ago
      What specific benefit are you expecting from PCIe 5?
      • mjevans 697 days ago
        I'd like room for addon cards with more NVME drives or existing RAID / JBOD bulk IO cards.

        I'd be fine with PCI-E 4 across: x16 (gpu) + x16 (gpu/accessory) + x4 (nvme) + x4 (nvme) + x4 (their slow IO chipset).

        PCI-E 5 is 2x the bandwidth, so maybe some higher end boards might offer more choices for splitting those lanes up at slower speeds that work well with older addin cards.

        A 10 Gbit Ethernet card (1.25 GB/s) is on the radar for the lifetime of these systems. A 4.0 x1 link is sufficient at ~1.97 GB/s.
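
        A quick sanity check on which link width a given device needs at each generation (a sketch; encoding overhead approximated as 128b/130b):

          import math

          LANE_GBS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}  # GB/s/lane

          def lanes_needed(device_gbs, gen):
              return math.ceil(device_gbs / LANE_GBS[gen])

          print(lanes_needed(1.25, "4.0"))  # 10GbE: 1 lane at gen 4
          print(lanes_needed(1.25, "3.0"))  # but 2 lanes at gen 3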

        I'm worried too many boards will ship with two GPU slots (5.0? / 4.0 x8) and at most 2x NVME slots, and load _everything_ else on those x4 slots for the board chipsets, including accessory slots.

        • wtallis 697 days ago
          The PCIe root ports in CPUs usually don't support dividing up x16 links into anything smaller than x4 links. Even for dedicated PCIe switch chips, bifurcation down to x2 or x1 links is usually only found on the smaller switches that don't have multiple x16 links to begin with—though PCIe gen4 and gen5 have made x2 link width support show up for some of the big PCIe switches.
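
          For illustration, a sketch of the menu that x4-granularity bifurcation leaves you with on an x16 root port (the exact options vary by CPU and firmware):

            def bifurcations(total=16, widths=(16, 8, 4)):
                """All splits of an x16 root port into x16/x8/x4 chunks."""
                def split(remaining, parts):
                    if remaining == 0:
                        yield tuple(sorted(parts, reverse=True))
                        return
                    for w in widths:
                        if w <= remaining:
                            yield from split(remaining - w, parts + [w])
                return sorted(set(split(total, [])), reverse=True)

            print(bifurcations())
            # [(16,), (8, 8), (8, 4, 4), (4, 4, 4, 4)] -- no x2 or x1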

          If you want to load up a bunch of low-speed devices (and 10Gbit/s is low-speed now), the chipset is the right place to do it. Consumer systems really are not going to be bottlenecked by lots of devices sharing one x4 uplink to the CPU.

          • mjevans 697 days ago
            Sure, a chipset is the correct place to do that, but I'm worried about the bandwidth it can offer in this specific setup.

            The block diagrams show (for useful devices):

              CPU / APU:
              * RAM ???
              * 5.0 x8 + 5.0 x8 GPU(s?)
              * 5.0 x4 NVMe
              * 4.0 x4 2x USB4 #1
              * 4.0 x4 for PROM21
              * DP / HDMI #1
              * 2x USB 3.2 10Gbit
              * 3x USB 2.0 (why?)
              * SPI / GPIO HDAudio #1
            
            #1 expected ASM4242 mux with 2x DP for APUs -- It'd be nice if these pins could export additional PCIe lanes for non-APU systems.

            That 4.0 x4 set of lanes is expected to service up to...

              * 4.0 x4 (bridged daisy)
              * 4.0 x4 NVMe
              * 4x SATA 6Gbit/sec
              * USB 3.2 20Gbit
              * 4x USB 3.2 10Gbit
              * 4x USB 2.0
            
            +

              * 4.0 x4 (daisy to cards)
              * 4.0 x4 NVMe
              * 3.0 x1 2.5Gbit Ether
              * 3.0 x1 + 2.0 USB WiFi
              * 2x SATA 6Gbit
              * USB 3.2 20Gbit
              * 4x USB 3.2 10Gbit
              * 4x USB 2.0
            
            __ideally__ they'd have done something more like x4 PCIe 5.0 for the secondary NVMe drives and downstream system ports to share, along with an x2 PCIe 5.0 link utilized for the other existing ports on the two chips.
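
            Tallying the nominal link rates above against the single uplink (a sketch: these are link maxima, not sustained throughput, and full duplex is ignored):

              UPLINK = 7.88  # GB/s, PCIe 4.0 x4 effective

              downstream = {
                  "2x NVMe 4.0 x4":  2 * 7.88,
                  "6x SATA 6Gb":     6 * 0.60,
                  "2x USB 3.2 20G":  2 * 2.50,
                  "8x USB 3.2 10G":  8 * 1.25,
                  "2.5GbE + WiFi":   0.31 + 0.60,
              }
              total = sum(downstream.values())
              print(f"{total:.1f} GB/s of nominal ports behind a "
                    f"{UPLINK} GB/s uplink "
                    f"({total / UPLINK:.1f}x oversubscribed)")
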
            • wmf 697 days ago
              Realistically people don't max out all their ports at once. Heck, most of the ports on X670 boards probably won't be used at all.
              • mjevans 697 days ago
                The sort of person that buys the _high end_ prosumer boards DOES use things in bursts often enough for it to be a consideration.

                In the end, the chipset devices AMD has sourced from third parties are good as a general tool for manufacturers... My issue is that I'd _really_ like their high-end chips to support more PCIe 5.0 x4 lanes, possibly used in aggregations.

                Imagine if they instead supported 5.0 x16 (or x8+x8) AND 5.0 x16 (2x8 || x8+x4+x4 || 4x4). That'd allow for either a second full x16 slot for future mass IO devices (be it a GPU or NVMe riser card) or the full sized ATX boards with a good number of x4 slots.

                Maybe that is what a lower core count, higher mhz, thread-ripper socket was really for.

                • paulmd 697 days ago
                  > Maybe that is what a lower core count, higher mhz, thread-ripper socket was really for.

                  It is. Historically the HEDT sockets have often overlapped with the consumer socket in terms of core count - this was true of X58, X79, and sTR4, and X99 was so cheap that de facto it did overlap anyway (5820K basically cost the same as a 4790K, and motherboard costs were in-line with what we saw from X570 boards until the B550 line settled prices down a bit).

                  That’s fine because HEDT is not about core count, it’s about memory and PCIe lanes. The current offerings leave a void for "I want a big platform but I don't need >24 cores and I'm still somewhat price-sensitive", the classic "workstation/prosumer" tier that used to be serviced by things like the 5820K/3930K/1900X/1920X.

                  Potentially you could get to a similar place with a bunch of PCIe 5.0 slots attached to PCIe switch chips - this style of board used to be called “supercarrier” by one brand. Unfortunately it pretty much died out in the wake of SLI and crossfire becoming niche and then extinct. And the current crop of Intel and AMD boards only offer PCIe 5 on the first slot anyway so that isn’t quite as possible as you’d think at first glance.

                  It’s really a shame the way AMD hollowed out the HEDT segment and cranked prices. A 3960X is four 3600s on a HEDT package with a single bigger IO die instead of four little ones; it’s a very cheap chip to produce, and it should really go for something in the $700-800 range rather than $1600+.

                  (And the HEDT boards are also quite expensive for what they are - the ROMED8-2T gets you 7x slots of PCIe 4.0x16 full-capacity, with power delivery for 280W TDP CPUs, dual 10gbe, and BMC, for $600. Look at what a $1000 sTRX40 board buys you and just laugh, "gamer" boards are ripping you off.)

                  Again, the precedent is the 5820K and the TR1900 series where these savings were passed on to the consumer - it is historically abnormal for HEDT to be such a huge reach compared to desktop chips, but AMD isn’t interested in pursuing low-end (actually they aren’t even interested in releasing Zen3 HEDT chips at all outside WRX80) and Intel has abandoned the segment entirely for now. Maybe Alder Lake-X will change the situation and force AMD to pay a little more attention, just as it has forced some of the ridiculous 5000 series price increases to be backed down.

                  Right now it is actually worth a strong look at Epyc server boards like ROMED8-2T and chips like the 7402P because if you don’t need the absolute clock rate of Threadripper the Epyc chips are often cheaper per-core while offering a better PCIe and memory capability. That’s completely opposite from how HEDT has always worked but AMD is pushing hard in the server segment and sandbagging in the HEDT segment and that flips the math in a lot of homelab or workstation situations.

                  • paulmd 696 days ago
                    (note: the "PCIe 5 only on the first slot for X670E, no PCIe 5 for X670" appears to have been a false rumor; per the Computex presentation it's "X670E is PCIe 5 on everything, X670 is PCIe 5 on the first slot, B650 is PCIe 5 only on storage".)
            • wtallis 697 days ago
              To justify worrying about available bandwidth, you shouldn't be listing available ports but instead listing specific devices along with a use case that would actually have them actively transferring data simultaneously in the same direction at speeds that would make the x4 uplink problematic.
            • toast0 697 days ago
              > #1 expected ASM4242 mux with 2x DP for APUs -- It'd be nice if these pins could export additional PCIe lanes for non-APU systems.

              Keynote slides that just came out show RDNA2 in the Zen4 I/O die, so it looks like there won't be non-APU systems. I think using pins for PCIe sometimes and DisplayPort other times, depending on the CPU you install, would make things more confusing, IMHO.

              • mjevans 696 days ago
                Yes, seeing those this morning: if even the higher-end CPUs all come with at least an anemic framebuffer GPU, a server could use the x8 and x8 links intended for a desktop's GPU as IO expansion slots. It looks more palatable as a 'could be pressed into service as a server' segment for both new and hand-me-down builds.
      • cebert 697 days ago
        I have an old Zen 1 desktop that has served me well, but it is becoming outdated. If I am going to upgrade to DDR5 and AM5, I’d prefer to be PCIe 5 compatible so that my upgrade investment is maximized.
    • rasz 697 days ago
      It will, but only if you pay extra for the fake "X670E" chipset while getting the same X670 motherboard, just with PCIe 5.0 unlocked.
  • cududa 697 days ago
    Here’s where this gets cool. If you’ve ever gotten into pro-level “overclocking” of RAM channels to maximize performance, this would leave you scratching your head.

    Won’t the timing of each chipset on each board need to be painstakingly, individually configured to avoid latency from a “default” profile used in all chips (which inherently vary between every batch)? Yes, yes it would.

    Unless you had an automated solution. And if you have an automated way, you could optimize CPU and DDR timing instead of using “buffered”/“safe” latency settings to accommodate manufacturing variance.

    Well, it turns out AMD has a method for that. And given the necessity of a tech like this to make daisy-chained chipsets work, it would seem this isn't an idle patent. AM5 systems are basically going to get a 10-20% performance bump from that optimization. Which is wild.

    https://www.tomshardware.com/news/amd-patents-automatic-memo...

    • wmf 697 days ago
      > Won’t the timing of each chipset on each board need to be painstakingly individually configured to avoid latency from a “default” profile used in all chips (which inherently vary between every batch)?

      No, because PCIe works completely differently from RAM. PCIe is a packet-based protocol that's very latency-tolerant. (Multi-lane PCIe links may require deskew but that's a standard feature that has existed all along.) I think it was Marcan who tunneled PCIe over RS-232 and the chips didn't even notice.

      • zamalek 697 days ago
        This isn't really true for PCIe 4 and beyond. For example, if you use a competently manufactured PCIe 3 riser in a PCIe 4 GPU configuration, the GPU will fail to POST more times than it succeeds (on the order of 1 successful boot in 10), if it POSTs at all. Given how expensive PCIe 4 risers are, most people choose to downgrade to PCIe 3 in the firmware settings.
        • AlphaSite 697 days ago
          Are you sure this isn’t because of the tightened electrical tolerances of PCIe’s physical layer, not the protocol itself?
        • wmf 697 days ago
          That's signal integrity not latency though.
      • cududa 697 days ago
        Well, the chipset also handles shuttling RAM channels...
        • NinjaKitten 697 days ago
          No. You appear to have a fundamentally flawed understanding of the system at hand.
          • cududa 697 days ago
            Perhaps. I'm 16 years out from AMD RAM/CPU channel optimization. If the chipset isn’t handling RAM channels, what component is?
            • wtallis 697 days ago
              The DRAM controller has been on the CPU socket since AMD's Opteron/Athlon 64 (2003) and Intel's Nehalem first-gen Core i5/i7 products (2008). AMD's recent migration toward multiple chiplets on the CPU package has not changed the direct connection between DRAM and the CPU socket.
              • Laforet 697 days ago
                >AMD's recent migration toward multiple chiplets on the CPU package has not changed the direct connection between DRAM and the CPU socket

                This is true. However, with the MCH off-chip and worst-case inter-core latency comparable to a DRAM refresh cycle, AMD could in theory move the MCH away from the socket and see no major performance penalty.

                Not to mention the Zen 2 IO die is just a cut-down version of X570, or maybe it is the other way around heh?

  • sylware 696 days ago
    Maybe everything (Ethernet/HD Audio) should be on the CPU die for good (if there isn't too much EM interference). Maybe a legacy SATA controller hardwired to a few CPU PCIe lanes, and an optional WiFi chip hardwired the same way.
  • rasz 697 days ago
    >their 600-series chipset

    No, the chipset is in the Ryzen CPU. What they mean is a Southbridge used by board manufacturers for market segmentation purposes.

    >AMD’s Ryzen line of client Desktop processors don’t even need a ‘chipset‘

    Amazing, an article stating the truth for once!

    >Despite this, nearly all Desktop motherboards still contain a chipset, but in its modern iteration serve as I/O expansion hubs

    and back to muddying the water. No, "chipsets" are used for market segmentation, and AMD is complicit with gems like this:

    https://www.itworldcanada.com/article/amd-zen-3-processors-w...

    https://www.extremetech.com/computing/320548-amd-will-suppor...

    https://www.anandtech.com/show/14477/amd-confirms-pcie-4-not...

    https://www.eteknix.com/some-asus-x470-and-b450-motherboards...

    not to mention overclocking lockouts

    all this bullshit depending on what ASMedia IO dongle connects over the PCIe bus on the motherboard.

    >dual-source these chipsets

    and then proceeds to explain how there will be none of that, with exclusive ASMedia once again :)

    >Low-end, Mid-range, High-end

    ah yes, back to artificial segmentation based on a limited-bandwidth x4 PCIe switch, while the CPUs provide plenty of dedicated PCIe lanes.

    >“X670E“ branded motherboards will require the primary PCIe and M.2 slots to support PCIe 5.0 linkrate. The chipset itself is identical to X670.

    and now we are at making up a totally fake name for a non-existent product, X670E, to upmarket PCIe 5.0, sweet!

    >designing only a single piece of silicon to span across multiple market segments, it is much more cost-effective to design for the mass market middle-end solution and double up for the high-end rather than designing a larger, more expensive die that fits the requirements of the top end

    spilling truth by accident once again. Yes, AMD is selling us the same cheap solution under different names at different prices, and we should be thrilled!
