AMD Ryzen 9 3950X Review

(anandtech.com)

434 points | by neogodless 1617 days ago

20 comments

  • mrandish 1617 days ago
    I noticed yesterday's articles about Intel's upcoming unusually large release of security mitigations. Under serious competitive threat for the first time in a while, I'm curious if Intel may have slowed the release of some mitigations to land after this round of comparison benchmarks.
    • Someone1234 1617 days ago
      That's an interesting idea I didn't consider.

      edit: Below is wrong, see author of article's reply below for why. Preserving for record.

      They have been holding back review samples of their latest i9s so reviewers wouldn't do head-to-heads with this new 3950X (but people have anyway, and the 3950X crushes the i9). See Linus Tech Tips' review of the 3950X this morning for a citation.

      • IanCutress 1617 days ago
        The new 18-core i9-10980XE isn't out yet, and review embargoes haven't been lifted. We know it's a slightly clock bumped version of the i9-9980XE, which was included in the AnandTech review and not the Linus review.

        Linus tested the 8-core i9-9900KS, which came out a few weeks ago, and those results are also in the link which is the title to this thread.

        • mrandish 1617 days ago
          Ian,

          What are your thoughts on the question of whether Intel may be strategically timing releases of some mitigations to favor their performance in review benchmarks?

          • IanCutress 1616 days ago
            This decision was likely made by a different department, and it just happened to land the day before an AMD launch. If Intel wanted to delay it specifically to look better in benchmarks, they would have waited until after the Cascade Lake-X benchmarks were out. It's meant to launch later this month.
        • greatpatton 1617 days ago
          Yet the main question remains whether the benchmarks were done with the latest mitigations (and microcode version) or without...
          • IanCutress 1616 days ago
            No, I didn't have time to run 30hr+ of tests on 7-10 Intel CPUs with only 24 hours notice from the announcement yesterday
            • washadjeffmad 1616 days ago
              That would be responsible to mention in the review, with a promise to publish new comparisons whenever new mitigations are released.

              Intel's security flaws pushed our LCRs up at least two years and cost us extra in over-provisioning to account for real and potential performance loss. If Intel isn't actually hardware mitigating (as in, making no effort to redesign their chips with security in mind and instead playing whack-a-mole with vulnerabilities as they're discovered) and reviewers are not highlighting and disclosing this, they're lying by omission.

              I'm not being rude, but don't give Intel a pass here because of any prior fondness you might have held for them. Every benchmark of an Intel CPU you've ever published was made false by their negligence. That's a big, confusing problem for consumers, and they rely on your clarity and diligence to bring it to their attention and correct it.

              • swinglock 1616 days ago
                How much does performance degrade in an average year for Intel and AMD? To a consumer, a desktop CPU is an investment lasting 3-8 years. Attempting to factor in predicted performance degradation based on history would be interesting and, I think, the right thing to do.
        • NicoJuicy 1617 days ago
          Those results are not valid anymore.

          Better phrased: won't be valid for long, if the security patches are released ;)

      • neogodless 1617 days ago
        https://linustechtips.com/main/topic/1123580-intel-could-tak...

        For those of you without really good search engines :) (Contains link to YouTube video review.)

    • NicoJuicy 1617 days ago
      Intel has been cheating on single-core performance against AMD for many years, thanks to predictive but insecure algorithms.

      Reality is just catching up; AMD has already been number one for a long time. Intel's cheating just didn't make it clear.

      I waited a year to express my opinion, until I could see some evidence.

      The time has come that AMD is even beating Intel's fake single-core performance, so slowly we will see all of Intel's shit appear. There is no point in hiding it anymore for the consumer/gaming market if they are not nr. 1 in single core. Not if they'd lose the trust of their business clients and the profitable server market.

      All those gamers that bought Intel for single-core performance, and OEMs that stick to Intel because of monopolies, bet on the wrong horse.

      https://twitter.com/damageboy/status/1194751035136450560?s=1...

      This feels "nice", although I'm affected too :(

      Waiting for decent AMD laptops...

      Ps. Long AMD, ever since Spectre and meltdown.

      • paulmd 1616 days ago
        Do remember that Meltdown and Spectre affected literally everyone else in the industry (including POWER, ARM, and SPARC) and it's only by chance that AMD's branch predictor happens to be relatively difficult to train to follow a predictable path (which is a necessary condition for some of these exploits). Intel isn't cheating any more than anyone else, AMD got lucky.

        Cascade Lake and the latest Coffee Lake (and Comet Lake) steppings have hardware mitigations for all the previously known exploits (although there is a new batch) including some for which AMD still requires software mitigations themselves (Spectre V2).

        • NicoJuicy 1616 days ago
          • paulmd 1616 days ago
            Most of those exploits are software/driver vulnerabilities, and AMD has plenty of their own. Like when Threadripper drivers installed an out-of-date web server running with SYSTEM privileges, or when a series of PSP and BIOS bugs let an attacker escalate from VM guest to control of the PSP and BIOS persistence. If you're not finding vulnerabilities in your software, you're not looking.

            Intel is in a lot more verticals than AMD is - networking, storage, etc - and all of those get their own CVE number. But that's a more nuanced approach than "big number bad" isn't it?

            Specifically, Intel is finding a lot because they're doing an enterprise-wide initiative to lock down their IPs. Is AMD making the same effort, or are there more vulnerabilities in those GPU drivers or in the chipset drivers that they're not looking for? After all, a year ago they were at the stage where they were installing an out-of-date Apache instance along with your motherboard drivers... doesn't sound like security is priority #1 for them.

            Again, low number sounds good. But if you're not looking, it's actually not good.

            And researchers are not looking on AMD much either. AMD is less than 5% of the server market right now, you get the big headlines from finding vulnerabilities in Intel now, not AMD. Give it another 5 years and there will be more scrutiny on AMD's hardware/software.

            • LargoLasskhyfv 1616 days ago
              > Specifically, Intel is finding a lot because they're doing an enterprise-wide initiative to lock down their IPs.

              Do they now? Why might that be? Not voluntarily, I guess; with all the mess others have found first, they fear losing ground and shareholders' bloodthirst.

        • Retric 1616 days ago
          Last I read AMD processors aren't affected by the Meltdown bug. And the overall performance hit was 9x as large for Intel.

          However, if that’s wrong I would love to read about it.

          • paulmd 1616 days ago
            Like I said, AMD was unique on Meltdown in that their branch predictor uses a neural-net based approach that is difficult to train to follow a specific path. It affected everyone else in the industry - Intel, IBM, ARM, Oracle, everyone.
            • Eopia 1616 days ago
              I think you are confusing Meltdown with Spectre V2.
            • tomp 1616 days ago
              Isn't the reason that AMD isn't affected that they don't prefetch/cache pages before checking an instruction's access privileges?
            • NicoJuicy 1616 days ago
              So researchers aren't checking AMD.

              A bit later, AMD was uniquely not affected. But they were investigated, yes?

              Seems like you like Intel more than AMD.

              No problems with that, I'm the other way around :)

            • Retric 1616 days ago
              If that's what you meant, then saying 'Meltdown and Spectre affected literally everyone in the industry' was literally false. Perhaps in the future you could say 'almost everyone else in the industry'.

              As to being lucky, I have no idea what their internal process is or what they considered. But luck seems like an odd way to talk about such things.

      • tempguy9999 1617 days ago
        Well, maybe. Probably so, in part, but Intel makes large dies rather than AMD's style of multiple smaller dies, which means signals have less distance to travel (and possibly don't have to be boosted to cross between mini-dies).

        Perhaps someone who knows about this stuff can comment.

        Edit: the mini-dies are chiplets/CCDs. Couldn't remember the name.

        • NicoJuicy 1617 days ago
          As I understood it, they had too many hiccups going to smaller dies because of internal issues.

          They plan to recover at the end of 2020, while temporarily leaving the market open for AMD. Accepting that made them willing to fix their security issues, which decreases performance.

          Since they have no product line in response to AMD for now, that was not an easy call for sure.

      • benibela 1616 days ago
        >Ps. Long AMD

        So you recommend buying AMD stock now? Or TSMC? Or is it already too late for that? The share price has already gone up from 28 to 38 since October.

        • paulmd 1616 days ago
          AMD has been overbought by brand fanboys/"amateur investors" egging each other on (see: r/AMD_Stock) and is now effectively pricing in a massive amount of expected future growth. Probably decades' worth.

          For context: Intel currently has a P/E ratio of 13.57, which is 70th percentile of tech stocks. AMD currently has a P/E ratio of 201.89, which is 5th percentile of tech stocks.

          https://www.gurufocus.com/term/pettm/intc/PE-Ratiottm

          AMD as a company has great potential in the long term, but still barely makes any money today. Like, AMD is hoping to double their server marketshare (only part of the company) by the end of 2020... which will take them to... almost 10% of the market. It's real easy to make big percentage gains when you have 0% share of the market. The AMD of 2016 had nowhere else to go but up.

          • NicoJuicy 1616 days ago
            Server, desktop, laptop and video cards.

            Lots of room if they break Intel's monopoly though.

            Intel had 16 billion revenue in 2018 and AMD "profits" from TSMC and their clients (e.g. Apple, Nvidia, ...)

            In the server market, it seems that they are partially breaking Intel's monopoly. Today's news on Intel won't change that perspective. P/E is also a bad ratio for a company that invests a lot.

            But yeah, a lot of hype on the stock.

            Intel keeps getting hit hard though: benchmarks, security issues, delays, failing at 7-10nm. They are currently still holding the monopoly, though.

            • NicoJuicy 1616 days ago
              FYI, Amazon's P/E was even 1,100 at one point. P/E is not the best indicator for a company that is investing/researching and selling at lower prices. The AMD product line-up proves that they are investing a lot.

              What I see lately is that AMD is raising prices. This seems like a good indicator of how they are positioning themselves versus the market.

          • mythz 1616 days ago
            Your bias is showing with your "brand fanboys/amateur investors" slurs.

            Their P/E is an irrelevant measure of AMD's market value, as it was for AMZN when they were aiming for zero profit and reinvesting everything towards growth. They're obviously interested in maximizing R&D and selling their chips at very competitive prices rather than maximizing profit, which currently offers much better value than Intel's chips.

            AMD only made 120M in profit last quarter on 1.8B revenue, up 9% YoY; Intel made 6B profit, down 6% YoY. How much more likely is AMD to double their profits to 240M than Intel is to 12B? If the market perception of AMD's chip superiority holds and OEM vendors start preferring AMD over Intel, it could happen in the very near future, not decades out.

            Amazon only recently started to turn a profit after achieving dominance in a number of markets. I don't expect AMD to focus on profits until they're able to get much higher market penetration, so I expect their focus is going to continue to be on R&D and low margins. So their P/E will remain high, along with their potential for future growth.

        • NicoJuicy 1616 days ago
          I won't recommend anything. The P/E is too bad to recommend it; Warren Buffett would think I'm a fool :p

          But I do think that AMD has enough space to grow over the next year:

          - servers

          - laptops

          - desktops ( OEM )

          And I'm curious if their video card line-up will be as good as their CPU line-up.

          Intel had 16 billion revenue in 2018. My guess is they will take a lot of that in the coming years.

    • NicoJuicy 1617 days ago
      Probably, since they have known about it for 6 months (according to the researchers).

      I suppose so, if you're getting this much bad news and accepting it.

      This is kinda weird: https://twitter.com/gchip/status/1187519015544967168

    • agumonkey 1617 days ago
      In a similar vein, a local supermarket catalog showed half the laptops based on Ryzen; the only Intel ones I saw were ultra-cheap Atom x5s.
      • velox_io 1617 days ago
        Wow, I didn't expect that. Laptops were the only market where Intel was still strong, still dominant. Having bought an RX 5700 XT recently, the power usage has been just as impressive as the performance. AMD will have some very nice products when they start putting mini RX 5000s in their CPUs. Decent performance without a discrete GPU is a real possibility. (Just need to add some HBM (High Bandwidth Memory) to the mix and things will get rather interesting!)
        • rarecoil 1616 days ago
          I bought an RX 5700 XT almost immediately after launch, and I can't say that I've had much luck with RDNA/Navi. I experienced freezes/crashes in Windows 10 while gaming, and the card's hot, loud, and still unsupported by ROCm for compute on Linux. There are rumours that ROCm won't ever support this generation of Navi and is waiting on Arcturus[1]. The drivers still aren't there and support is limited outside of some gaming use. Reviews also appear relatively mixed on Amazon.

          That said, even though perf-per-watt isn't great and that's usually my #1 concern, I still buy primarily AMD GPUs and suffer with the issues, as the NV CUDA/DL monopoly needs a counterweight. I'm just sitting on Vega until Arcturus.

          [1] https://www.phoronix.com/scan.php?page=news_item&px=Radeon-R...

          • snvzz 1616 days ago
            >RX5700XT almost immediately after launch

            So a reference card, which came with a horrendous, insufficient cooling solution.

            The experience has been much better for those who bought the custom cards (such as the Sapphire Pulse/Nitro) later. They also got far more mature drivers to start with.

            • Matthias247 1616 days ago
              I would agree on bad cooling on reference cards. But regarding drivers in general, I would not expect a lot of difference between all those cards.
          • wtallis 1616 days ago
            > There are rumours that ROCm won't ever support this generation of Navi and is waiting on Arcturus[1].

            I think you're misinterpreting that bit. Arcturus refers to an upcoming product based on their older, pre-Navi Vega microarchitecture. It seems Vega may be a more compute-oriented microarchitecture while Navi is more graphics/gaming-oriented and thus in the near term they are still developing more compute-oriented products based on the Vega microarchitecture. But that doesn't mean they have no plans to ever bring Navi support to ROCm, it just means that it takes less work and has more short-term payoff to add support for another Vega-based product.

        • NicoJuicy 1617 days ago
          I'm expecting decent AMD laptops next quarter; I'm not seeing decent ones yet. It's one of the few markets that "is in progress", so it seems.

          Mostly because of power consumption, fyi.

        • nolok 1617 days ago
          You will probably still see mostly Intel in expensive (>1000€) gaming laptops, but for everything else the power consumption of AMD's new chips alone should give them the lead until Intel can react.
          • what-the-grump 1616 days ago
            Until AMD releases stable drivers and gains market share, we'll just sit on the porch and buy cheap Intel.

            We have a fleet of BSODing machines because we decided to buy workstation graphics from AMD; I thought they would have figured out drivers by now.

            The knee-jerk reaction to go AMD just because they finally got closer to closing the gap with Intel is funny to watch, but not very forward-thinking.

            • nolok 1616 days ago
              Uhh, I've had no problems with the AMD integrated graphics on the twenty or so 2400Gs I have acquired in the past 6 months. Are you talking Linux? On Windows it works great.

              Or if you're talking about dedicated graphics and not integrated, then I don't see how it relates to the subject at hand, their CPUs.

          • lostlogin 1617 days ago
            ... Until Apple leave Intel. It’s been the eternal rumour but it doesn’t show any sign of dying.
            • nolok 1617 days ago
              My comment talks about specialized gaming laptop series (MSI G-series, ASUS Republic Of Gamers, HP Omen, Acer Predator).
            • NicoJuicy 1617 days ago
              Well, Apple has never been about performance/$ though.

              It's the "total package" ;)

    • starpilot 1617 days ago
      Intel is part of the deep state.
      • moonbug 1617 days ago
        If that's so, they're helping Make AMD Great Again.
        • starpilot 1617 days ago
          AMD was never great.
          • rasz 1615 days ago
            AMD had three great moments before Ryzen.

            1991. The Am386DX-40, introduced at $200 and quickly discounted to <$90 against Intel's slower ~$200 386DX-33 and $258 486SX-20. With this one CPU AMD won the budget segment for a couple of years https://redhill.net.au/c/c-4.html#dx40 Intel replied by suing AMD; they lost the case.

            2000. The Duron/Athlon and the first desktop 1GHz CPU. Intel replied by faking the release of a 1GHz Pentium 3 (only limited review samples, no stock shipped), followed by the real release of a 1.13GHz Pentium 3 dubbed the "fastest x86 CPU in the world", except they crashed in users' computers and resulted in an expensive recall and delays https://www.tomshardware.com/reviews/revisiting-intel,221-2.... https://www.theregister.co.uk/2000/08/28/intel_recalls_1_13g...

            2003. AMD64, the first desktop 64-bit CPU, starting at $218, half the price of a comparable Pentium 4. Intel replied by directly bribing, intimidating and blackmailing distributors/retailers, resulting in the biggest EU antitrust fine at the time, over $1.3 billion. Intel hasn't paid a dime of this fine to this date and keeps throwing lawyers at the problem and appealing. https://www.telegraph.co.uk/technology/2017/09/06/victory-in...

            • speedplane 1615 days ago
              > AMD had three great moments before Ryzen.

              AMD's current ascension can also be explained by the death of Moore's law. Moore's law is an exponential curve: if you start behind someone else on the curve and improve at the same rate, they will always get further and further ahead of you. You can never catch up.

              Assuming progress stops following an exponential, and starts looking more linear, with a reasonable (i.e., non-exponential) amount of investment, you can overtake someone who is ahead of you.

              This news may be great for AMD, but it does not bode well for everyone else.

          • wolf550e 1617 days ago
            Against the Pentium 4, AMD was amazing.
  • ibobev 1617 days ago
    As a developer I am keen to see some compilation benchmarks. Unfortunately those kinds of benchmarks are almost never included in such reviews. Instead there are many gaming benchmarks, whose purpose is not exactly clear to me, as gaming is obviously not the primary target market for the R9 3950X.
    • mmastrac 1617 days ago
      I believe that some of the YouTube channels have started including compilation benchmarks. GamersNexus [1] and Linus Tech Tips [2] both do!

      [1] https://www.gamersnexus.net/guides/3460-new-cpu-bench-method...

      [2] https://youtu.be/stM2CPF9YAY?t=4m49s

      • simplyinfinity 1617 days ago
        In addition, Level1Techs is more developer/sysadmin oriented than consumer oriented : https://www.youtube.com/channel/UC4w1YQAJMWOz4qtxinq55LQ
        • polivka2 1617 days ago
          And in addition to that, we have Open benchmarking and Phoronix for that. :)
        • velox_io 1617 days ago
          Likewise (I second L1Techs, btw). Having 16 cores / 32 threads will greatly aid testing server applications, making multi-threading and locking issues much more apparent (Amdahl's law) than they would be on a slightly older CPU with only 4-6 cores (an i7 6700K in my case).
      • IanCutress 1616 days ago
        Normally we at AnandTech run a Chrome compile benchmark, but for whatever reason it wasn't running properly on Win 10 1909. When I get a chance to debug (55k miles of travel over the next four weeks), I'm going to see if I can fix it and expand that bit of our testing.
        • seminatl 1616 days ago
          "Failed to compile Chrome" is a pretty huge caveat in which, I guess, your readers might take an interest.
          • mav3rick 1616 days ago
            Chrome has its own Windows builders as well on each platform.
      • sbov 1617 days ago
        Note that the GamersNexus review of the 3950X didn't include their compile benchmark test, at least on YouTube.
    • lliamander 1616 days ago
      As a developer, other benchmarks I would like to see:

      * compile benchmarks for different mainstream languages (Java, C#, etc.)

      * IDE related benchmarks (i.e. how long does it take to index a large solution/workspace)

      * source control conversion tests (svn to git; a rough sketch follows this list). Obviously not something that happens every day, but I've done it at two different jobs, and generally when it happens you are often converting many repos as an organization shifts its policy.

      * maybe some VM/docker related tests
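
      For the svn-to-git item, a rough sketch of what a timed run might look like (the URL and target directory are placeholders); git-svn replays history commit by commit and is largely single-threaded, so per-core speed matters as much as core count:

        # placeholder URL: times a full-history conversion of one repo
        time git svn clone --stdlayout https://svn.example.org/repos/project project-git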

      I'm probably asking for too much, but working on a micro-service system, even with enough RAM, my system becomes less responsive when I'm testing the interactions between services on my local machine. I'm also limited in the number of Java projects I can add to a single workspace in IntelliJ. I've been forced to open each project as a separate workspace and in a separate window.

      Granted, I'm working off of a dual-core i7 laptop (with 32GB of RAM) but I want to know what kind of upgrade it would take for those problems to go away.

      > Instead there are many gaming benchmarks which purpose is not exactly clear for me, after obviously gaming is not the primary target market for R9 3950X.

      Well, it must be said that the 3950X has the fastest single-core performance of AMD's lineup, so while it is overkill from a core-count perspective, it's still technically their best gaming CPU.

      • fierarul 1616 days ago
        That makes two of us! Always wanted to see reviews/benchmarks from the angle of a software developer job.

        I don't care about video transcoding or most of the metrics such articles show. But how long does my IDE take to index a big source tree? You bet!

      • IanCutress 1616 days ago
        Got any standardized methods that could be automated/scripted under a clean Windows environment and don't require internet access / licensing? Give me a shout - ian@anandtech.com
        • manigandham 1616 days ago
          .NET Core is open-source and free, with an offline SDK installer. [1] There are plenty of large open C# code bases, like the ASP.NET web framework [2], that you can use to compile and get a good performance score.

          And like the other comment said, you can probably get complimentary copies of the IDEs from Jetbrains to run benchmarks with. They're also scriptable which helps.

          1. https://dotnet.microsoft.com/download/visual-studio-sdks

          2. https://github.com/aspnet/AspNetCore

          3. https://www.jetbrains.com/
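
          As a very rough sketch of how a timed run could be scripted (the solution name is a placeholder, and it assumes the SDK is installed offline and packages were restored ahead of time, so the measured build itself needs no internet access):

            # restore once while online, then benchmark only the compile step
            dotnet restore MyLargeSolution.sln
            time dotnet build MyLargeSolution.sln -c Release --no-restore
            # (on Windows, PowerShell's Measure-Command can wrap the same build command)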

        • lliamander 1616 days ago
          > Got any standardized methods that could be automated/scripted

          I'm sorry, I wish I did! Interest in hardware performance is only a relatively recent interest of mine. I especially think the IDE tests would be a bit challenging as I'm not even sure how to instrument or measure those. I might reach out to the Intellij folks to see if they have any ideas..

          > under a clean Windows environment

          Ah, shucks. I haven't touched Windows in years :). That's the tricky thing: according to Stack Overflow's surveys, only about half of software developers use Windows. The other half are split evenly between Mac and Linux. Covering all three platforms would likely be a challenge.

          > Give me a shout

          Thanks! I'll do some research, and if I have something more concrete I'll be sure to pass it along.

        • Bayart 1616 days ago
          Honestly something like a Linux Kernel compilation would be the most straightforward, static and consistent method to test it out. Just have a basic environment without overhead (like Alpine Linux) with GCC or LLVM/Clang sitting on a USB key, use GNU time to measure the execution of your command, redirect to a log file and you're done.

          user@host$ (time make -j[insert number of threads]) 2>logfile.txt

        • hajile 1616 days ago
          Linux Kernel would probably be good. WSL could probably do it without much trouble though I don't know how WSL affects performance compared to native Linux.

          Compiling Visual Studio Code Typescript could be an interesting target too.

          There should be a large open-source C# and/or Java project out there that can be compiled as well.
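
          For the TypeScript case, a hedged sketch that should work against any large TS code base (project path is a placeholder; assumes dependencies were already installed with npm ci):

            time npx tsc -p ./tsconfig.json    # full compile per the project's tsconfig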

          • scns 1616 days ago
            Is Typescript compilation multithreaded?
            • mtone 1615 days ago
              The source code is in .ts, and appears to use Node child processes [0], so probably yes. I'd be curious to see what kind of resource usage is involved in building TSC.

              My limited experience using Node worker threads (to compile small React apps with WebPack in parallel) showed lots of overhead and memory usage for just starting the build (Node and library instances, can't say in what proportion), although it is still worth the expense.

              [0] https://github.com/microsoft/TypeScript/blob/master/scripts/...

            • lliamander 1616 days ago
              A good question, though I will point out that either way benchmarking would still be useful.
        • fierarul 1616 days ago
          Maybe anandtech.com could create / pay for such tests?
    • Matthias247 1617 days ago
      As already said by others, Phoronix has some compilation benchmarks.

      I have recently bought a 3900X. One thing about compilation I would note is that it heavily depends on your programming language and tooling. You get the most benefit on clean builds of big projects, which scale very well per core. On incremental builds or smaller projects it is however not uncommon to see < 25% of the CPU being used. That is especially true with Rust, where a single compilation unit (crate) is compiled in a largely single-threaded fashion. It might be better with C or C++, but then again linking might also be a blocker.

      It's all nice and fast, but in day-to-day use you likely won't see a 100% speed increase compared to a 3600.
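
      A quick way to see that difference for yourself, as a sketch (assumes GNU time at /usr/bin/time; its "Percent of CPU this job got" line shows how many cores were actually busy):

        cargo clean
        /usr/bin/time -v cargo build --release   # clean build: most cores busy
        touch src/main.rs
        /usr/bin/time -v cargo build --release   # incremental rebuild: far lower CPU percentage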

      • zlynx 1617 days ago
        Besides clean builds, there are Git branch checkouts. I hit this at least twice a week.

        Finish a feature branch and push it. Check out master. Pull master. Make new feature branch. Get an urgent bug fix request, switch to release branch, make a bugfix branch. Now go back to the new feature.

        During this I may end up doing large rebuilds because Git changes modification times even if you end up back with the same header file you started with.

        On a Thinkpad T540 the rebuild can take 30 minutes. It's less than 5 on a 3900X.

        • namibj 1616 days ago
          Use a build system that works with input-hashes. djb's redo is an example.
        • boring_twenties 1616 days ago
          Do you know about `git worktree`? You may not want a separate one for each branch you work on, but at least one for master and one for release would help you I think.
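
          Something like this, as a sketch (paths and branch names are only examples):

            git worktree add ../project-master master     # long-lived checkout for master
            git worktree add ../project-release release   # and one for the release branch
            git worktree list                             # shows every checkout of this repo

          Each directory keeps its own build artifacts, so switching tasks never touches the mtimes in the others.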
          • zlynx 1615 days ago
            There are problems with worktree. But yes, I often do use multiple checkout directories. When you do the git clone using the --reference option it gets most of the objects from the referenced directories.

            But this still results in slow compile times because the less used release directories get stale, have to be pulled up to date and often mostly rebuilt anyway. If I remember to actually use the second checkout directory, it does save on needing to rebuild the master/feature branches afterward.

        • nwallin 1616 days ago
          ccache solves this fairly well if you're on *nix.
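
          A minimal setup sketch, assuming ccache is installed (the cache size is just an example):

            export CC="ccache gcc"     # route compiles through the cache
            export CXX="ccache g++"
            ccache --max-size=20G      # big enough to hold several branches' worth of objects
            make -j"$(nproc)"          # mtime-triggered recompiles become cache hits
            ccache --show-stats        # check the hit rate afterwards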
      • mtone 1615 days ago
        Another potential area might be unit testing. As an example, I've been running a test suite on a C# GitHub project (xUnit and mocking libraries) with ~1.5K tests. The CPU is at a solid 100% usage all the way through.

        In higher-level languages, running tests often takes considerably more time than building. A prosumer CPU like this would bring nice day-to-day improvements.
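
        A rough sketch of timing just the test run (xUnit runs test collections in parallel by default, so this is the part that scales with cores; the solution layout is assumed):

          dotnet build -c Release                  # build once, outside the measurement
          time dotnet test -c Release --no-build   # measure only the parallel test run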

    • gameswithgo 1617 days ago
      GCC at least goes very fast on the new Ryzens, due to the huge L3 cache. That may translate to many compilers. Rust (which uses LLVM on the backend) also compiles very quickly.
      • velox_io 1617 days ago
        For the 4000 series, they're planning on sharing the L3 caches between CCDs/chiplets rather than per CCX. It will be interesting to see if more cache available per core, and more cores being able to access the same cache, will improve performance.

        The speed at which AMD has raised cache sizes and core counts lately is insane, when the max for mainstream CPUs was 8MB (and 4 cores) until only a few years ago. (Things seemed to have stagnated when quad-core launched; the Intel Q6600 was an amazing CPU at the time.)

        I've been a computer enthusiast for over 20 years and didn't expect advancements like the good ol' days (Moore's law) to ever return. (I don't think Intel did either; halving their prices pre-launch is unprecedented!)

        • gameswithgo 1617 days ago
          Yes I look forward to the better cache system in the next generation!

          Do remember that part of how AMD can get so much L3 on there is due to the chiplet design, which also adds a LOT of latency to RAM accesses. Almost all of the time it seems to be a net win though, and with the next generation it will only get better.

        • mjevans 1616 days ago
          It would be interesting if we moved back to the final layer of cache being off-core, likely within the IO die or attached as additional chips within the package. That should decrease the core size, improve yields, and allow for differentiation in L3 size as well.
          • gameswithgo 1616 days ago
            but it would add latency :( but maybe an L4 cache, off core, as additional chips!
            • wahern 1616 days ago
              At some point both higher latencies and deeper cache hierarchies seem inevitable: http://www.ilikebigbits.com/2014_04_21_myth_of_ram_1.html

              Though, perhaps 3D integration can stave that off for many more generations.

            • paulmd 1616 days ago
              Interestingly, Broadwell-S and Broadwell-R did this using the "Crystal Well" L4 cache.

              However, DDR4 memory now provides more bandwidth than the L4 could (although probably at a higher latency).

    • IanCutress 1616 days ago
      Normally I run a Chrome compile benchmark, but for whatever reason it wasn't running properly on Win 10 1909. When I get a chance to debug (55k miles of travel over the next four weeks), I'm going to see if I can fix it and expand that bit of our testing.
    • e12e 1617 days ago
      Agreed. At least it's part of the Phoronix benchmark suite, and OpenBenchmarking has some results:

      https://openbenchmarking.org/showdown/pts/build-linux-kernel

      I'd love to hear of a more richly supplied crowd-sourced database.

    • spamizbad 1617 days ago
      I'd love to see more container-oriented benchmarks, particularly with docker-compose and/or minikube.
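
      Even something crude would be informative, e.g. a sketch like this against whatever docker-compose.yml describes the stack under test (a fair readiness measurement would still need healthchecks or polling):

        time docker-compose build     # image build time for all services
        time docker-compose up -d     # cold-start time for the whole stack
        docker-compose down -v        # tear down between runs so results stay comparable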
  • zamubafoo 1617 days ago
    I find it interesting how desktop CPUs are essentially coming down to two enthusiast markets, developers/content creators/workstations and gaming.

    While the gaming market is (usually) seeking the highest top single core clock speed with respect to CPUs, it also relies on other expensive hardware. Meanwhile the dev/content creator/workstation market is much better served by these multi-core behemoths.

    Intel really has their work cut out for them on performance-to-cost for consumer desktops.

    • CoolGuySteve 1617 days ago
      Before Vulkan, there were more bottlenecks on multithreaded rendering. Most games pushed everything to a single drawing thread.

      Hopefully going forward that will change, but even with Red Dead Redemption 2's Vulkan implementation, a 6 core Intel/AMD chip is competitive with this processor at lower settings/resolutions where the GPU is less of a bottleneck.

      Compute bound games like Civilization 6 do scale with processor count however.

      • jsgo 1616 days ago
        that reminds me of what I had read about EverQuest 2. I downloaded it when it went F2P, years after launch. My gaming PC had some issues with it, which seemed nuts as it could handle more intensive games. After reading up on it, EQ2 came out a little bit before the shift to multiple cores, which was to their detriment: they basically expected single-core performance to keep going up substantially (as it had during the EQ1 era) rather than expanding to multiple cores. As such, they optimized for a configuration that never ended up existing.
      • swebs 1617 days ago
        Factorio is also limited by single core performance, with the developers saying they have no plans to implement multithreading.
        • kuzko_topia 1616 days ago
          This is unfortunate, and I'm saying this as an AMD owner and factorio player. Do you have a source for this?
      • shmerl 1616 days ago
        The Witcher 3 likes all 12 cores on my Ryzen 9 3900X when running in Wine+dxvk, thanks to Vulkan (especially for faster shader compilation). Same as they are useful for compiling Mesa and Linux kernel ;)
    • api 1617 days ago
      It's like vehicles. Desktops/workstations and even most non-tablet laptops are now analogous to small trucks such as pickups. Tablets and phones are cars. Most people drive cars. Trucks are for work or people who like to haul stuff.

      Servers and huge workstations I guess are like 18-wheelers and locomotives. :)

      The "mobile is the future of everything" people were wrong. It's not that desktop is going away, but the market is re-segmenting. Most casual users don't need a desktop. They just need a UI device that can run apps and access services. So you now have a bisection of the computing device market into pro/workplace type devices and casual user devices. I suppose there's a third category too: hobbyist and enthusiast devices. That's where I'd put things like the Raspberry Pi or more techie geared laptops that run Linux.

      I predict that rather than converging with mobile, desktop will actually pivot more toward pro, developer, and power user needs. Apple's recent 16 inch Pro release is a baby step in that direction on the hardware level. You'll probably see it at the software and OS level too. Desktop might actually shed a little bit of its user-friendliness gloss in favor of being unabashedly "pro." "If you don't want to know what a network, a folder, a file, or an IP address is, get a tablet."

      • jtbayly 1617 days ago
        Most people don't need a truck, maybe, but the three top-selling vehicles in the US are trucks. And in 2017 only 35% bought cars, with the rest being trucks and SUVs.
        • ajuc 1617 days ago
          The US is far from the average needs of a car user. Not a representative sample.
      • neogodless 1617 days ago
        Suddenly number of wheels is analogous to cores...

        Motorcycle - fast for gaming

        Car/SUV - good general purpose

        Truck (dually) - serious work (probably fun)

        18-wheeler - move a load of loads at once

        But apparently that makes the Ryzen 9 3950X a 16-wheel Porsche Cayenne Turbo S? Super fun to drive but you can use it to haul...

        • api 1617 days ago
          Hah... yeah the analogy gets silly if you make it literal and stretch it too far. I was just using it to talk about how different types of devices specialize for different use cases.

          Phones: ultra-portable communication and rapid service interaction device -- basically a pocket terminal. Skill level: beginner / non-technical.

          Tablets: casual computing devices suitable for reading, writing, some forms of content creation, and games. Skill level: beginner / non-technical.

          Desktops/Laptops: more powerful and flexible devices for serious content creation, high-end gaming, development, heavy number crunching, etc. Skill level: intermediate to advanced users. They're still relatively easy to use for casual use cases but they have more knobs, expose more details, and have less training wheels / guard rails to keep you from breaking things.

          Servers: big machines that sit in one place in a data center, office, or basement and run lots of stuff. Skill level: mostly advanced users.

          SBCs like the Pi: hobbyist devices for hacking, DIY IOT, DIY automation, etc. Skill level: mostly advanced users.

          Then of course there's a long tail of specialized and smaller devices like watches, FitBits, IOT, etc.

          • zamubafoo 1617 days ago
            Phones: Electric Scooters or Regular Bike

            Tablets: Electric Bike

            Laptops: Standard Car, SUV, or Light Work vehicle (F150 space)

            Desktops: Luxury, Enthusiast, or Specialist Market (Sports Cars, Anything above the F250, Hot Rod territory)

            Servers: Work Vehicles (Semis, Flatbeds, etc.)

            SBCs: DIY non-motorized vehicles

            IoT: Shoes

            I think it's a great analogy. You can always get an overpriced or over-spec'd vehicle for your needs, but there are clear uses for some of the form factors.

    • intarga 1617 days ago
      >While the gaming market is (usually) seeking the highest top single core clock speed with respect to CPUs, it also relies on other expensive hardware. Meanwhile the dev/content creator/workstation market is much better served by these multi-core behemoths.

      It's not really that cut and dried; plenty of dev workloads are better served by high single-core speed, while plenty of gaming workloads are better served by multiple cores.

      • zamubafoo 1617 days ago
        You are absolutely correct. I might be wrong, but I've always heard that the general rule of thumb for gaming CPU purchases is to get the best single-core speed if you aren't buying for a specific game. Also, I based my statement on my own workloads, which involve a lot of virtualization.
        • orclev 1617 days ago
          The gaming market is in some ways on the trailing edge of technology. While they're often one of the first to embrace new hardware, they're very slow to adopt changes in software or architecture because the engines that drive most of that are few and tend to evolve slowly. Until Unreal Engine, Unity, or one of the other major game engines fully embrace an architecture that allows developers to easily saturate multiple cores, most games are still going to be bottlenecked by single core performance. On the other hand, it should be blatantly obvious to anyone even remotely connected with development that full multi-core utilization is the only way to keep improving performance going forward, so sooner or later the game engine developers are going to need to fully embrace that.
          • e_proxus 1616 days ago
            Unity's new Data-Oriented Technology Stack (DOTS) aims to take advantage of multi-threading by default. It's done transparently as far as possible, with the help of the new Burst compiler, their new job system and the Entity Component System (ECS).

            https://unity.com/dots

        • paulmd 1616 days ago
          That's generally accurate. Per-thread ("single-thread") performance is the dominant factor in gaming performance. Even in multi-threaded games there is usually a critical thread that is limiting performance. Amdahl's Law: never not applicable.
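
          As a back-of-the-envelope illustration (the 90% parallel fraction is just an assumed number): speedup = 1 / ((1 - p) + p / n), so a game that is 90% parallel still only gets about 6.4x out of 16 cores.

            awk 'BEGIN { p = 0.9; n = 16; printf "%.1fx\n", 1 / ((1 - p) + p / n) }'   # prints 6.4x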
    • Someone1234 1617 days ago
      I agree with your overall synopsis of the status quo. We might however see gaming shift more to multi-core workloads as the next gen of consoles are highly likely to contain 8 core (16T) AMD CPUs based on Zen2 (and a lot of games are made for the lowest common denominator).

      This shouldn't be confused with the 8 core APUs found in current gen. Next gen will have an 8 core CPU and a NAVI 12+ GPU.

      • undersuit 1617 days ago
        LinusTechTips made a video recently where they took an Intel Skulltrail, an enthusiast dual-socket Core 2 Quad platform from 2008(!), through a modern OS and game test. The platform was struggling until they enabled the Vulkan API in games. We have consoles to thank for that; without the low-speed, high-core-count CPUs of the PS4 and Xbox One there'd be little justification for optimizing games for Vulkan and DX12.
      • paulmd 1617 days ago
        To be clear it's a shift from 8C8T to 8C16T (8C with SMT). The main change here is a massive (~3x) improvement in per-core performance.

        It's not actually a shift to (significantly) more threading but actually a shift to more per-core performance, like desktop PCs currently use. Consoles are getting desktop-like processors for the first time.

        This is creating quite a bit of anxiety in the desktop gaming community since high framerates depend on desktop processors being able to significantly outperform consoles on a per-core basis, and that will no longer be the case.

    • burtonator 1617 days ago
      I LOVE that the gaming market has created a serious desire for quality desktop hardware.

      I now build my workstations by hand and install Ubuntu. It's really nice to work on quality hardware!

      • kevin_thibedeau 1617 days ago
        As long as you want unnecessary lighting and gaping, EMI releasing, windowed cases.
        • LinuxBender 1617 days ago
          To your point, it is getting much more difficult to find heavy-duty cases that are 100% shielded. I prefer heavy-duty cases for the shielding, the durability and the benefit of acting as an additional heat sink.

          Case manufacturers could just add their LED lighting outside the case in a clear plastic lining, since it is just for effect. Or if they want to get fancy, put a slim LED screen on each side so you can display the inside of any PC, or make it look like you have a RasPi in the case. Or make it look like you have vacuum tubes, or perhaps a fish tank, or an animated hamster wheel with tiny humans running fast. And don't link it to the OS. Just put a USB port that reads a directory of animated things or pictures, much like modern LED picture frames. Come to think of it, just buy two LED picture frames and stick them to the side of the case.

        • overcast 1617 days ago
          All easily avoidable.
          • redisman 1617 days ago
            I just built a mid-range gaming PC. CPUs don't have RGB, memory modules don't have RGB, some overpriced motherboards have RGB, HDDs/SSDs don't have RGB, and some overpriced GPUs had it, but I was looking for value for money.
            • jsgo 1616 days ago
              on the GPU side, it seems RGB is the common thread. There are ways to disable it, but they aren't particularly great as they really only come into play post-OS loading (I have my GPU in an external enclosure so I see it more than I would in a desktop).

              Memory modules do have RGB, but you specifically have to buy them (they're the exception, currently). CPUs don't have RGB, but the fans that are going to be mounted on them may (the CPU fans that come with Ryzen chips have it, iirc). I don't see HDDs/SSDs going the RGB route considering they tend to be somewhat hidden away, but the faster storage sticks that are all the rage in gaming PCs now may be a candidate (I say that mostly because of the RGB memory modules mentioned earlier).

              It is a trend I wish didn't happen, honestly, but I know it is a case of trying to consolidate and appease two audiences (performance hungry users and enthusiasts) so it is what it is.

            • kevin_thibedeau 1616 days ago
              > CPUs don't have RGB

              The AMD Wraith Spire is a stock heatsink and comes with lighting except on the APU models.

          • fierarul 1616 days ago
            Only if you really try and know about it. The best case I liked had a 'window', but at least it's black otherwise.

            Turns out my graphics card has a blinking LED logo too. I would never have assumed, while buying it, that this was a thing.

        • AnIdiotOnTheNet 1617 days ago
          LEDs on RAM: for when you absolutely positively must have the most fabulous rig at the LAN party. I guess.
          • Fnoord 1617 days ago
            I don't find it fabulous at all. MacBook aluminium, SGI cases, old Mac Pro case though...

            For a gaming rig, I would take pride in it being silent and cold without spending too much (I know, those Noctua fans are still pricey but that is my upper limit).

            I decided to update my Intel 4770k to an AMD 3900X. I had to upgrade to DDR4 as well. One of the things I specifically did not have to upgrade is the case. Just a couple of new fans really.

          • api 1617 days ago
            It's so your VTEC will kick in, yo.
        • lorenzhs 1617 days ago
          We built a Ryzen machine in an old rack-mount workstation chassis at work before EPYC was released. The motherboard has a bunch of RGB contour lights on it, which you can see through the grille at the back of the chassis. Looks really daft in a rack.
          • gameswithgo 1616 days ago
            You've got a couple of options there: you can turn them off, or put black electrical tape over them if there isn't a way (there usually is a way).

            Or, own it: make them as bright and blinky as possible and then just love it.

            • account42 1616 days ago
              > you can turn them off

              Often only through the vendor's software, which of course is only available for Windows.

    • pmoriarty 1617 days ago
      I'd expect there are far, far more servers than there are developer machines. There are probably many more servers than hardcore gamers as well, and maybe even more than all gamers put together. It'd be nice to see some hard stats on this, however.
      • jwandborg 1617 days ago
        The vast majority of servers run on server CPUs, not on desktop CPUs. AMD's server CPUs are called EPYC, Intel's are called Xeon.
      • streb-lo 1617 days ago
        Which don't use desktop CPUs...
    • Tepix 1617 days ago
      You're forgetting about the low budget CPUs used for desktop PCs. AMD has a pretty good standing with their Vega GPUs built into the Ryzen/Athlon APUs.

      Or are you only talking about enthusiasts?

      • zamubafoo 1617 days ago
        If we are looking at the budget systems, then AMD wins hands down. Not only do their Athlon 200 CPUs have appallingly good cost to performance ratios (with Vega GPUs and a CPU cooler), they also share the same socket with the Ryzen lineup.

        At the moment, I wouldn't recommend anything but AMD for CPUs, since you can buy at any price point and then upgrade. I think the only price point where Intel does have a slight advantage is the ~$500 mark.

        • aoeusnth1 1617 days ago
          And even then Intel's only advantage is on single-threaded workloads. At $500, you can get a 12-core AMD (3900X) for the same price as the 8-core Intel (9900K).
  • paulmd 1617 days ago
    GamersNexus is really hammering AMD's deceptive marketing around boost clocks. The only time it can hit its advertised boost clocks is when it's in the menu and you have near-zero load, and even then it barely touches it for an instant. Under even a single-core load it's failing to meet its advertised spec.

    There was a lot of hubbub around this with the initial release; AMD released a BIOS which they claimed fixed this, but it looks like it still hasn't.

    https://www.youtube.com/watch?v=M3sNUFjV7p4

    The performance is good enough as it is; there's no need to lie about it being 200 MHz higher than it can actually reach. But here we are again 6 months later and AMD does it again...

    • nolok 1617 days ago
      Which is weird because they have the performance.

      Showing the perf of the processor does all the marketing it needs, no matter what clock it's running at, and with normal/turbo clocks customers don't even get it anymore anyway, so I don't understand why AMD is handicapping itself like this in the marketing department.

    • vondur 1616 days ago
      As far as I can see most of the reviews have mentioned how the 3950x is able to hit the advertised boost, so it looks like AMD is paying attention.
    • mackal 1617 days ago
      The issue seems to be that the max advertised boost is based on the number they plug into the boost algorithm. (I say "based on" since some of them, especially higher-end chips, are 25-75 MHz higher than the advertised boosts; a BIOS dev posted them somewhere that I can't find :/)

      There is an issue when the chips don't really hit it.

    • gameswithgo 1616 days ago
      People want big boost clocks, which is dumb. They got them (when under no load), which is dumb.

      Seems everything is fine.

  • NKosmatos 1616 days ago
    Way to go AMD! One of the benefits of competition between the duopoly. AMD has cornered Intel these last couple of years and I don’t see this trend changing soon, not with all these vulnerabilities that chipzilla is having ;-)

    One thing I'd like to note is that with all this computing power being available to users at a relatively affordable price, software developers (games, commercial software) won't optimize their code. I've seen it happen where a loop/scanning/sorting algorithm won't be optimized since the user will have a few cores and GHz to spare anyhow.

  • oouiterud 1617 days ago
    I read that all these new AMD CPUs support ECC, but it's been hard to find verification. Can anyone recommend a motherboard that both supports and uses ECC RAM with this new CPU?
  • neogodless 1617 days ago
    This isn't the most groundbreaking release - it's not the first 16-core chip you can buy (edited), nor the first 7nm. No new clockspeed records were set. Still, $750 for all that power!

    Any professionals shopping for this, or waiting for 24 or 32-core Threadripper?

    Anyone trying to upgrade on an older motherboard, or are you getting a matching X570 to ensure maximum boost and PCI-e bandwidth?

    • jagger27 1617 days ago
      I would say it is the first 16-core "consumer chip", since every other 16-core chip has either been HEDT-class or server-grade, both requiring expensive motherboards. I would be happy to try plopping this into my B450 motherboard from my 2700X. I'm happy with my current NVMe storage, which is just about the only thing that takes advantage of the extra bandwidth. Graphics cards aren't there yet.
      • neogodless 1617 days ago
        That's fair - looks like you need LGA 2066 for the Intel chips with those core counts, which aren't terribly expensive ($150+) but it's hard to find exactly which motherboards support something like the i9-9960X.

        Running a 2700X in an X470 but this is 3x the price of that chip, and personally, I won't get the necessary benefit. Still pleased to see exploding CPU power in the past few years!

        • distances 1617 days ago
          Also, nice to have that socket compatibility. I imagine I'll upgrade to the top-of-the-line CPU of the last AM4 iteration in 4-5 years, when their prices have dropped to a reasonable level. Let's see if that actually happens though, as changing a CPU is work-wise about the same as a full rebuild.
      • jankotek 1617 days ago
        I had a 24-core workstation 5 years ago for the price of this single chip. Refurbished server...
        • velox_io 1617 days ago
          The problem with this strategy is that they often have much lower clock speeds/ IPC, plus they are often quite power-hungry and lose the ability to overclock (that one's subjective). So you do often get bitten by Amdahl's law. However, if it is purely a rendering machine and single-threaded performance isn't an issue, you can get a lot of processing power for the money.
          • Damogran6 1617 days ago
            And potentially REALLY NOISY...
            • paulmd 1616 days ago
              GamersNexus had a fun video playing with a new Epyc server with Wendell from L1Techs... the delta fans on that pull up to 7A. Not 0.7A, 7A. More wattage than your CPU.

              https://youtu.be/la0_2Kmrr1E?t=164

              Not an Epyc thing; servers are hilariously loud and I've seen so many posts on selfhosted and homelab about "the wife made me get rid of my R710 because you can hear it in the closet/basement/etc all the time"....

          • jankotek 1617 days ago
            It was a good complement to a light laptop.
    • MrGilbert 1617 days ago
      I did some math for myself and realized that I can skip the 4 extra cores, and go with the Ryzen 9 3900X (530€ vs 820€).

      That's €44.16/core (3900X) vs €51.25/core (3950X) where I live. Yes, the base clock is 200 MHz lower, but for my use case (home server with a gaming VM and Minecraft server etc.), I need cores that I can assign. I'm not ready to pay €290 for an extra 4 cores yet.

      • gameswithgo 1617 days ago
        the 3900x base clock is actually higher.
    • velox_io 1617 days ago
      I'm looking forward to a 3950X; this is the first time you can get a lot of cores without the expense of low clock speeds.

      I was interested in the new Threadrippers; however, I do believe this gen is overpriced. I realise this is how supply and demand works, but I don't think the expense is worth it personally.

      In addition, from what I've seen so far, AMD/OEMs are charging a large premium for TRX40 motherboards. You're talking ~$550 and up. Bear in mind that for a similar price you can get an Epyc board with double the memory channels (8), AND add an additional CPU, AND dual 10GbE! (You would think a single port would be standard on most workstations in late 2019.) It's a lot of money for 2 additional memory channels and more PCIe lanes (PCIe is a serial interface, so it only needs one wire per lane, per direction).

    • bob1029 1616 days ago
      I am waiting for the 3990X announcement before pulling the trigger on 3rd gen TR. I am already running a 2950X in my main workstation, and it's currently really hard to make an argument for even more performance... That said, I am targeting the 3970X as my next upgrade option, but AMD might be able to upsell me on the 3990X. Once the trigger has been pulled on 3rd gen, I will look at repurposing the 2950X workstation as a Jenkins build agent. We currently only have 2 vCPUs in AWS for Jenkins, so this could make a huge difference for our build process and give the old machine a really good 2nd life.

      In terms of the 3990X specifically, I am most interested to see if they are going to provide additional platform capabilities. There were rumors regarding a TRX80/WRX80 platform, which seemed to imply to some tech journalists that there would be an octal channel variant of TR available early next year. It would be very hard to turn this down if it were an option.

    • snagglegaggle 1617 days ago
      The step from Ryzen to Threadripper is quite a big one. I'd benefit from a Threadripper system, but budget only allows a (top tier) Ryzen for now. I suppose spare money could go towards a Threadripper right now and I wouldn't consider it wasted per se.
      • Tepix 1617 days ago
        Even if you can get a good price for the CPU, the mainboards are very pricey compared to the consumer CPU boards.
        • penagwin 1617 days ago
          Last time I checked, motherboards for Threadripper CPUs were around $200-300. Is that still the case?

          The motherboard cost really eats into the budget for entry, as that's the price of a decent Ryzen CPU.

          • snagglegaggle 1617 days ago
            Exactly, and there's not a lot of used DDR4 (w/ or w/o ECC) on the market.
    • solotronics 1617 days ago
      If you look at performance per watt AND performance per dollar, it is a record breaker. Also, it has the highest base clock speed of any of the AMD chips.
    • chaosbutters 1617 days ago
      I'm waiting for a 64-core CPU. 32 is nice but still not enough.
      • ChuckNorris89 1617 days ago
        I feel your comment is just pointlessly snarky without mentioning which workloads it's not enough for.

        Nothing is ever enough, but we'll always be limited by the economics of scale. And if you're in the exclusive demographic where this doesn't apply, then you'd have the resources for workarounds to performance limitations without complaining (server-grade chips, FPGAs, supercomputers, dedicated ASICs, clusters, etc.).

      • lorenzhs 1617 days ago
        Then you can buy an EPYC 7702P. It's not like they don't make 64 core CPUs. Or get a dual-socket board and buy two 7702s if 128 cores are more to your liking.
      • ClumsyPilot 1617 days ago
        Try running a Kubernetes dev environment with all the crap your team has hoarded on it: Elastic, Prometheus, 5 different database engines and dashboards, etc. Then watch the system chug.
      • Koshkin 1617 days ago
        Are you on 48 cores currently?
      • qzw 1617 days ago
        64 cores ought to be enough for anybody...
        • neogodless 1617 days ago
          Yeah, that's what... 10kb of RAM per core?
          • dragontamer 1616 days ago
            Funny story. The Vega64 has 8GB of RAM and 4MB of L2 cache (last-level cache) across 4096 SIMD-cores. That's 1kB of L2 cache and 2MB RAM per core.

            It gets worse: although there are 4096 cores, the Vega64 isn't fully loaded until you stick 4-threads-per-core (Or more precisely, 16-wavefronts per Compute Unit (256-SIMD-cores)). That means you will actually need to run 16384 SIMD-threads before the Vega64 is fully utilized.

            That's less than 512kB of main RAM per thread, and less than 256 bytes of L2 cache per thread. Better hope your threads are sharing a lot of memory...

            --------

            AMD addresses this problem with NAVI / RDNA: NAVI is fully utilized at 1-thread per core. So you only need 2816 threads on the 5700 XT to fully utilize the system.
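
            Back-of-the-envelope version of that arithmetic, using the Vega64 figures above (a rough sketch, ignoring how memory is actually partitioned across CUs):

                vram = 8 * 1024**3        # 8 GB of VRAM
                l2   = 4 * 1024**2        # 4 MB of last-level cache
                simd_cores = 4096
                full_occupancy_threads = simd_cores * 4   # 4 threads per core

                for label, n in [("per SIMD core", simd_cores),
                                 ("per thread at full occupancy", full_occupancy_threads)]:
                    print(f"{label}: {vram // n} B VRAM, {l2 // n} B L2")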

  • wayneftw 1617 days ago
    Any problems with AMD cpus and containers or virtualization?

    (Really? Wow. Sorry for asking a question!)

    • neogodless 1617 days ago
      I only have limited experience so far, but I was able to enable virtualization and run WSL on Windows 10 with a Ryzen 7 2700X with no obvious issues. I'm sure more in-depth, longer term container testing would be more useful to you, though!
      • jmkni 1617 days ago
        Same with a 3700x. Running an Android Emulator, Docker, and WSL2 on Hyper-V simultaneously with no obvious issues.
    • Bayart 1616 days ago
      No problem on my side of things running Docker at home on an R5 2600. I heard there were a few problems with IOMMU in the beginning, but those should be long gone. Considering AMD is getting into the server market pretty convincingly, I don't think you'll have issues on the sysadmin side.

      If you've got any problems, they're gonna be from dodgy BIOSes and half-compatible hardware, the kind of stuff you might get running an older AM4 motherboard.
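
      If you want a quick sanity check on a Linux box, the usual thing is to confirm the CPU exposes AMD-V ("svm") and that the kvm_amd module is loaded; a minimal sketch (nothing Ryzen-specific about it):

          # Check AMD-V support and whether the kvm_amd kernel module is loaded.
          cpu_flags = open("/proc/cpuinfo").read().split()
          print("AMD-V (svm) flag present:", "svm" in cpu_flags)
          print("kvm_amd module loaded:", "kvm_amd" in open("/proc/modules").read())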

    • pb82 1617 days ago
      I'm running Kubernetes inside a libvirt VM on a Ryzen 3600. No issues at all.
      • wayneftw 1616 days ago
        Thanks to you and others for your data.

        I look forward to building my first AMD rig in like 15 years or something! I got an RX580 GPU just sitting around, from when I planned to upgrade my 2012 Mac Pro to support Mojave (not worth doing I decided). Now I just need the rest of the machine!

  • tracker1 1616 days ago
    Been waiting for this for about a year now... I pulled the trigger early on an X570 build as my old system (i7-4790K) was acting up. I've been running an R5 3600, but I'm replacing the CPU with a 3950X.
  • fock 1617 days ago
    Can anyone comment on the IOMMU grouping on typical boards? From what I've just googled, the CPU "lanes" seem to support ACS, so it could indeed work to replace my IGP+GPU-for-the-gaming-VM Haswell system with a dual-GPU Ryzen one (contrary to what I believed previously).

    Have been eyeing TR for that reason, but as I don't really need this amount of I/O (and cores), I might be well served by AM4.

    • bootloop 1617 days ago
      I am finally running a Ryzen 1700 + dual GPU system so I could get rid of dual-booting:

      CPU: AMD Ryzen 7 1700

      Mainboard: GA-AX370-Gaming K5 (Bios: F25)

      GPU (Host): Gigabyte GeForce GTX 970

      GPU (Guest): Gigabyte GeForce RTX 2060

      OS (Host): Arch Linux

      OS (Guest): Win 10

      As said, it appears to me that IOMMU grouping depends a lot on the board used. For mine, it looks like all the chipset PCIe slots are in one group and the CPU PCIe slots (basically the 16x for the GPUs) are in two separate groups, so I run the two GPUs at 8x in each slot now. I was also able to pass through a USB 3.0 on-board controller, as it was also in its own group.
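
      For anyone who wants to check their own board, the groups can be read straight out of sysfs; a small sketch (requires a kernel booted with the IOMMU enabled):

          import os

          base = "/sys/kernel/iommu_groups"
          for group in sorted(os.listdir(base), key=int):
              devices = os.listdir(os.path.join(base, group, "devices"))
              print(f"group {group}: {' '.join(devices)}")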

      But there is the ACS override patch, which lets you pass through individual devices that share a physical IOMMU group (not only CPU lanes). I don't have personal experience with it, though.

      Warning: there are issues with the latest Ryzen generation and BIOS updates that break VFIO.

      • gravypod 1616 days ago
        Have you experimented with things like lookingglass? Is that stuff more streamlined now?
        • bootloop 1616 days ago
          Personally, I have no use for Looking Glass. To make it work you need a display connected anyway (or something simulating a connected display), otherwise it will not render to the shared memory.

          In addition to that, I use the VM mostly for fullscreen applications, either games or some Windows-only software, so I don't get any advantage out of the windowing.

          Instead I decided to take an auto-switching approach via CDD and some shortcut menus to switch GPU outputs on my main display, see in action here: https://gfycat.com/tepidliquidcero

    • stamps 1617 days ago
      I don't have the exact grouping on hand, but I'm able to run a Ryzen 2700x (8 core) with a 2080 (Windows VM GPU) and wx4100 (manjaro host GPU) all on a GIGABYTE X470 AORUS GAMING 5.

      USB passthrough with a StarTech USB PCI-E card was a bit problematic, as the group was not isolated. It turned out the 2080 has a USB controller for the Type-C port (I'm told for VR).

  • LatteLazy 1617 days ago
    7nm is about 35 times the diameter of a silicon atom.
    • Nokinside 1617 days ago
      "7nm" is a so-called 'commercial name', nothing physical.

      Millions of transistors per square millimeter (MTr/mm²) is a better comparison metric than the commercial name for the process. Here is a handy chart I copied from https://www.techcenturion.com/7nm-10nm-14nm-fabrication:

          Tech Node name  (MTr/mm²)
      
          Intel 7nm       (2??) 
          TSMC 5nm EUV    171.3
          TSMC 7nm+ EUV   115.8
          Intel 10nm      100.8
          TSMC 7nm Mobile 96.5
          Samsung 7nm EUV 95.3
          TSMC 7nm HPC    66.7
          Samsung 8nm     61.2
          TSMC 10nm       60.3
          Samsung 10nm    51.8
          Intel 14nm      43.5
          GF 12nm         36.7
          TSMC 12nm       33.8
          Samsung/GF 14nm 32.5
          TSMC 16nm       28.2
      • IanCutress 1617 days ago
        It's worth pointing out that the value for Intel's 10nm is on its high-density libraries, not its high-performance libraries, which are less dense.
        • kilo_bravo_3 1617 days ago
          Are there any current products one can buy, regardless of expense, using Intel's 10nm?

          I know there was one product, the Core i3-8121U, a very middling performance chip which saw an extremely limited release.

          But it was discontinued.

        • Filligree 1617 days ago
          "Libraries"?
          • KuiN 1617 days ago
            There's a lot of detail in a really in-depth (but very interesting) AnandTech article about Intel's 10nm process [0]. The basic idea is Intel have multiple distinct sets of logical unit designs at different densities which have different characteristics. It's like having multiple software libraries that do the same thing but have different performance etc. except it's a library of silicon building blocks. You trade-off speed with density (highest performance library uses the least dense circuits). Each chip's design can be made up of bits of silicon from these different libraries. Different densities also have different power consumption, so in cases it makes sense to build low power chips that have small high performance, power sucking parts that can be used sparingly.

            [0]: https://www.anandtech.com/show/13405/intel-10nm-cannon-lake-...

    • lm28469 1617 days ago
      It sure is, but 7nm in CPU lithography doesn't mean anything; it's pure marketing.

      https://en.wikichip.org/wiki/7_nm_lithography_process

      > The term "7 nm" is simply a commercial name for a generation of a certain size and its technology and does not represent any geometry of a transistor.

      • api 1617 days ago
        Yeah, since about 14nm the nanometer number has become kind of like what CPU clock speed (MHz/GHz) became in the early-to-mid 1990s, and for the same reason.

        Long ago you could say, e.g., that a 600MHz CPU was faster than a 500MHz CPU. Then things like instruction-level parallelism and other significant optimizations arrived, and you had lower-clocked CPUs that would in practice be quite a bit faster than higher-clocked CPUs. Today a 1.6GHz mobile chip is far faster than a 2.4GHz earlier-generation Pentium 4, for example, at least for most workloads.
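
        The mental model is just clock times instructions-per-clock; the IPC figures below are invented for illustration, not measured:

            def relative_perf(ghz, instructions_per_clock):
                return ghz * instructions_per_clock

            old_p4     = relative_perf(2.4, 1.0)   # hypothetical IPC for a Pentium 4-class core
            new_mobile = relative_perf(1.6, 3.0)   # hypothetical IPC for a modern mobile core
            print(new_mobile / old_p4)             # ~2x despite the much lower clock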

        My understanding is that chip fab processes have developed a similar number of devils in the details. It's not just about the smallest feature size but lots of other things like materials, transistor types, layout, power density, etc. I've heard that Intel's 14nm process is comparable to other fabs' 10nm processes, for instance.

        Still, this 7nm node is better than what Intel is currently shipping.

      • mc32 1617 days ago
        We need a new “PR rating”[1] system for feature size to make some sense of these numbers being bandied about.

        [1]https://en.m.wikipedia.org/wiki/Performance_Rating

    • Filligree 1617 days ago
      Of an individual silicon atom, apparently. I'd think the "grid size" of silicon crystals might be more relevant, and that's not necessarily the same number... Hmm.

      0.235nm between atoms, 1/30th of 7nm. (Though, precisely what does the 7nm number measure?)

      • andy_ppp 1617 days ago
        Nobody knows anymore; it's kind of a feature size, but everyone defines "feature" differently!
        • mantap 1617 days ago
          It seems like transistor density (N per unit area) would be a more useful measurement.
          • gruez 1617 days ago
            Isn't that the logic behind the "fake" nanometer measurements? Let's say everyone is on "32nm", but you've got a new process[1] that packs twice as many transistors as everyone else's with the same feature size. Essentially, your 32nm feature-size process is equivalent to everyone else's 22nm feature-size process. Are you going to advertise your new process as "32nm", which is the same as everyone else, or "22nm"?

            [1] eg. https://en.wikipedia.org/wiki/Multigate_device

          • andy_ppp 1617 days ago
            I mean, I'm no chip designer, but surely they are using some crazy hacks, like transistors that take up more space than the lines between them by being part of the substrate above and below the etched lines. And presumably they use interference/coherence to etch even finer details? There are probably about a million different techniques they use to pack in more transistors without making the process smaller.
    • BubRoss 1617 days ago
      That's interesting but to put it in perspective I need to know how it relates to a human hair or how many times it can circle the earth.
      • shantly 1617 days ago
        If I've used Wolfram Alpha correctly, a 7nm span of silicon circles the earth 5.4874958×10^-16 times at the equator.
        • codyb 1617 days ago
          I may be reading this wrong, wouldn’t a 7nm span of anything span the Earth 5.4874958x10^-16 times then?

          Or is this somehow different than a pound of feathers vs a pound of gold question?

          Edit: definitely glossed over the dead comment there this is in response to

      • dewey 1617 days ago
        Sorry but I only calculate in football fields.
      • hkjhreiou 1617 days ago
        I don't know, I find it more relatable if put in terms of how high a stack of CDs would be or how many times we can write the Library of Congress on it.
      • awareBrah 1617 days ago
        About the length your hair grows in one second
        • Koshkin 1617 days ago
          I am sure you meant one microsecond.
  • Tepix 1617 days ago
    Looking at some benchmarks done by PC Games Hardware it appears that games are still not capable of taking advantage of 16 cores and 32 threads. It's not surprising - why would developers optimize for something that's not yet widely used. But I wonder when we'll get there...
    • paulmd 1617 days ago
      Consoles determine how many cores game engines will be coded to use, and consoles are staying with 8-core processors for another generation.
  • rafaelvasco 1616 days ago
    Went with AMD with my latest build. Last time I went with AMD was back in 1999 with an AMD K6-2 500mhz.
    • ArlenBales 1616 days ago
      I think the last time consumers were swarming AMD for their CPUs was Athlon 64 X2 around 2005.
  • piinbinary 1617 days ago
    When GPUs became vastly more powerful over the last ~10 years, it made big neural nets practical. I wonder what an equivalent jump in CPU power will unlock.
    • pizza234 1617 days ago
      At this stage, generic computing is unlikely to get any more significant boost (under the assumption that generic computing is something not necessarily [highly] parallelizable); instead, we'll likely see specialized processors. Therefore, ultimately, the market/industry will decide what to accelerate - and the technology to support the acceleration.
    • Scarbutt 1617 days ago
      More electron apps.
      • Filligree 1617 days ago
        Sadly, I'm afraid you're right.

        I love my 1920X for making a modern desktop snappy. I hate Discord, chrome et. al for needing one.

        • udhbeeui 1617 days ago
          Max-spec KDE with ALL the batteries included, on a 12-year-old laptop with a new cheap Crucial SSD (even over the old UDMA-6 interface), is ultra-low-latency and very snappy.

          KDE is an excellent Mac replacement for power users.

      • adzm 1617 days ago
        That's more of a memory bottleneck though.
        • mantap 1617 days ago
          Yeah, indeed, Chromium is a more efficient 2D rendering engine than what's available natively, and the performance of JS is comparable to Java. Electron apps being slow is mostly a developer-laziness problem, not a technology problem. Making a fast Electron app is actually easier than making a fast C++ app IMO, when you take into account the libraries and tooling available.
          • mrec 1617 days ago
            I think the difficulty is in making a lightweight Electron app, not a fast one. Games run fast too, but they can get away with hogging resources because they're typically the only app you're using at the time.
          • megous 1617 days ago
            It's not more efficient than directly using skia or cairo. Not on ARMv7 anyway.
      • ralusek 1617 days ago
        Only if people make more use of worker threads.
        • dkersten 1617 days ago
          Not faster electron apps, just more. One per core.
    • lonelappde 1616 days ago
      The reason GPUs exist is because CPUs don't scale.
  • th-miracle-257 1617 days ago
    Why did Apple not release their new MBP 16 with the 3950X? [1]

    [1] https://news.ycombinator.com/item?id=21523780

    • 3JPLW 1617 days ago
      Because it's not a notebook chip? 105W is... a little toasty.
    • neogodless 1617 days ago
      Yeah, I don't think you need to be downvoted into oblivion for not knowing, but AMD mobile APUs are also not anywhere near this level. The top-end Ryzen 7 3780U is a 4-core multi-threaded part, using Zen+ rather than Zen 2, and the older Vega graphics. (And that's only available in a Surface laptop.)

      https://en.wikipedia.org/wiki/List_of_AMD_accelerated_proces...

    • Someone1234 1617 days ago
      They're Intel based mobile systems. The 3950X is a desktop class AMD CPU.
    • HHad3 1617 days ago
      In addition to the points already mentioned, a good bit of Apple's software depends on Intel-proprietary features, e.g. QuickSync for video encoding. So far, no macOS system ever shipped with an AMD CPU, and porting the OS and user-mode application stack is not trivial.
      • wmf 1617 days ago
        There must be Macs like the iMac Pro and Mac Pro that don't have Quick Sync.
      • tachion 1617 days ago
        You will find people using Hackintosh machines with AMD CPUs just fine (as fine as is possible for a Hackintosh, that is).
    • ErneX 1617 days ago
      It's a desktop chip, and AMD even recommends water cooling it.
  • mtarnovan 1616 days ago
    I'm very curious how this CPU performs for Elixir compilation.
    • Bayart 1616 days ago
      Compilation should benefit pretty linearly from more cores thrown at it. Considering the Erlang VM is very concurrency-conscious, it should be a pretty natural fit.

      But I'm not at all in on the intricacies of it; are there any particular CPU features (instruction sets or modules) it's known to take advantage of?

      • mtarnovan 1616 days ago
        You mean like vectorized instructions and such? I don't know, but if I had to guess I'd say no.

        Anyway, I'm sure there are some compilation phases that won't benefit from more cores, but the bulk of the compilation seems to. Even though Elixir has incremental compilation, due to the use of metaprogramming in the Phoenix framework sometimes even changes to a single file will trigger recompilation of hundreds of others, so I was thinking these new AMD CPUs with lots of cores could be very helpful here.

        • Bayart 1616 days ago
          Try asking around Stack Exchange or wherever; I'm certain some people have been running the BEAM on second-generation Threadripper chips (this one being pretty much just a better 16-core Threadripper on a consumer socket), since they've been so popular for productivity/compilation.
    • lliamander 1616 days ago
      Indeed, the only compiler benchmarks one typically sees are for C/C++. But lots of developers use other languages, and it would be nice to have a cross-section, because each language is so different.

      BTW, how much does elixir benefit from multiple cores for compilation?

  • seminatl 1616 days ago
    I don't really get how they reached their conclusion. It seems like on most of the tests this new part gets beat by a cheaper one from Intel. It seems like a kinda unfair approach to use handbrake without AVX-512 support? Also not sure why they include the 3-d particle thing without AVX ... I guess because Intel is just too fast on that?

    If you look through the results, the things most people want to do with a computer, like browse the web and start their applications, are noticeably faster with the Intel i9-9900K at half the price. And the only game where the CPU makes a difference in these benchmarks is also a lot faster with the Intel part.

    • linkgoron 1616 days ago
      You could just buy a CPU for a third or a quarter of the price if you want to just browse. You don't need a $750 16-core CPU. You can buy a $200 9600KF or a $240 3600X, or something even cheaper. For anything multi-core the 3950X blows the 9900K out of the water, and for 1440p gaming the 3950X trades blows with the 9900K. If you want 1080p gaming, or web browsing, this is not the CPU for you. That said, even for 1080p gaming and single-threaded benchmarks the 3950X is usually within a few percent of the 9900K.
      • seminatl 1616 days ago
        Ok but if it’s “trading blows with” and “within a few percent of” a CPU costing half the money, why is it rational to prefer it?

        I know why I want AMD parts and that is because of ECC memory support but this article doesn’t even mention it.

        • linkgoron 1616 days ago
          First of all, the 9900K doesn't cost half the money. Second, you can also buy a 3900X if you want 12 cores with similar ST characteristics and pricing closer to the 9900K. There are other options.

          If you want to game @ 1080p and do mostly single-threaded work (without parallelism), this CPU is not for you. If you do a lot of single-threaded jobs in parallel (for example, gaming + being a server, or gaming and streaming) or anything multi-threaded, this CPU destroys the 9900K, while being just a bit weaker when doing only one or two single-threaded tasks.

          • seminatl 1616 days ago
            Look I'm not arguing about which CPU is better. I'm arguing that taking the geometric mean of a bunch of completely unrelated benchmarks doesn't make any sense. There are clearly workloads in this very article where a cheaper part is better, sometimes even a cheaper AMD part.
            • linkgoron 1616 days ago
              And the article states clearly: "[the Core i9-9900KS] does pull out ahead in a number of ST tests as well as in low resolution (CPU-bound) gaming."

              Regarding the geomean, they clearly state that their benchmarks prioritize multi-core benchmarks: "This metric also puts the 3900X above the 9900KS, because despite the 5.0 GHz all-core on 8-cores, moving to 12-core and 16-core at almost the same performance per core gives more of an advantage in our test suite's MT-heavy workloads."

              They're not hiding behind anything. Everything is clearly stated.
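
              (For reference, the aggregate score being argued about is just a geometric mean of per-test results, roughly like this with made-up numbers:)

                  from math import prod

                  def geomean(scores):
                      return prod(scores) ** (1.0 / len(scores))

                  # Relative scores vs a baseline CPU across unrelated tests (placeholders).
                  print(geomean([1.30, 0.95, 1.10, 0.85]))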

    • gameswithgo 1616 days ago
      AVX2 performance between the new AMD CPUs and Intel is comparable. I've done some detailed comparisons with my own code that is tuned for Intel, and the Ryzen is quicker in some places, slower in others.
      • seminatl 1616 days ago
        AVX-512 is not AVX2.
        • gameswithgo 1616 days ago
          I'm pretty well versed in the SIMD instruction sets, thank you. You might surmise that from the fact that I'm writing and benchmarking SIMD code....

          I was responding to:

          "Also not sure why they include the 3-d particle thing without AVX"

          in which the OP was not clear which AVX instruction set they were referring to. It seemed to be a different benchmark from the one which had AVX-512 disabled, due to the word "also".

          • seminatl 1616 days ago
            So aside from your irrelevant comments about AVX2 being about even between the two parts, what is a reasonable explanation for the second chart on page 4 of the article, assuming you can bother yourself to go read it? Their analysis seems to be essentially that if the Intel part weren't 300% faster, then the AMD part would be 15% faster. But it is a fact that the Intel part is really three times faster on this benchmark, because of AVX-512. The only reason to include the chart with AVX disabled is to fit the article into a prejudiced narrative.
    • hajile 1616 days ago
      AVX-512 severely downclocks all cores of a processor.

      If I'm using some of my cores to run AVX-512 code in the background, it's going to drastically slow down everything else I'm doing.

      This can be a significant issue even if you are only running the one task. If you have intermittent AVX-512 code, any time any one of the cores starts running that code, everything else slows down too, which can result in a net loss in performance. Even if you only ran a single core and it used AVX-512, this could still be the case: it takes precious time before and after to lower and raise the clocks that much, and everything else runs slower for that whole time, in addition to running slowly while the AVX code executes.
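
      As a toy model of that trade-off (every number here is invented for illustration; real frequency offsets vary by chip and instruction mix): if the vector kernel is only a small slice of the work but the whole run sits at the lower AVX-512 clock, you can come out behind.

          base_ghz       = 4.7   # hypothetical all-core clock without AVX-512
          avx512_ghz     = 3.8   # hypothetical clock while AVX-512 is in use
          vector_speedup = 2.0   # hypothetical per-clock gain on the vector kernel
          vector_share   = 0.10  # fraction of the work that is the AVX-512 kernel

          scalar_time = 1.0 / base_ghz
          mixed_time  = (vector_share / (avx512_ghz * vector_speedup)
                         + (1 - vector_share) / avx512_ghz)
          print("net speedup from AVX-512:", round(scalar_time / mixed_time, 2))  # < 1.0 here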

      • seminatl 1616 days ago
        None of your statements are correct.

        1) Intel has per-core frequency and voltage control. To the extent that AVX-512 inhibits higher clock speeds that effect is local to the core or cores on which AVX-512 is active, and only those cores.

        2) Not all AVX-512 instructions have this effect. `vpermb` for example does not.

        3) Some other AVX instructions that are not AVX-512 have this effect. Also anything else that makes a core hot has this effect.

        4) Most importantly, Cannon Lake microarch and later don't have this effect.

  • cracker_jacks 1617 days ago
    Where is the 9980XE @ $979 coming from in the price vs performance chart? Where can I get a 9980XE for $979??
    • stagger87 1617 days ago
      Intel recently announced price cuts for the 9th-gen Skylake-X series processors to match the upcoming 10th-gen versions. I don't think resellers/vendors have caught up yet; this was only a few weeks ago. They pretty much cut the price in half for all of the processors in this category. I haven't seen them available at these prices anywhere yet.
    • neogodless 1617 days ago
      https://ark.intel.com/content/www/us/en/ark/products/189126/...

      Seems like it should be $1979 to match Intel's suggested pricing. (Retail looks like it's $1949 on sale, and typically even higher.)

      • IanCutress 1616 days ago
        Intel has announced the Cascade Lake-X i9-10980XE coming out later this month for $979. It's slightly higher in frequency.
        • neogodless 1616 days ago
          Right - I didn't notice a footnote. I just saw your comment / reply on the article. It's a fair label, just wasn't clear to me!
  • ArlenBales 1616 days ago
    I wish the reviews that include gaming benchmarks had included Monster Hunter World. Capcom's MT Framework game engine is extremely capable at multi-threading.
  • davidy123 1617 days ago
    Slight meta, and part of a larger 'rant,' but when are we going to get away from reviews that might as well have been printed on glossy pages in PC Magazine in 1986? Anandtech has been at it since the 90s and hasn't changed their format at all. This is a serious decades-long stagnation of the web. The graphs should be dynamic (able to choose which scenarios and components to compare, and able to search within them) and user-contributed. Instead we get feeble excuses like "it doesn't make sense to compare a two-year-old generation to this one." Well, yes it does, if I'm considering an upgrade.

    Only a few sites support these options. Storage Review was an early leader but hasn't moved much; Notebookcheck is another, and of course Phoronix.

    • wtallis 1617 days ago
      > Instead we get feeble excuses like "it doesn't make sense to compare a two-year-old-generation to this one," well yes it does if I'm considering an upgrade.

      You don't get that excuse from AnandTech. We do our best to keep a long history of benchmark data for users to peruse: https://www.anandtech.com/bench/CPU-2019/2224

      The main limiting factor on how far back our benchmark database goes is software updates. When we have to update the OS or CPU microcode for Spectre, Meltdown, etc., or update GPU drivers, that invalidates results, and re-testing a large pile of older hardware takes a long time. Historically this has mostly been a problem for GPUs since their drivers are such a moving target, but the past two years of CPU vulnerabilities have been a hassle.

      • davidy123 1617 days ago
        That's great to see, thanks. I thought I had asked about it some time ago to no avail. Still, if the interest is moving the front forward, making it available as cc-by would be nice, so people could use and contribute their own analysis around it.
    • uluyol 1617 days ago
      Few figures benefit from being interactive. I'm confused as to why you'd want user-contributed results. Getting a benchmark setup to be fair takes effort, and you cannot simply take numbers measured using different CPU/memory/OS-version/etc. combos and draw conclusions from them. The fact that component combos are curated is a feature.

      Personally, all I want from Anandtech is better editing and more articles.

    • whatshisface 1617 days ago
        Dynamic JS graphs sound like a pain; I'm not a professional user of this specific website, and I don't want to learn a new dynamic interface for every website I visit. Sometimes I just want to look at a chart. User-contributed data would be of limited value on a website whose specific value proposition is that they know more about benchmarking than the average Joe. The pagination is annoying, though.
      • driverdan 1617 days ago
          It's really not. Generating them involves passing a collection of data to a script, and images can be generated server-side for people who disable JS.
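
          A minimal sketch of the server-side variant with matplotlib (the scores are placeholders):

              import matplotlib
              matplotlib.use("Agg")            # render to a file, no display needed
              import matplotlib.pyplot as plt

              results = {"CPU A": 123, "CPU B": 98, "CPU C": 145}   # placeholder scores
              fig, ax = plt.subplots()
              ax.barh(list(results), list(results.values()))
              ax.set_xlabel("score (higher is better)")
              fig.savefig("benchmark.png", bbox_inches="tight")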
    • Dunedan 1617 days ago
      > Anandtech has been at it since the 90s and hasn't changed their format at all. This is a serious decades-long stagnation of the web.

      Aside from the quality of their content, their format is something I really like about Anandtech. Is it really stagnation when you have figured out a format which works and stick to it?

    • yifanlu 1617 days ago
      > and user contributed

      There's been a lot of debate in the enthusiast community, but reviewers believe that user benchmarks don't have much value. There's so much variation in software, cooling, RAM speed, GPU speed, etc. Even misconfigurations like different background apps running can skew the results.

      However, for a processor that's been in the market for a while, I think userbenchmarks is a good site to look at the aggregate data. Their rankings were recently updated to disfavor AMD chips, so don't take those too seriously. But for head-to-head comparisons of processors with a lot of users, you can get a good idea of how much faster a processor is.

      However, I disagree that review sites should consider "user data" because 1) these are new processors and people who read these reviews are usually early adopters who want to make a buying decision and 2) the testing setup and methodology is a time consuming and scientific process and shouldn't be discounted by just asking random people to run an app.

    • lettergram 1617 days ago
      Not trying to be snarky, but:

      If you think that it'd be better, why don't you make it?

      I suspect the current format is the way it is because it’s easier and “good enough”. I feel if you reach out to Phoronix you might get a response.

      • davidy123 1617 days ago
        > If you think that it'd be better, why don't you make it?

        Because I am already quite busy on other projects.

    • bransonf 1617 days ago
      I mean, there are plenty of options to compare new cpus to old ones.

      0. CPUBoss 1. CPUBenchmark 2. UserBenchmark

      And if you’re considering an upgrade, it’s almost always the case that you should be comparing latest generation offerings to arrive at a purchase decision.

      Don’t call the web stagnant because they decide not to flood the page with JS libraries.

      • ihattendorf 1617 days ago
        But if you're deciding whether or not to upgrade, a comparison between your current hardware and newer hardware is relevant.
        • bransonf 1617 days ago
          That’s why I suggested the first three websites.

          The role of Anandtech/TomsHardware and the like is not to do this. It's to give news about new CPUs and where they fit in current offerings. They are news sites, after all.

      • solotronics 1617 days ago
        CPUBoss sucks. It doesn't have chips from the past 4 years in it.
      • IanCutress 1616 days ago
        www.anandtech.com/Bench for real world benchmark data
    • joaobeno 1617 days ago
      There are huge selections of manufacturers, speeds, power levels, capacities, and so on... Thus, it is hard to simply let you pick and choose... They pick some parts, limit external factors that could influence the benchmarks, and show you the metrics so you can make a somewhat more informed decision...
    • bluegreyred 1617 days ago
      The German site computerbase.de does this. They also do crowdsourced benchmarks for various applications and video game titles. I find that their articles are some of the best on the web, though you'll need machine translation to read them.

      case in point: https://www.computerbase.de/2019-11/amd-ryzen-3950x-test/3/#...

      You can press the button at the top right of the graph to show/hide other, less relevant entries. Click an entry to lock it in for relative comparison. The top dropdown menu lets you choose between multicore, single-core and application-specific results.

      • davidy123 1617 days ago
        It's funny how the German sites tend to be so much better at this. I don't follow computerbase, but notebookcheck has some of the most thorough and consistent reviews, and they allow cross-device comparison. They cover news now, too, though I consider that a drawback since it's less focused.

        There are a very large number of review sites, Anandtech being one of the first, presumably all following the same sustainability formula; but despite all that effort, consistency, thoroughness, built-in tools, and building on aggregated output are the exception.

    • puranjay 1617 days ago
      On the content side, it isn't financially beneficial to invest in higher quality content or presentation. Most websites that aren't outsourcing their content to $10 writers on UpWork are already running on razor thin margins. Outside of a handful of big name publishers (the NYTs and Economists of the world), paid subscriptions are unviable for most.

      The lack of innovation on the web can be directly attributed to the lack of advertising money on the web - outside of Google and FB, of course.

    • localhost 1617 days ago
      Jupyter could form the basis of an interesting publishing pipeline for this kind of a review using various Python viz libraries. For user-contributed content, it would be interesting if you could "fork" an existing publication and add your own color to it. But somebody would need to pay for the compute and storage for this to work, as well as enable the kind of advertising / layout requirements that drive the business side of sites like Anandtech.
    • xondono 1617 days ago
      User-contributed data is not a good idea.

      One of the sad realities is that even in the enthusiast market a lot of people don’t know/don’t care to tune and test their system, and there’s a lot of traps that would make the results useless.

      There's some of it that you can check and verify, but things like water cooling or choosing the appropriate RAM configuration and timings will have a big impact.

      • davidy123 1617 days ago
        I think this is solvable through user reputation.
        • mmmrk 1617 days ago
          Reputation is not an objective measure.
          • davidy123 1617 days ago
            It's not, but it's a measure, like money, that can help sort high-quality/relevant contributions. Phoronix / OpenBenchmarking.org provides an example of how many technical people can provide benchmarks for many component and software combinations; in the mainstream, reputation would help enable even broader views.
    • dageshi 1617 days ago
      They don't do it because it won't lead to any additional page views but will require extra effort.
      • davidy123 1617 days ago
        This is probably the real answer. No profit innit. Having been through a few revolutions, there are advantages and challenges we won't know until we get there. Open it up so that users are encouraged to participate, rather than just using their computers to decide what new gaming system to build, with "participation" being a ten-comments-per-page system designed to support throwaways.
    • xbkingx 1616 days ago
      I had a side job as a tech reviewer ~15 years ago and I can tell you why this is never going to happen. It's a classic case of "works in theory, but impossible in practice." What you're actually asking for is a database of every permutation of hardware, hardware revision, firmware/BIOS, benchmarking software, benchmarking software version, driver version, OS, OS patch, user selectable feature state, etc. is tracked and tested. It's a nearly infinite data set size, and you'll still miss things like ambient room temperature/humidity for CPU thottling, that one finicky USB device you spilled water on that sometimes disconnects randomly, and specific workflow quirks.

      "Objective" benchmarks were an almost tractable problem 15-20 years ago, but with the way modern OSes run background tasks, access network resources, and perform self-maintenance, it's even more difficult to bridge the synthetic-to-real world divide.

      So, you select a point of reference (e.g. - new GPU), you choose the most common system components and a selection of components that tell a story (usually "if you're building a new PC now, here are the options"), you assemble the current versions of all software/drivers, and run your tests x times. The good sites will have "sub-stories," like specific workflows or new/changed features, but once you get to around 4 of these, you start hitting too many permutations to clearly communicate the significance of your choices.

      Even if you crowdsourced it, it's A) extremely difficult to verify the integrity of results and prevent manipulation from marketing departments or brigading, and B) an unrewarding, tedious process where the best practice is to leave the machine untouched for hours. Most of the crowdsourced benchmarking sites are set up to be little competitions and sanity checks when overclocking. For a pure review of shipping hardware/software, you run the benchmark and you're done. It isn't particularly fun, requires invasive cataloguing of system specs, and isn't very new-user/first time builder friendly.

      Dynamic graphs would be nice (I love them), but I suspect many review sites have run the numbers and found they reduce overall web metrics. Many (I would argue most) people don't notice interactive page components, aren't interested enough to turn their quick article skim into a deep dive, or are reading in less optimal settings (e.g. - on phone on the toilet). Instead of getting 7 page views for 1 minute each with separate graphs, you're getting 1 page view for either 1 minute (probably 70%) or 10 minutes (probably 30%) with a dynamic graph.

      I'm a total data junkie and I WISH there was a good solution to these problems, but there isn't a practical implementation that I've seen or dreamt up that doesn't have enough variance to render the point of having such fine-grained data moot.

    • pg_is_a_butt 1617 days ago
      Is the larger "rant" that you are funded by Intel, and their market dominance is drying up, and you find yourself attacking reporters as a last ditch effort?