18 comments

  • nfriedly 2439 days ago
    > Intel’s big aim with the new processors is, as always, to tackle the growing market of 3-5+ year old devices still being used today, quoting better performance, a better user experience, longer battery life, and fundamentally new experiences when using newer hardware. Two years ago Intel quoted 300 million units fit into this 3-5+ year window; now that number is 450 million.

    Yep, Intel's problem is that most folks don't need a new CPU, especially for a computer that's always plugged in.

    I'm refurbishing a 6-year-old system with a Pentium E5800 for a friend, and initially it felt dog slow. However, once I swapped the mechanical hard drive for a solid-state disk, it instantly felt like a zippy little machine. It already had enough processing power for everything they wanted (browsing, office, YouTube, etc.).

    • bluedino 2439 days ago
      >> It already had enough processing power for everything they wanted (browsing, office, youtube, etc.)

      Today's JavaScript-packed web pages and HD YouTube content are pushing people to upgrade from their Core 2 Duo and early i5 machines.

      • zappo2938 2439 days ago
        My $130 Lenovo Chromebook runs everything on the web, such as the Cloud 9 IDE. The one exception is Facebook; that doesn't work.
    • jjjsdf87777 2439 days ago
      The big grief I have with "general computing" platforms is their insistence on sticking with the traditional form factor.

      ATX, ITX, PCIe, DDR... outdated, over-the-top, clunky designs for most people.

      Take a Mac Mini-like design, and make modules that can stack or otherwise attach to expand capabilities. IMO, this is what Apple should do and be done with the whole "But Mac Pro users ...!"

      A Project Ara-like desktop, in both size and modularity, would probably offer more than enough computing power for most users (browsing, office, YouTube).

  • vladimirralev 2439 days ago
    Still LPDDR3, with a 16GB RAM limitation. What an embarrassment; all phone SoCs today use LPDDR4(X), and technically they support more RAM than Intel's desktop CPUs do.
    • jrs95 2439 days ago
      Does anyone know if Ryzen will support LPDDR4 in its mobile chips? I tried Googling around for it but couldn't get an obvious yes or no.

      Seems sort of unlikely, but if they do support it a lot sooner than Intel, that would be a big win for AMD.

      Even more unlikely would be a Ryzen powered MacBook Pro with 32GB of LPDDR4 RAM...but I'd be willing to pay a lot of money for that. I know Apple tends to prioritize single core performance on their own chips, but almost all of the desktop software their pro users are using would run better on Ryzen than on Intel's current offerings.

      Plus, with Intel's Iris Pro gone, Ryzen might allow them to have better integrated graphics, and bring back a 15" model with no dGPU.

      • compuguy 2435 days ago
        Wait, they aren't making chips with Iris Pro graphics anymore? That seems like an odd decision.
    • matwood 2439 days ago
      That's a bummer. The upside is that I will continue to have no reason to upgrade my current MBP.
  • mrmondo 2439 days ago
    Good to see lower power usage but the one thing I still feel is missing is widespread support for ECC on the desk(lap)top.
    • virgulino 2439 days ago
      Funny thing is, back in the days of the PC XT (8086/8088), AT 286, and 386, all (or most?) computers had parity checking (9-bit RAM). I'd much rather have a halt on error than silent corruption.
    • tammer 2439 days ago
      Curious what your use case is that entails ECC? Are you currently being held back without it?
      • throwasehasdwi 2439 days ago
        It's insane that we're still using systems without ECC RAM. As memory cells shrink, bit errors get progressively more common. And the more memory you have, the greater the chance of corruption, of course.

        Literally everything else that holds "data" has been using some form of error detection or correction forever: hard drives, SSDs, USB flash drives, file systems, databases, even network packets. Even HDMI uses error correction, and how important is momentary pixel corruption on a screen?

        It's totally insane that we're not using ECC with such large amounts of RAM built on tiny processes. It's definitely just a cartel artificially maintaining a situation that's bad for everyone not selling server chips.
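
        The error-correction idea itself is simple enough to sketch. Here's a toy Hamming(7,4) code in Python, purely for illustration; real ECC DIMMs implement SECDED over 64-bit words in the memory controller hardware, not in software.

        ```python
        # Toy Hamming(7,4): 4 data bits protected by 3 parity bits.
        # Any single flipped bit in the 7-bit word can be located and fixed.

        def hamming74_encode(d):            # d: list of 4 data bits
            p1 = d[0] ^ d[1] ^ d[3]
            p2 = d[0] ^ d[2] ^ d[3]
            p3 = d[1] ^ d[2] ^ d[3]
            # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
            return [p1, p2, d[0], p3, d[1], d[2], d[3]]

        def hamming74_correct(c):           # c: list of 7 codeword bits
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity over positions 1,3,5,7
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity over positions 2,3,6,7
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity over positions 4,5,6,7
            syndrome = s1 + 2 * s2 + 4 * s3 # 0 = clean, else faulty position
            if syndrome:
                c[syndrome - 1] ^= 1        # flip the faulty bit back
            return [c[2], c[4], c[5], c[6]] # recover the 4 data bits

        data = [1, 0, 1, 1]
        word = hamming74_encode(data)
        word[4] ^= 1                        # simulate a single bit flip
        assert hamming74_correct(word) == data
        ```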

        • astrodust 2439 days ago
          Exactly. A "one in a billion" event now happens with great regularity on a system with over a hundred billion bits of memory.
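
          To put rough numbers on that (the per-bit rate here is invented for illustration; real measured DRAM error rates, e.g. in Google's study, are quoted per Mbit-hour and vary wildly between DIMMs):

          ```python
          # 16 GB of RAM expressed in bits, times a hypothetical
          # "one in a billion per bit per day" event rate.
          bits = 16 * 2**30 * 8
          p_per_bit_per_day = 1e-9
          expected_flips = bits * p_per_bit_per_day
          print(f"{bits:,} bits, ~{expected_flips:.0f} expected flips/day")
          ```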
      • copx 2439 days ago
        Integrity of your data. Without ECC data in memory can become corrupted at any point.

        It's usually just a single bit, but say you are working with images: do you care if a single pixel changes its RGB value because of a memory error? Or a character in the metadata?

        I do.

        Unfortunately, there is a hardware cartel which deliberately limits ECC to enterprise/server products so that they can inflate the price and their profit margins.

        ECC RAM is more expensive to manufacture than non-ECC RAM, but the price difference would be fairly minimal if ECC RAM were used everywhere, as it should be.
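
        To make the image example concrete, here is what a single flipped bit does to an 8-bit channel value; the damage depends entirely on which bit it hits:

        ```python
        # One bit flip in a mid-grey (128) channel value.
        r = 0x80
        low_flip = r ^ 0b00000001    # lowest bit: 129, imperceptible
        high_flip = r ^ 0b10000000   # highest bit: 0, grey becomes black
        print(low_flip, high_flip)
        ```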

        • devwastaken 2439 days ago
          Also, any kind of file that is easy to corrupt into a non-decodable state, such as a binary save/config file, or any sort of file conversion or transfer. The data you transfer from location to location always passes through memory, and when converting that data to another format, it may not be possible to validate the destination format against the original data.

          Hypothetically, even with hash checks when transferring files: if the chunk of data read from the source file changes in memory, that corrupted data will be used to calculate the hash sum and will also be written to the destination file, so the hash sum would match the destination file anyway. You could also get a wrong hash sum and conclude the transfer failed when it didn't.

          Really when memory can just 'change', anything can happen and there's no real good ways to get around it. ECC memory should just be everywhere.
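
          The hash scenario above can be sketched in a few lines (the payload and flip position are invented): if the bit flips in RAM after the chunk is read but before it is hashed and written, the checksum validates the corrupted copy.

          ```python
          import hashlib

          source_chunk = b"important payload"
          in_memory = bytearray(source_chunk)
          in_memory[3] ^= 0x40                 # simulated bit flip in RAM

          written_to_dest = bytes(in_memory)   # corrupted bytes reach the disk
          transfer_hash = hashlib.sha256(in_memory).hexdigest()

          # Verification against the destination passes...
          assert hashlib.sha256(written_to_dest).hexdigest() == transfer_hash
          # ...even though the destination no longer matches the source.
          assert written_to_dest != source_chunk
          ```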

      • magicalhippo 2439 days ago
        Well I just had data corruption on my Intel NUC due to a stick of RAM failing. Had I had ECC the fault would most likely have been spotted right away. Instead it dragged on for a few months. Glad I kept multi-month backup sets.

        Firefox crashed every now and then, but that's not something which raised any flags with me. Other than that the box seemed just fine. Then one day I couldn't boot anymore as the filesystem had been severely corrupted.

        Ran memtest86 and sure enough, a span of addresses invariably generated errors in all tests.

      • TorKlingberg 2439 days ago
        This shouldn't be downvoted; it's a fair question. Just a few years ago ECC was widely considered an unnecessary belt-and-suspenders feature that made enterprise hardware expensive. I guess the general perception changed with the Rowhammer attack.
        • MichaelGG 2439 days ago
          I don't think so. Well before Rowhammer, Google published their paper showing the high amount of memory errors they get.

          What's changed is higher memory densities, making it even more important.

        • Dylan16807 2439 days ago
          It only made things expensive because of low, segmented volume. Otherwise it's a ~10% bump on RAM cost and free on everything else.
      • madez 2439 days ago
        Is the lack of a secure door on your property something that holds you back? I doubt it. Still, secure doors are important.
        • varjag 2439 days ago
          Do you have blast doors installed at home?

          Anyway, I'm not sure how exactly ECC relates to security. Is there any specific attack vector where it helps?

          • madez 2434 days ago
            A blast door protects against explosions, a very unlikely scenario that will happen to practically nobody. ECC protects against bit flips in RAM, which happen significantly often ( https://www.cnet.com/news/google-computer-memory-flakier-tha... )

            Your comparison is inadequate.

          • baq 2439 days ago
            Rowhammer, though apparently it can work around ECC.
  • zachruss92 2439 days ago
    I'm kind of disappointed in this. While they are upping the core count, the overall clock speed is being decreased across the board. This means that single-threaded processes will theoretically perform more slowly (I know it still turbos up).

    Honestly, I just upgraded to Ryzen from a 3770k. My 3770k ran all cores at 4.2GHz (overclocked, obviously), and the only reason I upgraded was because I wanted to move to NVMe and DDR4. That chip was 4 years old and I had no CPU-bound performance issues. I really think Intel needs to start innovating more rather than being complacent, or AMD is actually going to steal the show.

    Super happy for the competition though!

    • vith 2438 days ago
      > I know it still turbos up

      Is the base clock relevant in any interesting situation? As far as I know it's just telling you what to expect if you disable Turbo Boost in the BIOS.

      The frequency at idle should be lower through SpeedStep, and the frequency during load should be higher through Turbo Boost.

      If there are thermal limits preventing the maximum turbo frequency from being reached at all times, I still wouldn't expect the average frequency to be related to the base clock. It should be more or less bound by how insufficient the cooling system is. I think even the instantaneous clocks can fall somewhere between the various listed frequencies under throttling conditions.

      Also note that the max turbo frequency is different depending on how many cores are loaded.

  • tachion 2439 days ago
    I wonder how companies like Apple, which have quite stagnant and stable release cycles (compared to other brands), will handle that situation. Does it mean their customers will have to sit on 'old' CPUs again for another generation or two? The latest MacBooks were released ~80 days ago and their release cycle is ~300 days on average. Obviously I wonder because I was about to order a new Apple machine for myself, and now I'm not sure if I shouldn't just wait a bit longer (the same problem over and over again).
    • twoodfin 2439 days ago
      Considering how recently the Apple Kaby Lake bump was, and that the 8th generation Coffee Lake parts for the "real" Touch Bar Pros won't be released for several months, I'd be shocked if this wasn't one of the better times to buy.
    • nik736 2439 days ago
      I don't think the regular MacBook will get these chips. These are the ones for the 13-inch MBP entry-level models.
      • tachion 2439 days ago
        I thought of MacBook as of 'MacBook family' and not as 'The MacBook 12"'. To be more precise, when I said I wanted to buy a new machine, I was thinking about The MBPR 15".
    • stephens_chris 2439 days ago
      After seeing those internal emails from Microsoft regarding the failures they had on Surface products, caused by problems in the then-recently launched Skylake chips, my guess is Apple is just fine with their delayed release schedule.
      • sliken 2437 days ago
        Er, if you read those closely you'll realize that it was Microsoft's fault and they were trying to blame Skylake. No other manufacturer had problems like Microsoft's with the same chips.
  • sundvor 2439 days ago
    Oooh moving to a baseline of 4 cores; this means we'll see quad core Lenovo X1 Carbons / Ultrabooks soon. :)

    I for one am excited.

    • cm2187 2439 days ago
      Even if you don't use multi-threaded applications much, the doubling in L3 itself should help a lot.
    • jws 2439 days ago
      Maybe I'll finally be able to buy a Mac Mini that is faster than the 2012 model I use for grinding up data.

      Edit: maybe not. Looks like the Mini uses about a 45-watt CPU and these are the 15-watt line. Oh well, it's been 1040 days since the last update (downdate? Maybe that is the term for a product update that releases a slower computer). I can wait for the 45-watt CPUs. Probably; I am past my half-life.

      (The current 2 core and 4 core processors that make sense in a Mini have different footprints, so Apple just did the two core to keep costs down. There hasn't been a quad since 2012.)

    • 013a 2439 days ago
      I believe these are also the chips that will come to the 13" MBP w/o TouchBar. We'll see how the thermals look, but call me excited.
    • throwaway613834 2439 days ago
      Do you happen to have any idea how long we'll have to wait for that? Might they be out in a month, or might it take a few more?
      • sundvor 2439 days ago
        I'm guessing January next year with February availability, going by previous releases. (This is purely my own speculation, considering CES/previous releases.)
        • throwaway613834 2439 days ago
          Thanks! :) Do you think this would be the case for the first quad-core ultrabooks/notebooks as well, or just these particular Lenovo products would take that long? I haven't been keeping up with CPU releases so I don't recall how long it takes for them to reach the portable market...
    • KitDuncan 2439 days ago
      I am really looking forward to seeing how next-generation Intel ultrabooks compare to the upcoming Ryzen APU laptops.
    • bhouston 2439 days ago
      But look at how low the clock speeds are. Crazy slow chips.
  • samstave 2439 days ago
    What does the term "lake" represent in this family of CPUs?

    Apparently asking this makes me an idiot to some... while I'll admit to simple laziness...

    I assume that it ties the technology together as a code name for this family of procs, but in the case of "lake" they use it across multiple differing technologies...

    So was curious if it meant something else non-obvious to me.

    • wmf 2439 days ago
      It doesn't mean anything; Intel has a bunch of unrelated products that all have lake codenames.
    • tedunangst 2439 days ago
      Similar microarch.
    • mkbnnh 2439 days ago
      Are you an idiot?
  • Zekio 2439 days ago
    Wait, isn't 7th gen already essentially a refresh of 6th gen?
    • Synaesthesia 2439 days ago
      Yeah, Intel isn't following the tick-tock pattern anymore; they're doing more revisions on the same node, and some of the revisions are slighter. AnandTech had an article on it a while back.
      • stephens_chris 2439 days ago
        Correct. About a year and a half ago, Intel announced they were ditching tick-tock for a three step model of process-architecture-optimization.
    • vladimir-y 2439 days ago
      Sure, but this time AMD made Intel show real progress with 8 cores ULV CPU (yes, show, as I guess Intel had it ready-made but was not going to release it for the time being). Before this year Intel didn't have to show real progress, as there was a near-monopoly on the CPU market.
      • vladimir-y 2439 days ago
        > with 8 cores ULV CPU

        4 cores obviously

  • Aardwolf 2439 days ago
    How about some consumer desktop ones with ECC RAM support?
    • mamon 2439 days ago
      Never gonna happen. ECC is a "pro" feature for Intel, reserved for Xeons only. They have to justify the high price of Xeons somehow.
      • chx 2439 days ago
        You mean server chips. The server Atoms also support ECC, e.g. https://ark.intel.com/products/97927/Intel-Atom-Processor-C3...
      • sborra 2439 days ago
        If I'm not mistaken there are a few Pentiums and i3 chips that support ECC. For example the G4560.
      • TazeTSchnitzel 2439 days ago
        Even AMD think of it as such. Ryzen doesn't have it disabled on the desktop, but AMD haven't validated it either.
        • agumonkey 2439 days ago
          Didn't AMD employees confirm it a few times on the web already?
          • onli 2439 days ago
            ECC works with AMD if the motherboard supports it, but you can't always be sure that the motherboard supports it correctly. You have to rely on user reports and what the motherboard maker promises, instead of it being a default feature that always works.

            Still a lot more than what Intel offers in that space.

          • TazeTSchnitzel 2439 days ago
            I didn't say ECC doesn't work. It is enabled on Ryzen. But it's not something AMD goes to the effort of validating to make sure it works properly, and it doesn't get official support. It's left in there as a footnote for enthusiasts.
    • jpalomaki 2439 days ago
      What's the price difference between a consumer build (no ECC) and a workstation build (ECC support), counting mobo + CPU + memory?
      • sliken 2439 days ago
        The Xeon E3-1230 varies by generation (v1 to v6 or so), but is typically slightly cheaper, or slightly lower-clocked, than the top-of-the-line i7.

        The motherboards are about another $50, and the DIMMs are another 25% or so more than non-ECC DIMMs.

    • k_lander 2439 days ago
      Why is ECC sought after on the desktop?
      • bitL 2439 days ago
        Some people run >32GB of RAM with long uptimes, and there the chance of a random bit flip might not be acceptable. Imagine working on some deep learning model, training it for 30 consecutive days, and then hitting a memory error mid-computation.
        • Synaesthesia 2439 days ago
          Not your everyday requirement, also ML is somewhat tolerant to small faults like that.
          • bitL 2439 days ago
            Depends. If a bit is flipped in a dataset you are likely fine; if it's in code, your computation might crash. If you use enterprise-grade software like the ZFS filesystem, which keeps a lot in memory, it's much better to have ECC and accept slightly slower memory access for better protection.
            • OoooooooO 2439 days ago
              ZFS without ECC is pretty useless ...
              • michaelmrose 2439 days ago
                ZFS is no more vulnerable to corruption when running without ECC than any other file system.

                The developers of zfs suggest ECC because ECC is a worthwhile thing for those who care about their data.

                You should stop spreading misinformation.

      • Retr0spectrum 2439 days ago
        Rowhammer protection, for one thing.
        • onli 2439 days ago
          ECC does not protect against Rowhammer attacks.
          • Retr0spectrum 2439 days ago
            It seems you are correct, but surely it must make a practical attack much harder?
            • onli 2439 days ago
              I'm not so sure, but it's not an area I know much about. Thinking practically, though: if you are trying to change the memory contents of an area, that means you already have software running on the target machine. Does it matter much then that ECC makes you need more time?
  • throwaway613834 2439 days ago
    I remember seeing claims of 15-30% improved single-threaded performance. Does anyone know how seriously I should take those? They sound way too good to be true...
    • Synaesthesia 2439 days ago
      They have pretty freaking high turbo frequencies, up to 4.0-4.2 GHz. I don't think they had 15W processors going quite so high before.
    • slyn 2439 days ago
      Typically the claims of heavily improved single-threaded performance are "up to x% faster", and the only time you see those peak improvements is in uncommon benchmarks.

      Until full reviews come out it's hard to say how much of an improvement across the board we'll see, but lately a 2-3% IPC improvement on average, plus whatever boost to frequency, seems to be standard per release.
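
      For a rough sense of how those per-release gains compound (the 5% boost-clock figure is an assumption for illustration), IPC and frequency improvements multiply rather than add:

      ```python
      # Compounding a ~2.5% IPC gain with an assumed ~5% clock gain.
      ipc_gain = 0.025
      freq_gain = 0.05
      combined = (1 + ipc_gain) * (1 + freq_gain) - 1
      print(f"~{combined * 100:.1f}% single-thread uplift")
      ```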

    • skummetmaelk 2439 days ago
      Maybe per watt?
      • throwaway613834 2439 days ago
        That's not the impression I got, but I'm not well-versed in the marketing terminology. Is that the impression you get from here? https://arstechnica.com/gadgets/2017/05/intel-claims-30-perf...
        • onli 2439 days ago
          I might be missing something in that article, but a 15-30% performance increase when pitting a dual core against a quad core is pretty bad. I don't see any mention of single-thread performance; it talks about overall benchmark performance.
          • throwaway613834 2439 days ago
            I'm pretty sure they mean single-threaded, but I've seen different numbers floating around. Here I see between 11-29% depending on the model: https://videocardz.com/72112/intel-claims-i7-8700k-to-be-11-...
            • onli 2439 days ago
              Okay. Well, you should wait for benchmarks. If, as mentioned in the AnandTech article, the clock rate is decreased (and that would be very normal when adding more cores), then a single-thread performance increase is very unlikely. In the last launch Intel did not get close to those numbers, and that was without a core-count increase.

              Also, there seems to be some confusion over whether these processors are a Kaby Lake refresh or the new Coffee Lake architecture. The videocardz article calls them Coffee Lake (as do some other news articles), but the AnandTech article identifies them as a Kaby Lake Refresh. A new architecture would make a single-thread performance increase more likely.

              • jsnell 2439 days ago
                The table in the article shows a ~5% increase in boost clocks for the high end models. Those are what matters for single-core performance, not the base clocks.
                • onli 2439 days ago
                  I think that would be correct for the desktop, but in laptops the turbo clock normally(?) cannot be sustained long enough to mean much.
                  • lorenzhs 2439 days ago
                    It does in well-designed machines, although usually not in the ultraslim ones. The ThinkPad T470 can sustain full turbo indefinitely according to notebookcheck. Lenovo's premium line (X1 Carbon/Yoga) cannot, though, as they're too thin and light for a sufficiently capable cooling system, and will throttle after a while.
              • throwaway613834 2439 days ago
                Makes sense, yeah. Skeptical here as well.
        • rsynnott 2439 days ago
          That claim is for Coffee Lake. Intel have recently taken the opportunity to make their line even more confusing; _these_ 8th generation CPUs are "Kaby Lake Refresh". Coffee Lake will be along later.
  • ksec 2439 days ago
    I wonder how many programmers here using a MacBook Pro need Iris graphics. Compared to this newest UHD 620 (which is really just the HD 620 with HDCP 2.2 support), the Skylake Iris graphics are roughly 50% to 60% faster. But with Kaby Lake Refresh you get a quad core instead of a dual core.

    I wonder how many would prefer to have a Quad Core Macbook Pro 13" instead.

    * These 15W parts can be TDP-configured up to 25W, which fits the MacBook Pro's usage.

    • jjawssd 2439 days ago
      Integrated graphics are great for power savings, but a 2015 MacBook Pro cannot drive a 4K display at more than 24 frames per second.
      • CharlesW 2439 days ago
        That's an HDMI limitation. With DisplayPort, my early 2015 MBP drives my Dell P2715Q at 60Hz.
        • photojosh 2439 days ago
          My 2013 15" MBP drives my 4K display at 60Hz...
          • jjawssd 2438 days ago
            I'm assuming this is with DisplayPort?
            • photojosh 2434 days ago
              Yes.

              My comment was just an extra anecdote to the grandparent comment... if you want 4K at 60 fps, why aren't you plugging in via DP instead of HDMI?

      • laythea 2439 days ago
        Plug the monitor into the right port! (the mini display port/thunderbolt one)
  • happycube 2439 days ago
    Just not in time for back to school. Good for Intel's margins, bad for all the students stuck with dual-core i5s/i7s.
  • chx 2439 days ago
    The turbo/base ratio is getting interesting. The previous generation saw a 1.6x maximum turbo ratio, but this generation sees 2.2x: a clear testimony to how the four cores, alas, are partly for show. Obviously there will be a little improvement, but I wouldn't expect earth-shattering results.
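
    The ratios above, with illustrative clock speeds (a sketch; exact SKUs vary): take a previous-generation 15 W part at roughly 2.5 GHz base / 4.0 GHz turbo versus an 8th-gen quad part at roughly 1.8 GHz base / 4.0 GHz turbo.

    ```python
    # Illustrative turbo/base ratios; the clocks are assumptions,
    # not exact SKU specs.
    prev_ratio = round(4.0 / 2.5, 1)   # previous generation
    new_ratio = round(4.0 / 1.8, 1)    # new generation
    print(prev_ratio, new_ratio)       # 1.6 2.2
    ```
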
    • bhouston 2439 days ago
      You get four cores at less than half speed. It sort of makes you think: could you just get two cores at 4/5ths speed and have it come out about the same?

      Very strange scaling on this chip.
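
      As naive throughput arithmetic, with made-up relative speeds (45% and 80% of some reference clock), the two configurations do land close together, but only if the workload scales to four cores:

      ```python
      # Aggregate throughput under perfect scaling; the fractions are
      # illustrative, not measured.
      quad = 4 * 0.45   # four cores at 45% speed
      dual = 2 * 0.80   # two cores at 80% speed
      print(quad, dual)
      ```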

  • nik736 2439 days ago
    Isn't it better to have stronger single-thread performance for developing in single-threaded languages? Looks like a step backwards, then? Double the core count and more L3 cache sound good, even though they crippled the base clock speed.
    • Filligree 2439 days ago
      They lowered the base clock speed, yes. That's the minimum clock you can count on, assuming a correctly designed laptop, even if all four cores are going flat out.

      In practice, the clock is set to limit power usage and thermal load. A better-cooled system will automatically run faster (not really applicable to laptops), and if you're only using a single thread then you'll see the same clock rate you did before, or a bit above.

      • lorenzhs 2439 days ago
        Cooling limitations are extremely applicable to laptops! You can easily have two different machines with identical CPUs and 10%+ performance difference because one has a proper cooling system while the other doesn't. Check the notebookcheck rankings if you want to see some specific numbers.
        • Filligree 2439 days ago
          Sorry, I meant that in the sense that no laptop is "properly cooled". There definitely can still be variations. :P
          • lorenzhs 2439 days ago
            That's not actually true! From https://www.notebookcheck.net/Lenovo-ThinkPad-T470-Core-i5-F...:

            "Our stress test with the tools Prime95 and FurMark (at least one hour) on mains is not a big challenge for the ThinkPad T470. Thanks to the increased TDP limit, both components can maintain their maximum respective clocks over the course of the review. [...] The two CPU cores maintain the full Turbo Boost at 3.1 GHz and the graphics card 998 MHz."

          • azurelogic 2439 days ago
            Also, this will only further increase the value of maintainable machines. A machine with good and accessible/serviceable cooling means that redoing the thermal paste after 3-4 years will be both feasible and helpful.
    • makapuf 2439 days ago
      At this point I think there shouldn't be single-threaded languages (I'm not sure which ones you're thinking about, since it's mostly about libraries and OS primitives). Even Python is multithreaded (even if the GIL makes it better to just use multiple processes). I'd say that if you're after the ~10% improvements these kinds of CPU upgrades can offer on a single thread, you'd be better off changing languages if you're stuck with a single thread. If your problem is difficult to parallelize, well, that's another story.
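
      Since Python came up: a minimal sketch of the threads-vs-processes point. For CPU-bound pure-Python work, threads serialize on the GIL, while a process pool actually uses multiple cores; timings depend entirely on the machine, so this only shows the idiom.

      ```python
      from concurrent.futures import ProcessPoolExecutor

      def busy(n):
          # Pure-Python CPU-bound loop; on threads this would hold the GIL.
          total = 0
          for i in range(n):
              total += i * i
          return total

      if __name__ == "__main__":
          # Four worker processes: real parallelism despite the GIL.
          with ProcessPoolExecutor(max_workers=4) as pool:
              results = list(pool.map(busy, [200_000] * 4))
          print(results[0])
      ```
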
      • TorKlingberg 2439 days ago
        Even if the language supports threads, that doesn't mean your application is magically parallel. No language will give you free parallelism. Besides, most software you run was written by somebody else.
        • makapuf 2439 days ago
          Sure, that was my point about the problem being parallelizable or not. Of course the program must make use of it and be multithreaded and CPU-bound, or not; that was not the point. The OP talked about "developing in single-threaded languages", which is 1) about new development and 2) about the language being multithreaded or not. I believe we both say it shouldn't be a language problem in 2017.
    • wmf 2439 days ago
      According to the article, single-core turbo has increased from 4.0 GHz to 4.2 GHz.
  • xcasex 2439 days ago
    Aaand with Linux, Bay Trail is still an issue, even though 4+ Intel engineers are working on cracking that nut.
  • Outrageous 2439 days ago
    Do you guys think they will wait until January 2018 to release the T480 series? I was torn between the T470p and T470 because of the quad core, and finally decided on the T470 for the battery life and size (no T470s/X1 Carbon because I already have a 1TB 2.5" SSD).
  • ManyEthers 2439 days ago
    Just my two cents, but I would still expect them to announce Coffee Lake today. If you look at the marketing material, they talk about VR and have a picture of a desktop monitor when referring to editing. I don't see how notebooks can provide an "immersive VR experience".