> Intel’s big aim with the new processors is, as always, to tackle the growing market of 3-5+ year old devices still being used today, quoting better performance, a better user experience, longer battery life, and fundamentally new experiences when using newer hardware. Two years ago Intel quoted 300 million units fit into this 3-5+ year window; now that number is 450 million.
Yep, Intel's problem is that most folks don't need a new CPU, especially for a computer that's always plugged in.
I'm refurbishing a 6-year-old system with a Pentium E5800 for a friend, and initially it felt dog slow. However, once I swapped the mechanical hard drive for a solid-state drive, it instantly felt like a zippy little machine. It already had enough processing power for everything they wanted (browsing, office, YouTube, etc.).
The big grief I have with "general computing" platforms is their insistence on sticking with the traditional form factor.
ATX, ITX, PCI-*, DDR... outdated, overboard, clunky designs for most people.
Take a Mac Mini-like design, and make modules that can stack or otherwise attach to expand capabilities. IMO, this is what Apple should do and be done with the whole "But Mac Pro users ...!"
A Project Ara-like desktop, in both its size and modularity, would probably offer more than enough computing power for most users (browsing, office, YouTube).
That's not what I was trying to describe by offering Project Ara as an example. Intel NUCs are an ITX board in a box with the usual ports, not an ecosystem of quickly interchangeable modules offering rapid changes in utility.
The problem is that's a lot of jargon. And yes, I know what a form factor is and what DDR is, and if you look again, I clearly wrote PCI-*, as in PCI-Express, which is NOT deprecated.
It's a lot of jargon to know when you want to build a device that hangs out and films birds, or some other overly specific example to make the point.
If general computing is stuck there, it seems like a bit of cultural functional fixedness about what general computing should be, which is all over our nostalgia-tripping culture.
Still LPDDR3, with its 16GB RAM limitation. What an embarrassment: all phone SoCs today use LPDDR4(X) and technically support more RAM than the desktop Intel CPUs.
Does anyone know if Ryzen will support LPDDR4 in its mobile chips? I tried Googling around for it but couldn't get an obvious yes or no.
Seems sort of unlikely, but if they do support it a lot sooner than Intel, that would be a big win for AMD.
Even more unlikely would be a Ryzen powered MacBook Pro with 32GB of LPDDR4 RAM...but I'd be willing to pay a lot of money for that. I know Apple tends to prioritize single core performance on their own chips, but almost all of the desktop software their pro users are using would run better on Ryzen than on Intel's current offerings.
Plus, with Intel's Iris Pro gone, Ryzen might allow them to have better integrated graphics, and bring back a 15" model with no dGPU.
Funny thing is, back in the days of the PC XT (8086/8088), AT 286 and 386, all (or most?) computers had parity checking (9-bit RAM). I'd much rather have a halt on error than silent corruption.
It's insane that we're still using systems without ECC RAM. As memory shrinks bit errors get progressively more common. The more memory you have the better chance of corruption as well of course.
Literally everything else that holds "data" has been using some form of error correction forever. Hard drives, SSDs, USB flash drives, file systems, databases, even network packets. Even HDMI uses error correction, and how important is momentary pixel corruption on a screen???
It's totally insane that we're not using ECC with such large amounts of RAM built on tiny processes. It's definitely just a cartel artificially maintaining a situation that's bad for everyone not selling server chips.
Integrity of your data. Without ECC data in memory can become corrupted at any point.
It's usually just a single bit but say you are working with images, do you care if a single pixel changes its RGB value because of a memory error? A character in the metadata?
I do.
Unfortunately there is a hardware cartel which deliberately limits ECC to enterprise / server products so that they can inflate the price / their profit margin.
ECC RAM is more expensive to manufacture than non-ECC RAM but the price difference would be fairly minimal if ECC RAM were used everywhere - as it should.
Also any kind of file that's easy to corrupt into a non-decodable state, such as a binary save/config file, or any sort of file conversion or transfer. Data you move from one location to another always passes through memory, and when converting that data to another format, it may not be possible to validate the destination format against the original data.
Hypothetically, even with hash checks when transferring files: if the chunk of data read from the source file changes in memory, that corrupted data is both fed into the hash and written to the destination file, meaning the hash would still match the destination file anyway. You could also get a wrong hash and think a good transfer had failed.
Really, when memory can just 'change', anything can happen and there's no really good way to get around it. ECC memory should just be everywhere.
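A minimal sketch of that failure mode (the copy_with_hash helper is purely illustrative, not from any particular tool): if a bit flips while the chunk sits in RAM, the corrupted bytes feed both the digest and the destination file, so the "verification" still passes.

    import hashlib

    def copy_with_hash(src_path, dst_path, chunk_size=1 << 20):
        # Copy src to dst, hashing each chunk as it is written.
        # A bit flip in `chunk` while it sits in RAM goes into the
        # hash AND into the destination file, so the final digest
        # still "matches" the corrupted copy.
        h = hashlib.sha256()
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            while True:
                chunk = src.read(chunk_size)
                if not chunk:
                    break
                h.update(chunk)   # the hash sees the in-memory bytes...
                dst.write(chunk)  # ...and so does the destination file
        return h.hexdigest()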
Well I just had data corruption on my Intel NUC due to a stick of RAM failing. Had I had ECC the fault would most likely have been spotted right away. Instead it dragged on for a few months. Glad I kept multi-month backup sets.
Firefox crashed every now and then, but that's not something which raised any flags with me. Other than that the box seemed just fine. Then one day I couldn't boot anymore as the filesystem had been severely corrupted.
Ran memtest86 and sure enough, a span of addresses invariably generated errors in all tests.
This shouldn't be downvoted, it's a fair question. Just a few years ago ECC was widely considered an unnecessary belt-and-suspenders thing that made enterprise hardware expensive. I guess the general perception changed with the Rowhammer attack.
I'm kind of disappointed in this. While they are upping the core count, the overall clock speed is being decreased across the board. This means that single-threaded processes will theoretically perform slower (I know it still turbos up).
Honestly, I just upgraded to Ryzen from a 3770K. My 3770K ran all cores at 4.2GHz (overclocked, obviously), and the only reason I upgraded was because I wanted NVMe and DDR4. That machine was 4 years old and I had no CPU-bound performance issues. I really think Intel needs to start innovating more rather than being complacent, or AMD is actually going to steal the show.
Is the base clock relevant in any interesting situation? As far as I know it's just telling you what to expect if you disable Turbo Boost in the BIOS.
The frequency at idle should be lower through SpeedStep, and the frequency during load should be higher through Turbo Boost.
If there are thermal limits preventing the maximum turbo frequency from being reached at all times, I still wouldn't expect the average frequency to be related to the base clock. It should be more or less bound by how insufficient the cooling system is. I think even the instantaneous clocks can fall somewhere between the various listed frequencies under throttling conditions.
Also note that the max turbo frequency is different depending on how many cores are loaded.
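If you want to watch this on your own machine, here's a quick sketch (it assumes a Linux box exposing the cpufreq sysfs interface; what you actually see will depend on the governor and on cooling):

    import glob
    import time

    # Per-core current frequencies as reported by the kernel's cpufreq driver.
    paths = sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))

    while True:  # Ctrl-C to stop
        mhz = [int(open(p).read()) / 1000 for p in paths]  # kHz -> MHz
        print("  ".join(f"{m:5.0f}" for m in mhz), "MHz")
        time.sleep(1)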
I wonder how companies like Apple, that have quite stagnant and stable release cycles (compared to other brands), will handle that situation. Does it mean their customers will have to sit on 'old' CPUs again for another generation or two? The latest MacBooks were released ~80 days ago and their release cycle is ~300 days on average. Obviously I wonder because I was about to order a new Apple machine for myself, and now I'm not sure if I shouldn't (the same problem over and over again) just wait a bit longer.
Considering how recently the Apple Kaby Lake bump was, and that the 8th generation Coffee Lake parts for the "real" Touch Bar Pros won't be released for several months, I'd be shocked if this wasn't one of the better times to buy.
I thought of MacBook as the 'MacBook family' and not as 'the MacBook 12"'. To be more precise, when I said I wanted to buy a new machine, I was thinking about the MBPR 15".
After seeing those internal emails from Microsoft regarding the failures they had on the Surface products caused by problems in the then-recently launched Skylake chips, my guess is Apple is just fine on their delayed release schedule.
Er, if you read those closely you'll realize that it was Microsoft's fault and they were trying to blame Skylake. No other manufacturer had problems like Microsoft did with the same chips.
Maybe I'll finally be able to buy a Mac Mini that is faster than the 2012 model I use for grinding up data.
Edit: maybe not. Looks like the Mini is about a 45 watt CPU and these are the 15 watt line. Oh well, it's been 1040 days since the last update (downdate? Maybe that is the term for a product update that releases a slower computer). I can wait for the 45 watt CPUs. Probably, I am past my half-life.
(The current 2 core and 4 core processors that make sense in a Mini have different footprints, so Apple just did the two core to keep costs down. There hasn't been a quad since 2012.)
I'm guessing January next year with February availability, going by previous releases. (This is purely my own speculation, considering CES/previous releases.)
Thanks! :) Do you think this would be the case for the first quad-core ultrabooks/notebooks as well, or just these particular Lenovo products would take that long? I haven't been keeping up with CPU releases so I don't recall how long it takes for them to reach the portable market...
What does the term "lake" represent in this family of CPUs?
Apparently asking this makes me an idiot to some... while I'll admit to simple laziness...
I assume that it would tie a technology together as a code name for this family of procs, but in the case of "lake" they use it in multiple differing technologies...
So was curious if it meant something else non-obvious to me.
Yeah, Intel isn't following the tick-tock pattern anymore; they're doing more revisions on the same node, and some of those revisions are quite slight. AnandTech had an article on it a while back.
Sure, but this time AMD forced Intel to show real progress with quad-core (eight-thread) ULV CPUs (yes, "to show", as I guess Intel had them ready-made but was not going to present them for the time being). Before this year Intel didn't have to show real progress, since it had a near-monopoly on the CPU market.
ECC works with AMD if the motherboard supports it, but you can't always be sure that the motherboard supports it correctly. You have to rely on user reports or on what the motherboard maker promises, instead of it being a default feature that always works.
Still a lot more than what Intel offers in that space.
I didn't say ECC doesn't work. It is enabled on Ryzen. But it's not something AMD goes to the effort of validating to make sure it works properly, and it doesn't get official support. It's left in there as a footnote for enthusiasts.
Some people run >32GB of RAM with long uptimes, and there the chance of a random bit flip might not be acceptable. Imagine working on some deep learning model, training it for 30 consecutive days, and then hitting a memory error during computation.
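For a rough sense of scale, a back-of-envelope sketch (the FIT rate below is a placeholder assumption; published per-DIMM figures span orders of magnitude, and real errors are often bursty rather than uniform):

    # Expected bit errors over a 30-day run on 32 GiB of non-ECC RAM,
    # assuming a (placeholder) uniform soft-error rate.
    fit_per_mbit = 100            # failures per 10^9 device-hours per Mbit (assumed)
    ram_gib = 32
    hours = 30 * 24

    mbits = ram_gib * 1024 * 8    # GiB -> Mbit
    expected_errors = fit_per_mbit * mbits * hours / 1e9
    print(f"expected bit errors over the run: {expected_errors:.2f}")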
Depends. If a bit is flipped in a dataset you are likely fine; if it's in code, your computation might crash. If you use enterprise-grade software like the ZFS filesystem, which keeps a lot in memory, it's much better to have ECC and accept slightly slower memory access for a bit better protection.
I'm not so sure, but it's not an area I know much about. Thinking practically, though: if you're trying to change the memory contents of an area, that means you already have software running on the target machine. Does it matter much then whether you need more time because of ECC?
I remember seeing claims of 15-30% improved single-threaded performance. Does anyone know how seriously I should take these? They sound way too good to be true...
Typically the claims of heavily improved single threaded performance are "up to x% faster", and the only time you see those peak performance improvements are during uncommon benchmarks.
Until full reviews come out it's hard to say how much of an improvement across the board we'll see, but recently a 2-3% IPC improvement on average, plus whatever bump to frequency, seems to be standard per release.
I might be missing something in that article, but that 15-30% performance increase comes from pitting a dual core against a quad core, which is a pretty poor comparison. I don't see any mention of single-thread performance; it talks about overall benchmark performance.
Okay. Well, you should wait for benchmarks. If, as the AnandTech article mentions, the clock rate gets decreased (which would be very normal when adding more cores), then a single-thread performance increase is very unlikely. In the last launch Intel did not get close to those numbers, and that was without a core-count increase.
Also, there seems to be some confusion about whether these processors are a Kaby Lake Refresh or the new Coffee Lake architecture. The VideoCardz article mentions Coffee Lake (and some other news articles call these processors that as well), but the AnandTech article describes them as a Kaby Lake Refresh. A new architecture would make a single-thread performance increase more likely.
The table in the article shows a ~5% increase in boost clocks for the high end models. Those are what matters for single-core performance, not the base clocks.
I think that would be correct for desktops, but in laptops the turbo clock normally(?) cannot be sustained long enough to mean much.
It does in well-designed machines, although usually not in the ultraslim ones. The ThinkPad T470 can sustain full turbo indefinitely according to notebookcheck. Lenovo's premium line (X1 Carbon/Yoga) cannot, though, as they're too thin and light for a sufficiently capable cooling system, and will throttle after a while.
That claim is for Coffee Lake. Intel have recently taken the opportunity to make their line even more confusing; _these_ 8th generation CPUs are "Kaby Lake Refresh". Coffee Lake will be along later.
I wonder how many programmers here using a MacBook Pro need Iris graphics? Compared to this newest UHD 620 (which really is just the HD 620 with HDCP 2.2 support), the Skylake Iris graphics are roughly 50% to 60% faster. But with Kaby Lake Refresh you get a quad core instead of a dual core.
I wonder how many would prefer to have a quad-core MacBook Pro 13" instead.
* These 15W parts can be configured up to a 25W TDP, which fits how the MacBook Pro uses them.
The turbo/base ratio is getting interesting. The previous generation saw a 1.6x max turbo ratio but this generation now sees 2.2x -- a clear testimony to how the four cores, alas, are mostly for show. Obviously there will be a little improvement, but I wouldn't expect earth-shattering results.
Isn't it better to have more powerful single-thread performance for developing in single-threaded languages? Looks like a step backwards then? Double the core count and more L3 cache sounds good, even though they crippled the base clock speed.
They lowered the base clock speed, yes. That's the minimum clock you can count on, assuming a correctly designed laptop, even if all four cores are going flat out.
In practice, the clock is set to limit power usage and thermal load. A better-cooled system will automatically run faster (not really applicable to laptops), and if you're only using a single thread then you'll see the same clock rate you did before, or a bit above.
Cooling limitations are extremely applicable to laptops! You can easily have two different machines with identical CPUs and 10%+ performance difference because one has a proper cooling system while the other doesn't. Check the notebookcheck rankings if you want to see some specific numbers.
"Our stress test with the tools Prime95 and FurMark (at least one hour) on mains is not a big challenge for the ThinkPad T470. Thanks to the increased TDP limit, both components can maintain their maximum respective clocks over the course of the review. [...] The two CPU cores maintain the full Turbo Boost at 3.1 GHz and the graphics card 998 MHz."
Also, this will only further increase the value of maintainable machines. A machine with good and accessible/serviceable cooling means that redoing the thermal paste after 3-4 years will be both feasible and helpful.
At this point I don't think there should be single-threaded languages (I'm not sure which ones you're thinking about, since it's mostly about libraries and OS primitives). Even Python is multithreaded (even if the GIL makes it better to just use multiprocessing). I'd say if you're after 10% improvements -- the level this kind of CPU upgrade can offer on a single thread -- you'd be better off changing languages if you're stuck on a single thread. If your problem is difficult to parallelize, well, that's another story.
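To make the GIL point concrete, a minimal sketch (the names here are purely illustrative): the same pure-Python, CPU-bound work run through a thread pool versus a process pool. Under CPython the thread pool won't scale across cores, while the process pool can.

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def busy(n):
        # pure-Python CPU-bound work; holds the GIL the whole time
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed(executor_cls, jobs=8, n=2_000_000):
        # run the same batch of jobs and report wall-clock time
        start = time.perf_counter()
        with executor_cls(max_workers=4) as ex:
            list(ex.map(busy, [n] * jobs))
        return time.perf_counter() - start

    if __name__ == "__main__":
        print("threads:  ", timed(ThreadPoolExecutor))
        print("processes:", timed(ProcessPoolExecutor))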
Even if the language supports threads, that doesn't mean your application is magically parallel. No language will give you free parallelism. Besides, most software you run was written by somebody else.
Sure, that was my point about the problem being parallelizable or not. Of course the program has to actually make use of it, and be multithreaded and CPU-bound, or not; that was not the point. The OP talked about "developing in single-threaded languages", which is 1) about new development and 2) about whether the language is multithreaded or not. I believe we both say it shouldn't be a question of language in 2017.
Do you guys think they will wait until January 2018 to release the T480 series? I was torn between the T470p and T470 because of the quad core, and finally decided on the T470 for the battery life and size (no T470s/X1 Carbon because I already have a 1TB 2.5" SSD).
Just my two cents, but I would still expect them to announce Coffee Lake today. If you look at the marketing material, they talk about VR and have a picture of a desktop monitor when referring to editing. I don't see how notebooks can provide you an "immersive VR experience".
Today's Javascript-packed web pages and HD YouTube content are pushing people to upgrade from their Core 2 Duo and early i5 machines.
https://www.intel.de/content/www/de/de/products/boards-kits/...
No idea what any of that has to do with ATX, PCI (long deprecated technology), ITX (form factor), DDR (that's like USB).
https://en.wikipedia.org/wiki/Functional_fixedness
http://www8.hp.com/us/en/campaigns/elite-slice/overview.html
Slice >£1000 inc tax: http://store.hp.com/UKStore/Merch/Offer.aspx?p=b-pc-hp-elite...
Comparable spec small PC £520: http://www.misco.co.uk/product/2688486/HP-280-G2-SFF-Desktop...
Being designed for industrial/embedded environments, you typically will not get the latest and greatest chips/chipsets.
[1]https://en.m.wikipedia.org/wiki/PC/104
What's changed is higher memory densities, making it even more important.
Anyway, not sure how exactly ECC relates to security. Is there any specific attack vector where it helps?
Your comparison is inadequate.
Super happy for the competition though!
I for one am excited.
[1] http://www.anandtech.com/show/11738/intel-launches-8th-gener...
4 cores obviously
The motherboards are about another $50, and the DIMMs are another 25% or so more than non-ECC DIMMs.
The developers of ZFS suggest ECC because ECC is a worthwhile thing for those who care about their data.
You should stop spreading misinformation.
My comment was just an extra anecdote to the grandparent comment... if you want 4K at 60 fps, why aren't you plugging in via DP instead of HDMI?
Very strange scaling this chip has.