Here's a fun fact (and I suspect it shows what a loser I am :-)): Pat Gelsinger and I were colleagues at Intel in the early '80s. We both worked in what was called "MIPO" (Microprocessor Operation), but I was in Systems Validation and he was on the customer-facing side with field engineering. My career went off through a variety of engineering roles; his went into management and up. When he joined EMC, I told him I had always thought he would have been Intel's CEO. He told me, "Hey, maybe I still will be :-)" That was like 10 years ago, and here he is.
There are a lot of things you can say about Silicon Valley, but one of the more interesting aspects of it for me has been how "small" it is in terms of people having an outsized impact. I have never been one of those people of course, just a part of the entourage. But it has been interesting to watch and learn from folks who are good (and bad) role models.
Pat went to my high school and computer science program. It's in a small rural Pennsylvania farming town, pop. 2,000. Somehow we got an amazing math teacher turned CompSci teacher, and he turned out world-class students; we always ranked high enough each year to compete internationally in ACSL.
A few years back when the teacher retired, we all gathered together to thank him, including Pat! World-class guy, and he makes us proud! It's so great to share with students who come from nothing a story of how you can leave our hometown and eventually become the CEO of Intel.
From an economics perspective, it's kinda insane that teachers are paid so poorly, since they are such force multipliers. A single brilliant teacher can positively influence the future educational trajectory (and future earnings) of thousands of kids. Investing in making that happen more often seems like it would pay great dividends.
The problem, in the US anyway, is more that teachers can't be paid differentially: the bad ones can't be fired, and the new ones can't be paid more and held to higher standards. The cost per student is already quite high.
Nations like Finland and Poland have figured out how to make that change and seen their students flourish; hopefully we will too.
I hear my wife and her friends constantly talk about XYZ job that makes more than they do and it's very demoralizing to them. I can definitely see why there aren't many passionate teachers. My wife has been teaching 6 years and is just now at the $42,000 mark. The state health insurance is decent at least.
I have a few former colleagues who were equals (or close) with me early in my career, and are now senior executives at Fortune 500 companies, and it does always make me feel kind of like a loser!
But also I think about how much crap they put up with in those roles, and how relatively low-stress my life is, and then I feel better about it. They're cut out for that kind of job, and I most certainly am not.
I see a remarkable number of people who have good personal experiences with Pat but I've got this concern.
Intel's technical failure with 10nm has gone hand-in-hand with financial success with 14nm. That is, without 10nm chips on the market in a meaningful way they've been able to raise prices for 14nm parts -- Intel put up better financial numbers than ever in a time when it has not been investing in future success.
VMware was a big thing in 1998 but it was obsolete by the time Pat got involved -- a hypervisor is naturally part of the kernel and there is no way cloud providers are going to spend their margin on VMware. Yes, many people in business are terrified of open source software and want a proprietary product so they have somebody to sue (VMware) or they need somebody to hold their hand (Pivotal). Either way, VMware and Pivotal are units that can be merged and spun off whenever a company based in Texas (Dell) or Massachusetts (EMC) wants to look like it has a presence in San Francisco -- you see the VMware logo on CNBC every morning and somebody thinks the king is on the throne and a pound is worth a pound, but that doesn't mean anything in the trenches.
Like Intel in the past 10 years, VMware is entirely based on a harvesting business model. In the short term Intel made profits by pandering to cloud providers; but in the long term cloud providers invested their profits in better chips. (What if Southwest Airlines had developed a 737 replacement designed from the ground up for a low-cost airline?)
Pat might be able to keep the game of soaking enterprise customers going for longer, but someday the enterprise customers will be running ARM and the clients will be running ARM and the coders will be thinking "did they add all of those AMX registers just to put dead space on the die to make it easier to cool?" and falling in love again with AVR8.
Speaking as an ex-Pivot (but only for myself), a couple of thoughts:
- VMware grew almost 2.5x under Pat. Obviously their core franchise is about harvesting, but new adjacent products like NSX and VSAN helped a lot with growth. They just took a long time to get traction given the customer base, who don’t like change. VMware Cloud on AWS has been surprisingly strong even to skeptics like myself.
On the other hand, companies would get rid of VMware if they could, and they tried and failed with OpenStack. The hypervisor is commodity, but the overall compute/network/storage private cloud system isn’t trivial. Turns out “holding hands” whether by software or services is pretty valuable?
Public cloud of course is a substitute, though private clouds and data centres have survived and thrived as well (for now).
- Pivotal had a boutique software and services model that would be difficult to scale as a public company, given the current shift in enterprise fashion away from productivity-at-any-cost (PaaS and serverless) and towards perceived low-cost building blocks (Kubernetes and its ecosystem).
But it would be a gross oversimplification to suggest this was merely a vanity project for EMC and Dell. It took in $500m+ annually on open source software (and another $300m in services), which is no small feat, though still minuscule given what surrounds it. But anything new has to start somewhere. Not enough to impact the mothership's balance sheets, but there was something special there that could be nurtured. Whether it can be, or whether the differences matter enough, is anyone's guess.
Pat could take Intel in a direction we don’t anticipate. VMware was written off for dead when Pat came on but he managed to buy it another 10-15 years and at peak a doubling of the stock price. He’s probably learned from that.
This is spot on. And I think the TL;DR of the hypervisor argument goes as follows:
Hypervisor is a commodity; however, management and support of hundreds or thousands of them is not. You can either pay people to support them and fix the software when it breaks, or you can pay <vendor name here>. Given that the former requires expertise and planning, it's often more cost-effective to go with the latter.
Disclaimer: I'm employed by VMware (less than 1 year) and chose to come here based on pivots I feel they are making.
Keeping in context that I've not worked with him in (gulp!) 30 years: at Intel he was one of the folks who put problems into the whole picture. So a customer would say, "this chip doesn't work"; over at Systems Validation we would get (hopefully) enough information to re-create their problem on an Intel-built board, and Pat's job (at the time) was to coordinate between us, design engineering, and marketing to figure out how to tell the customer to proceed.
The "actual" problem could be anything from the customer misinterpreting the datasheet (Marketing/Comms problem), to insufficient testing (Factory/Production problem), to chip function (Design Engineering problem). As a "new college grad", or NCG in the lexicon, I admired how he dug out details from various folks to get to the real problem. He always had ideas for things Systems Validation (SV) could do to maybe trigger the problem that made sense to me. He really embodied the philosophy of fixing the problem not fixing blame on some group.
People that can ask the right question are much more impactful than people who can “only” find the answer. (NOT trivializing the smarts it takes to find the answer. )
My Pat Gelsinger story: I worked in a successor org to MIPO, in those days called MPG. One day during pre-silicon validation on the Pentium II, one of my NCG’s came to me and said: “An old test from the historical test archive is failing in the simulator. I want to find the author and ask him about it. Do you know a Pat Gelsinger?”
Me: “Well, he is an Intel Fellow now, so he might not remember what that test was supposed to do. “
> There are a lot of things you can say about Silicon Valley, but one of the more interesting aspects of it for me has been how "small" it is in terms of people having an outsized impact. I have never been one of those people of course, just a part of the entourage. But it has been interesting to watch and learn from folks who are good (and bad) role models.
I believe this is what you feel when you come to an industry while it is just starting in a given locality, and then it explodes. Senior workers who join in the earliest days tend to drift from company to company, but they rarely leave the place entirely, or change industries.
A sociologist I met described it as the "generation effect": people live in a cohort of other people of about their same age, +/- 10 years. Throughout one's life the composition of the cohort changes as you begin to specialize, starting with college/first job and continuing if you focus on a particular area (for me it has been systems analysis).
This effect can drive people to over-pay to go to "good schools" or work for less than they think they should be paid at the "good places to work" because it lets them join a cohort with members who are statistically more likely to be "successful" (for some arbitrary definition of success). I personally never paid much attention to it but thought about it when talking with this person.
I know from experience that it happens in Silicon Valley that someone will say, "Oh I know someone at some previous company who is really good at that, let me see if they would be willing to change jobs." That is why companies pay people bonuses for "referrals."
I suspect it happens in LA in the entertainment industry as well. If you watch the old American television series "ER" (produced by Steven Spielberg's Amblin Entertainment), it is amazing to see people who appeared on that show first as guest actors and then show up later in other series.
"Notable absent from that list is he fired Pat Gelsinger.
Please just bring him back as CEO." -  2012 on HN, when Paul Otellini Retired.
"The only one who may have a slim chance to completely transform Intel is Pat Gelsinger, if Andy Grove saved Intel last time, it will be his apprentice to save Intel again. Unfortunately given what Intel has done to Pat during his last tenure, I am not sure if he is willing to pick up the job, especially the board's Chairman is Bryant, not sure how well they go together. But we know Pat still loves Intel, and I know a lot of us miss Pat."  - June, 2018
"This is the same as Intel pushing out Pat Gelsinger. The product people get pushed out by sales and marketing. Which are increasingly running the show at Apple."  30 Days ago.
And numerous other references since 2009, and many more around various other forums and Twitter. I am getting quite emotional right now. I can't believe this is really happening. (I am writing this with tears in my eyes!) I guess Andy Bryant retiring makes the decision a little easier. And Pat has always loved Intel. I guess he is pissed those muppets drove it into the ground.
This is 12 years! 12 years to prove a point! Consider 4-5 years of lead time on work started after he left in 2009; that takes you to 2014. Guess what happened after 2014?
Maybe it is too little, too late? Or maybe this will be another Andy Grove "Only the paranoid survive" moment?
The King is back at Intel. Despite being a fan of Dr Lisa Su, I am a little worried about AMD.
Pat's a great guy and first class engineer, but I think it's just too late for him to turn Intel around at this point. The problems have become too entrenched. They needed someone like him at the helm 10 years ago. Apple's M1 shows that the world has passed Intel by.
Similarly some (many?) thought Microsoft was a lost cause due to Ballmer (sales/business development background), but they seem to be doing okay under Nadella (engineering background). That said, only time will tell.
Intel's problem isn't really that they're involved in too many businesses that don't make sense or they aren't good at. Their core businesses are still their core competencies. The problem has been a catastrophic set of execution misfires.
Yeah, absolutely. There's still a lot of excellent engineering being done at Intel that you can't find elsewhere. It's not like Apple where their core product had eroded into irrelevance, they were making tons of undifferentiated products and they literally had nothing of value except the dying embers of fan loyalty to sustain them.
Amongst other things:
- SGX is really far ahead of AMD SEV. The latter is probably easier to sell because it's marketed as "drop in" (encrypts whole VMs), but SEV has been repeatedly hacked or shipped with obvious gaping design holes that they patch later and call features. SGX is a lot more focused, a lot more flexible and frankly a lot more secure.
- Optane NVRAM is completely unique, as far as I know. It's only 10x slower than DRAM which is nothing, but it's persistent! It totally changes the whole IO hierarchy.
- AVX512 / DLBoost are able to hold their own against mid-range GPUs for some AI tasks, which is impressive and useful (see the int8 kernel sketch after this list).
- Their core chips are still very fast. AMD chips are selling more for less, which is a good position for them to be in, and TSMC's process advantage is helping them out for now. But they don't have a truly massive edge in tech like Apple's competitors had when Jobs returned.
- Intel has a long tail of obscure features that AMD doesn't, although it's often hard to know this. SGX and AVX512 are high profile, but there are others that are less well known.
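To make the DL Boost point above concrete, here is a minimal sketch of the kind of int8 kernel it accelerates. This is my own illustration -- the function name, data layout, and the assumption that n is a multiple of 64 are mine; the real ingredient is the VNNI instruction behind _mm512_dpbusd_epi32:

    // Sketch: int8 dot product using AVX-512 VNNI ("DL Boost").
    // Assumes a CPU with AVX512F + AVX512VNNI and n divisible by 64.
    #include <immintrin.h>
    #include <cstddef>
    #include <cstdint>

    int32_t dot_u8s8(const uint8_t* a, const int8_t* b, size_t n) {
        __m512i acc = _mm512_setzero_si512();
        for (size_t i = 0; i < n; i += 64) {
            __m512i va = _mm512_loadu_si512(a + i);  // 64 unsigned 8-bit activations
            __m512i vb = _mm512_loadu_si512(b + i);  // 64 signed 8-bit weights
            // vpdpbusd: multiply u8*s8 pairs and accumulate groups of four
            // into sixteen 32-bit lanes -- the core of int8 inference.
            acc = _mm512_dpbusd_epi32(acc, va, vb);
        }
        return _mm512_reduce_add_epi32(acc);  // horizontal sum of the 16 lanes
    }

It's nothing GPU-like, but across many cores and 512-bit lanes it adds up, which is why it can be competitive for some inference workloads.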
I don't count side channels as an issue because it's also an issue for all their competitors, and frankly I found the near single-minded focus on Intel by the security community to be rather misleading. I even read a side channels paper that admitted they suspected AMD had the same problem but they didn't bother to check simply because they didn't have access to any AMD hardware in the first place, which was unusually honest.
Apple's M1 is very impressive in its space, but for high performance cases like servers it's only quad-core - you can't use the little low power cores for much. Apple have shown no interest in making server parts for a long time, and Apple's engineering is bespoke so it tells us nothing about what other ARM vendors can do. So in that space it's still just AMD vs Intel and Intel is a long way from being on its back yet.
It's not that easy. Adding cores is not just changing "const int CORE_COUNT" somewhere in a VHDL file. Scaling up cores without hitting communication or other bottlenecks is a difficult engineering problem that has taken a lot of effort from Intel and AMD.
> How often do big companies like Apple and Intel succeed in righting a sinking ship though? You can’t keep pointing to the one guy that succeeded.
I feel like the view of Intel as a "sinking ship" is an inside-baseball misread of the situation.
If you buy a PC or a laptop or a server today, you're most likely getting an Intel CPU. It's now at least possible to buy AMD in many market segments from major vendors - and of course many of us do - but to the broader consumer and hosting world Intel still dominates.
You can look ahead and extrapolate and say "they can't compete with TSMC right now, and maybe they're going to start falling behind further." Fair analysis. But they dominate the market to an extent that's hard to overstate.
Contrast this with between-Jobs-stints Apple, which was a tiny shrinking company with a niche product and no clear strategy.
Intel is almost in too-big-to-fail territory. They aren't sinking just because they now have real competition. Yes, they need to do "something" to maintain dominance - but why would you see their situation and conclude they won't? The new CEO here is a sign that they see the trouble ahead, and they're ready to steer around it.
Perpetually. These companies are always in competition, always under pressure, and always advancing their products. Not every tech company is IBM or Research In Motion. Intel and Apple are juggernauts.
If you have $233 billion (a LOT of money) and you want to own a world-class chipmaker, would you rather start from scratch or just buy Intel for cash?
As you say, Apple can be pretty unequivocally considered a success story. To the extent that it was ever troubled at all in the first place, Microsoft is also widely considered to have turned around its fortunes. Dell seems to be another turnaround story.
Then there are companies that have at least managed to stabilize themselves reasonably, e.g. IBM and HP (I wouldn't consider either of them a huge turnaround success, but I would not expect them to head for bankruptcy anytime soon either).
And even companies that did go under, e.g. Kodak, took an enormous amount of time and effort to do so, and not without launching a cryptocurrency first…
Competition is always an amazing driver for getting large companies to start doing the right thing. As long as they are not completely shut out, they have a chance of succeeding.
Intel still has some amazing technology compared to AMD; they're screwed on process right now, but I think they have a chance to succeed. But given the timeline on new CPUs and fab processes, it'll be 4 years before we see the fruits of anything he does.
IBM still exists. Microsoft still exists. Apple still exists. Notably, AMD also still exists and flourishes again. All of these companies were at one time or another considered dead in the water. Did every big company survive? No. But I don't think there's enough data to make a prediction which fate Intel will see.
Yea, I'm not sure what their plan is other than to keep trying to sell to institutions until everyone makes the switch to someone else.
If they actually want to survive long term, there are two paths as I see it:
A) Be legitimately better than AMD, this could include opening up the management engine, much higher performance chips at lower price points, or some sort of space magic utilizing their Altera acquisition.
B) Embrace RISC-V and push it to laptops and desktops HARD, while not pulling the Microsoft Embrace Extend Extinguish™ play. If they go this route then their stock becomes an exceptionally strong buy IMO.
TSMC has a large design operation. It simply is part of a subsidiary rather than the parent company, that's all.
There are a lot of shady links between TSMC and GUC. I was once pressured to "collaborate" with GUC as an explicit condition of getting access to the then-bleeding-edge 16nm PDK. I turned it down. A competitor of mine (with whose chief engineer I am friends) had the same screws put to them.
Actually I'd be curious to see how a Korean Chaebol works in this regard. At various points, Apple was competing w/ Samsung (mobile phones) to get fab space on Samsung (chip fab), and I think at various times different parts of Samsung were suing Samsung...
That makes no sense. Intel's designs are fine, industry leading actually. Intel's biggest problem by far is the repeated inability to launch new process nodes. It makes no sense for Intel to give up on what it's good at to focus on the part where it's struggling.
I would love a RISC-V laptop, but only if the vector and matrix extensions can be used for training and inferring neural networks. Having separate vectorized processing for the CPU (SSE) and a neural engine doesn't make sense to me.
I want to be able to train NNs with a laptop that can normally last 20 hours.
Imagine what Apple does to x64 to make it work on ARM. Intel, having Altera, could do x86 translation to FPGA gates... It wouldn't take whole programs, but it could reconfigure itself to run the most commonly executed routines on the FPGA.
Don't forget Keller's contributions there. It was a much smaller operation so it was easier for a couple of people to change it. Intel, on the other hand, is a supertanker that's going to be very difficult to turn around in time.
I recently purchased a MacBook Pro with an M1 chip and I'm taken aback at the value/dollar, compute/power, compute/heat, compute/noise. I love the MBP-M1.
I realize that WinTel machines aren't going anywhere because of entrenched business use -- but between NVIDIA GPUs for heavy workloads and M1s for day-to-day activities -- what is the feeling inside Intel right now? Is this like a Microsoft-Netscape moment in the 90s?
No? It might. Keller's new startup doesn't sound particularly impactful or likely to satisfy a need for big career kills (assuming he has any such need).
We don't really know why Keller left. All we can say is that from the outside it looks like he might have just given up in disgust. Having an engineer at the helm again can make a big difference to an engineering organisation.
I'm old enough to remember writing IBM off in the early 90's and Microsoft off in the late 2000's. These companies had one thing in common: giant war chests of cash and legacy businesses that were still pumping out money by the truckload. It's not like a startup/small business that's riding on a razor's edge, these companies have so many resources they can keep searching for a way "out" of their predicament for a very long time until they finally land on the right combination of people, vision and timing.
We see it now with Google being the "evil empire". They haven't had a real hit since Android and seem to be floundering, but online ad revenue is such a huge geyser of cash they're gonna be fine for a very, VERY long time.
IBM is irrelevant today but they saw a resurgence in the 90's after Gerstner took over.
The point is Intel still holds a huge amount of market share and has an enormous amount of cash. The new CEO seems to be universally praised in these comments and look at what Lisa Su was able to do at AMD.
> x86 is on the way out and can't compete on power or performance. In the end RISC won, it just took a while to get there.
What exactly do you think is the fundamental limitation of x86? Most chips do lots and lots of crazy logic to go from instruction set to microcode; it's hard to imagine that the variable nature of x86 instructions is the limiting factor.
At the very least, having a big fat decoder costs you power and space; measuring the tradeoff in raw decoder performance probably depends on being an engineer working on it at Intel or AMD (or Apple). At the very least, Zen 3 is a cutting-edge microarchitecture on a cutting-edge node with lots of transistors to play with, and it still decodes only half as many instructions per cycle as the M1 - an unbelievably wide CPU, which is probably the way forward.
The "x86 tax" (if it exists, I guess) is usually estimated at somewhere on the order of 5% to 10% - which is a lot of money at scale but probably not enough for a total rethink.
Judging from the M1, it seems the most important thing the big fat decoder costs you is not being able to manufacture billions of processors for small devices (where the decoder would be a much larger fraction of the CPU), develop experience and optimizations and capital based on that, and then have those benefits carry over to your high-end processors.
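A toy way to see the decode problem (my own illustration, not real decoder logic): on x86 you can't know where instruction k+1 starts until you've at least length-decoded instruction k, while a fixed-width ISA hands a wide decoder every boundary for free.

    // Toy illustration of why variable-length ISAs resist wide decode.
    // The length function is a stand-in -- real x86 length decoding
    // (prefixes, opcode maps, ModRM/SIB) is far messier.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    size_t insn_length(const uint8_t* bytes) { return 1 + (bytes[0] & 7); }

    std::vector<size_t> variable_width_starts(const uint8_t* code, size_t n) {
        std::vector<size_t> starts;
        for (size_t pc = 0; pc < n; pc += insn_length(code + pc))
            starts.push_back(pc);  // each start depends on the previous length:
        return starts;             // a serial chain hardware must speculate around
    }

    std::vector<size_t> fixed_width_starts(size_t n) {
        std::vector<size_t> starts;
        for (size_t pc = 0; pc < n; pc += 4)
            starts.push_back(pc);  // every boundary known up front, so a decoder
        return starts;             // can slice off as many insns/cycle as it has slots
    }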
Devil's advocate from a throwaway for reasons: Pat foisted VMware on a small startup trying to find its engineering footing culture-wise, after being invited in to advise on the business side (loan his name, mostly).
Cost a bunch of engineering time and forward motion, internal politicking. Eventually it got binned after months of not getting what we wanted out of it. There was no technical reason for it.
Maybe hardware really is his thing, but that quid pro quo hurt productivity.
I worked at VMware when Pat first became the CEO. Pat is very much an engineer. If you ever wanted an engineer as your CEO, then that's Pat. That comes with some goods and some not-so-greats. Pat isn't very inspiring, at least he wasn't when he first became CEO. But I always got the feeling that he genuinely loves engineers and is more comfortable around them than anything else. I once hosted a fun little engineering challenge (building bridges out of spaghetti). It wasn't a fancy event -- just a bunch of engineers having fun. Pat actually agreed to come by to hand out the awards at the end. I left VMware partly because I'd been there so long and partly because I wasn't excited about it anymore. I felt its best days were behind it. Well, Pat proved me wrong by a wide margin. If no-nonsense engineering is what you need to win, then Pat is the right person for the job. It's a good day for Intel, I think.
My purely personal view is that VMware's second act has begun and it'll do well. Pat deserves some of the credit for accepting that Kubernetes would be the future of the business and throwing his weight behind it.
There are aspects of Pat Gelsinger's leadership that I dislike, but they're orthogonal to his management style and foresight. He's been effective.
I will echo this. After we were acquired, I was incredibly hesitant about Pat. Over the past year, I’ve come to believe he is an excellent leader who has fantastic vision and insight even if we disagree on many things.
I’d be incredibly happy to have him if I were an Intel employee.
Intel's downfall in recent times has been forgetting "Only the paranoid survive". They strayed too far from customers and focused on the competition (even their customer feedback was a redirect of what the competition was up to). I doubt there will be cultural changes.
>A few years ago, back in 2016, Intel did a “RIF” (reduction in force) of about 11%. Intel had previously done a significant reduction way back in 2006 of about 10%
>In an industry that runs on “tribal knowledge” and “copy exact” and experience of how to run a very, very complex multi billion dollar fab, much of the most experienced, best talent walked out the doors at Intel’s behest, with years of knowledge in their collective heads
Bottom line: Intel created the hole by itself and jumped into the deep end.
Plus, Intel has for many years paid only slightly above-average wages. If I recall correctly, they targeted paying at about the 55th-60th percentile. The problem with that is that the FAANG companies will literally pay their engineers twice that.
Many of the competent people I knew at Intel have left (not all), while many of the incompetent people I knew are still there.
It's actually more complex than that. When a big company wants to reduce its workforce, it will fire the bottom performers and offer retirement packages to its oldest employees, who are at or near retirement age -- sometimes as much as a full year of salary.
The intuition is that older employees cost more, and by cutting them you can reduce your payroll more significantly while doing what looks like smaller employee cuts from the outside. This is often viewed favourably by investors because on paper it doesn't look as though the company is stalling (head count is still high, costs are down). The obvious issue is that these older employees are not easily replaceable, and you end up losing more velocity in the long run than originally anticipated.
The above is more applicable to traditional blue-chip businesses where workforce movements are more limited. For software engineering (which Intel is not really) your assumption is correct and once cuts are announced a lot of your great engineers will jump ship.
Those older employees better be replaceable! Many will be gone in a few more years because they retire anyway, so you should have a plan in place to save their knowledge.
The above applies to everyone. When I was an intern, the company folklore was full of horror stories because the last guy who knew anything about a very profitable product had died suddenly. (The product was for mainframes: clearly near end of life, but it was still mission critical for major customers and had to get minor updates.)
I've also seen important people find a better job. Even when an offer of more money gets them to stay, my experience is they always wonder if they made the right decision, and so they are never again as good as they were before.
Moral of the story: don't allow anyone in your company to become irreplaceable. This is good for you too: it means you won't stagnate doing the same thing over and over.
The vendor of one of the security scanning tools we had subscribed to for the past six years told us they were shutting down the product because the PM left and no one else in the company wanted to take it over. I don't know how many SEs and other resources they had on the product, but it was a major part of their offerings. They didn't even have an alternate upsell (or downsell, for that matter) to offer us. So strange that one person leaving could collapse an entire revenue stream like that.
I would venture that this is indeed how these execs rationalize the whole process. No one should be irreplaceable, and therefore the move makes sense. Even though management bashing is trendy these days, most managers/execs know these things and are not the idiots we satirize them to be.
Even Swan is probably not an idiot; he simply expected everyone to struggle as much as Intel on the 7/10nm node, and when TSMC just breezed past Intel and AMD came out with a much better product than anticipated, he found himself in very hot water.
(He could also be quite the idiot, I don't know him)
I work (but not for much longer) for a tech company that has done an ER package twice (and is well known for layoffs). The result is that the company has a sort of corporate Alzheimer's. It knows the inventory of all the things it used to know, but doesn't actually seem able to recall anything. The trajectory and outlook are not good.
I don't think tech companies can afford this practice. So much knowledge resides in their senior talent: the hard-won, experience-based understanding and the things gleaned through opportunistic exposure that they seem to voluntarily surrender.
Pat was a "boy wonder" at Intel and could do no wrong — until Larrabee. I was working at Intel at the time and remember always assuming that Pat would someday be CEO. His departure came as such a shock to a lot of us, as does his return.
There was also Intel's whole pursuit of frequency -- they demoed what I think was a 10GHz chip at IDF at one point (and Itanium was essentially an ILP-oriented design) -- and their resistance to multi-core. Some of it was doubtless Intel convincing themselves they could make it work. But they were also under a lot of pressure from Microsoft, who didn't have confidence that they could do SMP effectively -- at least that's what a certain Intel CTO told me. (Ironically, multi-core didn't end up being nearly the issue that a lot of people wringing their hands at the time thought it would be, for various reasons.)
I'm not sure Itanium was a technical failure; to me it always was a business-model failure, as that CPU was co-developed with HP and essentially became a dedicated HP-Oracle box, and by the time the ecosystem was opened up it was too late.
The heavy reliance on the compiler for ILP was an "odd choice", but not something that was unsound in principle.
If the ecosystem had been more open from the get-go and more vendors had been involved, it would have had a much better chance of taking off.
And if nothing else at least it was something new.
The biggest disappointment I have with Itanium is that it, and later Larrabee/Xeon Phi, kinda pushed Intel even further into their own little x86 box when it came to processing units.
I think that failure is also why they haven’t really done anything interesting with Altera.
They do have Xe graphics now. It's the closest Intel has come to a competitive non-x86 part in recent memory. It feels kind of forced though; everyone else has their own CPU+GPU now, including Apple/Nvidia in addition to AMD/QC, so why wouldn't Intel? They also have OneAPI.
It would be interesting to see an explicitly JIT-based approach to ILP.
It wasn't. A few years earlier, I was the product manager for a line of large NUMA systems, which admittedly had far larger near/far memory latency differences than multicore systems do. Commercial Unix systems could still have issues with write-intensive workloads, but Windows was pretty much unusable for configurations that had far memory. Things were likely better by the mid-2000s, but Windows was definitely still behind Unix in this regard. (I don't really know where Linux was at that point, but IBM at least had done work in OSDL on various scale-up optimizations.)
In fairness it's not like Microsoft are alone in that. Single-thread performance is still incredibly important. The Mill focuses on single thread performance almost exclusively for that reason: nobody ever made the mythical auto-parallelising compilers we were all supposed to have by now, not even for Haskell. The big wins for exploiting parallelism in most ordinary software have been just scaling up lots of independent single-threaded transactional workloads via sharding, and massively concurrent runtimes like the JVM where you can move all the memory management workload onto other cores and out of the critical paths. In terms of ordinary programmers writing ordinary logic, single-threaded perf is still where it's at which is why the M1 has 4 big super-wide cores and 4 small cores rather than 32 medium cores.
Is the board leaning into the usual MBA Moves 101 and turning Intel into a "services company", gradually going fabless and milking those sweet patents? Or will they put the work boots on and start building an actual tech company, with the people who can actually save them on the payroll, cutting back on the usual contractor meat grinder and inviting the vast armies of middle-management and marketing drones to leave?
All of the other fabs are so busy they can't handle Intel's chip production; plus, Intel's technology is wound around their own labs. Switching wouldn't be easy and might end up being a failure and taking the company with it.
The argument I've seen for outsourcing is that everyone uses machines from ASML et al anyway, so retooling to run a different company's silicon may not be as impossible as it seems.
I think there's an issue with helping Intel temporarily, because if you're TSMC you'd rather use your capacity to serve long-term partners rather than helping Intel bridge the gap to 7nm only to get dropped a couple of years from now when they get their chips in order.
Here I think of it as “committing to the direction,” the way you “lean into” a tight turn on a bike or motorcycle. It’s similar to “doubling down” or other euphemisms for committing harder to a course of action.
"Lean in" was popularised by Sheryl Sandberg. It's popular with management types and wildly unpopular with folks who feel that "just work harder" coming from an actual billionaire is a bit patronising.
To me this description sounds like a specialist high performance computing company rather than a consumer technology company. That may be a perfectly reasonable market to be in, but is that type of company worth $200bn? I'm not sure.
Roughly 40% of their revenue is consumer chips where, apart from some games optimisation, they are no longer standing out from the crowd, and the leader is arguably Apple, with AMD doing well. The next ~30% of their business is servers, where there may be a significant number of HPC clients, but the bulk of this is again likely to be VMs running non-Intel specific software, and this market is starting to realise that Intel is nothing special here.
Looking at their revenue breakdown, I struggle to put more than 20% into the things that you mention they are great at. Should they focus on this? It would lose them much of their market cap if they did.
>Roughly 40% of their revenue is consumer chips where, apart from some games optimisation, they are no longer standing out from the crowd, and the leader is arguably Apple, with AMD doing well.
You lost me at Apple. Apple owns around 15% of the PC market space and almost the entirety of that is Intel-based systems. Outside of HN, nobody cares about the M1 chip, it isn't a selling point to my mom or her friends. If someone at the Apple store recommends it they might buy it instead of an intel-based system but it definitely isn't something they're seeking out.
The only threat Intel has right now in the consumer space is AMD, and it's a very real threat. AMD won both Sony and Microsoft console designs, and the mobile Ryzen 5000 chips released at CES look to have enough OEM design wins to put a serious hurt on Intel in 2021.
Even if Apple goes 100% M1, there's the other 85% of the market that Intel is likely far more concerned about.
I get your point, but I think the M1 is more significant as proof of what is possible than because I think everyone will buy a Mac.
I can absolutely see Qualcomm offering laptop chips off the back of the M1's success. They may not be as good, but they might be much cheaper. I can also see Microsoft pushing Windows on ARM harder, and rolling out their own chips at some point.
Also once the market gets "used to" multi-architecture software (again), I think we'll see a renaissance of chip design as many more players crop up, because of the lower barrier to entry.
Maybe. Apple has solved the chicken-and-egg problem regarding software compatibility by forcing everyone to move to ARM in the near future. Microsoft will not abandon x64, though, so there are far fewer incentives to port things. Also, Microsoft cares far more about compatibility (e.g. 32-bit software). That means a lot of things will run under a Rosetta 2-like system (probably less efficient if you need to support 32-bit as well). If you add the fact that Qualcomm is unlikely to match Apple in performance, the resulting product might not be very appealing compared to a classic x64 system.
> An ARM transition isn't a fait accompli just because Apple introduced M1 at the lower end of the Mac lineup. There's a huge lump of inertia there.
If you're just referring to Apple's future, the endgame is already a done deal. They don't roll back something this fundamental once it's public.
Their last Intel computers will be manufactured and sold within 3 years; I'd wager a month's salary on that if I had it lying around.
If they can't make it perform at the top level like AMD, which seems unlikely but obviously it's too soon to tell, they'll make up for it in other ways or simply live with the performance hit until they can.
Apple doesn't believe in inertia. Their partners and customers can either make the transition or get left behind. Microsoft's business model depends on making enterprises happy; Apple doesn't care.
True, but Apple has been the harbinger for a LARGE number of advances in consumer computing adoption. Mouse/windows desktop interface? Apple. Font libraries? Apple. USB? Apple. Touch interfaces? Apple. Modern smartphone? Apple.
The list goes on and on. M1 could be such a harbinger. Who knows if anyone else can compete, and likely Apple will be in the pole position for some time, but Apple has shown everyone it can be done.
I'm not so sure, since a lot of software is somewhat cross-platform now anyway. You need to port the JVM, .NET, and Chromium (for electron), and you've effectively ported a large part of the desktop application space already.
Web applications are portable if you have a browser.
Many modern languages that compile to native, like Rust, trivially recompile for multiple target architectures.
Others are dynamic and don't need recompilation.
Of course some major software is written in non-portable C and C++. But the question is whether some emulation isn't acceptable here.
You misunderstand the point of the M1 out of Apple, and for that matter the Graviton2 instances out of AWS. What was demonstrated in the marketplace is that the biggest tech companies are now able to develop in-house processors that are more cost-efficient and more performant. These processors are based on ARM and have minimal licensing overhead, as compared to buying Intel or AMD chips for their vast fleets/products.
If AWS and Apple can do it, soon other very large companies will, and in a few years even OEMs will be able to develop their own chips. The market for high-end gaming is unlikely to be touched, but the vast consumer market is going to be eaten by custom-made ARM-based chips.
So in a world where processor design becomes a commodity, what does that mean for Intel and AMD? And what does that mean for the overall datacenter, consumer markets?
> So in a world where processor design becomes a commodity, what does that mean for Intel and AMD? And what does that mean for the overall datacenter, consumer markets?
Processor design is already a commodity, and has been for many years. Any company with the cash can buy a license to the ARM64 instruction set and the reference core design, and have someone like TSMC or Samsung manufacture it.
These designs haven't taken over desktop market from x86 yet because those designs simply weren't performance-competitive with what AMD and Intel are pumping out, and it's not clear that that'll change anytime in the foreseeable future.
Apple knocked it out of the park with the M1, but they've been kicking ass for years, including their competition in the ARM processor space. Just because Apple's processors happened to use an ARM instruction set, doesn't imply that an ARM revolution is upon us.
For who? My mom uses her laptop at home 99% of the time, if the battery gets low she plugs it in. She needs a battery that will last 1-2 hours for the 3 times a year she flies.
You can find a place to plug in at basically any coffee shop or library you go to. My mom isn't spending 10 hours in a datacenter, so it doesn't really matter to her if the battery life is 3 hours or 12. For the average consumer, battery life has just been another stat on the spec sheet for years now.
> For the average consumer, battery life has just been another stat on the spec sheet for years now.
I'm not sure I agree with this. I think if you asked someone whether battery life was a priority, they might say no. And if you asked them to rank tech specs I'm not sure it would necessarily be that high either. But the experience of using a laptop with a noticeably better battery is, for me, quite likely to be one of those things that you didn't know you were missing, even if you just charge it every now and then.
Even if your mom doesn't care about battery life, she will probably buy a product that is also sold to buyers who do care about battery life, and if the product can meet those buyer's needs with a smaller battery, then your mom's laptop will be lighter.
So, does your mom also not care about the weight of her laptop?
The Macbook air with intel CPU weighs 2.75 lbs, the Macbook air with M1 weighs 2.8 lbs. The macbook pro is 3.0 vs 3.1 lbs.
The weight is a non-factor. Quite frankly until you start cracking 5lbs nobody even cares in my experience. Apple's maniacal focus on making laptops skinnier and lighter has done a disservice to the entire product line, which they seemingly acknowledged with the 16" Pro.
Apple had the option of making the M1 Air lighter (by choosing a smaller battery) but decided instead to greatly increase battery life. The point remains that Apple has choices that vendors of laptops reliant on Intel CPUs do not have, which might end up eating into Intel's market share.
The M1 isn't competing on many of the same axes as an Intel or AMD CPU, because it's necessarily packaged inside of an entire computer built around it. That computer is a Mac, which might be different from the purchaser's current OS, so they decide not to switch; or they already bought software for Windows and want to use it there; or they're married to the Microsoft ecosystem, etc.
Seriously, I predict we will see Apple successfully attack the sub $1000 laptop market within two years. They sell the iPhone SE with an A13 for $399 so they could easily do so now they no longer have the 'Intel tax'. And the products will be a lot better than the Windows equivalents.
Most home users might use Office and that's about it. The allure of the Apple ecosystem will be strong especially for iPhone users.
All fair points, but I think that with higher margins it tips the balance towards market-share growth. Key issues are 1) can they make an acceptable margin on a good $800 laptop, and 2) can they genuinely grow market share significantly rather than just lowering the average selling price - i.e. can they maintain the distinction between $800 and $1000 products? Given what we've seen them do on iPhone and iPad, I bet the answer is yes to both of these.
Bear in mind too that after a generation or two they can put the last gen M chips in cheaper products.
I think there's an economic principle here (and I don't know the sign), but this is all assuming a frictionless vacuum - in practice, Apple cannot sell 25% more M1 Macs if they lower their price to $800, or whatever, since their marginal costs rise in that case (because TSMC is totally booked!).
It's the same reason battery life is so critical in EVs. Smaller batteries need to be charged more often, which eats up more of their remaining lifespan. It's a downward spiral, which means 50% extra capacity up front can be worth 100% extra lifespan in 3 years.
Laptop batteries are also expensive in terms of money, weight, and bulk which puts Intel into a much larger bind.
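A back-of-the-envelope model of that spiral -- every number here is an invented assumption, just to show the shape of the effect:

    // Toy model: a bigger battery cycles less per year, so it fades more
    // slowly, and the usable-capacity gap widens over time. Numbers invented.
    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double daily_wh       = 30.0;    // assumed energy drawn per day
        const double fade_per_cycle = 0.0002;  // assumed 0.02% capacity lost per full cycle
        for (double pack_wh : {40.0, 60.0}) {  // small pack vs. one 50% larger
            double cycles_per_year = daily_wh / pack_wh * 365.0;
            double frac_after_3y   = 1.0 - 3.0 * cycles_per_year * fade_per_cycle;
            std::printf("%2.0f Wh pack: %5.1f cycles/yr, %4.1f%% (%4.1f Wh) after 3 years\n",
                        pack_wh, cycles_per_year, 100.0 * frac_after_3y,
                        pack_wh * frac_after_3y);
        }
    }

With these made-up numbers, the 50% larger pack is left with roughly 53 Wh of usable capacity after three years versus roughly 33 Wh -- a 60% gap, from a 50% head start.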
My mom would love to be able to use the same apps on her phone and her computer. I've been thinking about suggesting her next computer be an Apple one ever since she got her first iPhone, and this makes it easier. Your Mom May Vary.
Not sure of the source of your 15%, but I'm willing to bet that by value it's more - no Celerons in Apple's lineup. Plus, Apple wouldn't be going down this route if it didn't expect to grow market share - and although people don't care if it's M1 or i5, they do care if the experience is better.
Then Apple's success with the M1 will spur others - I would not be surprised if Microsoft follow them down the same route.
> Apple wouldn't be going down this route if it didn't expect to grow market share
Market share is not what Apple is about. Apple is about profitability and control. Their move to their own silicon is driven by improvements in the reliability of their build pipeline (no more waiting for tick-tocks and whatnot) and tighter control/integration of their whole stack (same arch on phones and PCs). That these chips happen to perform so well that they are potential market-growers is a welcome coincidence.
Growing market share profitably and without impairing the brand is what Apple is about. That's why we have the iPhone SE. The M series lets them do that with the Mac now. And more Macs implies more Apple services sales.
It's certainly partly defensive - they were frustrated with Intel - but Apple would only make a move of this scale if it thought it created business opportunities for them.
I think you are vastly underestimating how hard IT departments will kick you if you request or bring a device that cannot properly execute the company-critical legacy Windows x64 software. Like SAP, for example.
(SAP is the largest non-American software company by revenue and does business management, workflow automation, and bookkeeping)
My prediction is that outside of hipster startups, M1 will have no effect on business laptop sales.
But then again, the current version of SAP is S/4HANA, and unless you are a developer or admin for it, you will be using their Fiori-based web clients, so a normal browser is enough. I am a developer on an S/4 rollout project in a Windows-only shop, but for our future system landscape I could see the normal people using the SAP systems from any kind of laptop or tablet. Even we are testing iPads and laptops, at least.
>My prediction is that outside of hipster startups, M1 will have no effect on business laptop sales.
This is what everyone always says but the iPhone kicked off a whole BYOD trend that has ended up with many high-value employees caring a lot about what tools they have to use, and a lot of software engineers want Macs.
I’ve used (and developed software for, and helped administer) SAP from my Mac on and off for 20 years. Tends to be easier these days now that everything is mobile and web.
A lot bigger companies than “Hipster startups” use Macs, this tends to start with the C-suite and people follow suit.
My point also wasn’t that everyone was going to switch to Mac. It was that M1 proves you can build a “better in every way” PC with an ARM architecture. Linux ARM is also being pushed by AWS heavily from the server side with impressive price/performance numbers.
Windows on ARM has been failing for many years, but I suspect this is going to change. Microsoft has a talented virtualization group, where the Hyper-V roots go back to the Connectix Virtual PC team that built PPC/x86 emulation for the Mac. I suspect they can pull off something like Rosetta - they just need a chipmaker to collaborate with. Might even be Intel! Pat Gelsinger is an outside-the-box thinker.
You remind me of folks that thought the iPhone / iPad would have no impact on Blackberry sales, as real businesses need keyboards.
No, SAP is more like an operating system for your factories. It contains EVERYTHING, from payroll to inventory management. Think of it more like an Exchange server plus all the Microsoft Office apps combined. To connect to the Exchange server and get all the features, you need Outlook. It's the same with the SAP database and SAP client GUIs.
The official GUI is C++ and Windows only. They do have a Java port for other OSes, and some 3rd party GUIs, but none of that is feature-complete or even halfway there.
> I think you are vastly underestimating how hard IT departments will kick you if you request or bring a device that cannot properly execute the company-critical legacy Windows x64 software. Like SAP, for example.
Many companies long ago set up some beefy Citrix servers for those applications.
I think you are vastly overestimating the "revolution" of what the M1 represents to the industry. Apple isn't selling it to any other PC makers, and corporations aren't pivoting away from Microsoft for a CPU. Every single ARM chip that's been targeted at the Windows world has produced yawn-inducing performance.
That's not what I'm getting at. What I'm getting at is that saying "even if Apple goes 100% M1, there's the other 85% of the market that Intel is likely far more concerned about" is somewhat far-fetched. Apple can make their own desktop chips because it's an easier problem than making a phone chip (performance/thermal efficiency), but Intel can't make one because they've sacrificed thermal efficiency time and time again -- not just this time but a decade ago. Remember Prescott?
I think that is why Intel should be (and probably is) worried about Apple. They will make Intel redundant by having solved a harder problem which their own problem becomes a subset of.
I agree with your market breakdown, but surely not with your assessment.
In the consumer segment, you have regular people trying to make vacation videos with software like Adobe Premiere and Adobe Media Encoder, or Magix. NVENC quality is bad. AMD is horribly slow. The only fast, high-quality encode is with Intel's dedicated CPU features, which both apps heavily promote to their users.
And the 30% that you mention that run VMs... Wouldn't they be pretty happy if Intel added dedicated CPU instructions to make VMware better?
I agree that for the work that I do, AMD is as good as or better. But people doing highly parallelizable tasks like compiling are the minority.
I think you might over estimate the prevalence of video editing software like this. Adobe don't appear to sell consumer versions anymore, it's only pro subscriptions now. Magix is sold at a "vacation video friendly" price, but doesn't mention Intel in their marketing material.
I just don't think the market for home devices is thinking about their video encoding time when they buy a laptop, but I do think they'll use an M1 Mac and find it surprisingly fast, or hear from a friend or family member that they are really good.
Intel just haven't been optimising for the main user experience seen by these people, or those writing "normal" server software either. They've been pushing AVX512 instead, which looks good for video or things like that, but not for regular use-cases.
Plus, as a user of the software, I can tell you that if you tick the "Hardware Acceleration" checkbox on AMD, a popup will tell you to buy a supported Intel CPU and then turn the checkbox off again.
BTW I'm picking Magix here because in the local electronics store, that's the video software that you can buy as a box and that is featured in bundles with Intel laptops. So if someone clueless walks in there and says they need video editing, this is most likely what they will end up with.
Not sure where you're getting that these days? Absolutely in the days of Bulldozer, but AMD's Zen 3 architecture has taken even the single core lead from Intel, not to mention the multi core lead they've held for several years now.
Uh, those numbers are more than three/five years old at this point. Beyond comments on the test bench not being properly set up, Ryzen has improved significantly since then.
AMD's latest consumer-level chips significantly beat Intel's on both price and performance. When talking about prosumer video-editing performance, the Ryzen 9 5900X, the second most expensive "new" chip from AMD, is a 3.4% performance improvement over Intel's most expensive "new" chip, the 10980XE. Additionally, the 5900X retails for $549 USD while the 10980XE retails for about $1,000 USD.
The compiler tricks can only get you so far. I administered an HPC cluster, and we had a lot of software dependent on MKL and BLAS. However, with the performance boost AMD seems to put out, open source libraries like BLIS and OpenBLAS are attempting to fill the gaps. Trust me, no one likes the Intel lock-in if there is an alternative that is even close in performance.
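For context on why the swap is plausible at all: MKL, OpenBLAS, and BLIS all expose the same CBLAS interface, so moving off MKL is often just a link-line change. A sketch (header naming varies by vendor; MKL ships its CBLAS declarations in mkl.h / mkl_cblas.h):

    // Sketch: the same GEMM call builds against MKL, OpenBLAS, or BLIS.
    // Swapping vendors is mostly a link-line change, e.g.:
    //   g++ gemm.cpp -lopenblas   (OpenBLAS)
    //   g++ gemm.cpp -lmkl_rt     (Intel MKL, single dynamic library)
    #include <cblas.h>  // MKL users would include <mkl_cblas.h> instead
    #include <vector>

    int main() {
        const int n = 512;
        std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);
        // C = 1.0 * A * B + 0.0 * C
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A.data(), n, B.data(), n, 0.0, C.data(), n);
    }

The lock-in was never the interface; it was that MKL's kernels were simply faster (and notoriously slower on non-Intel CPUs). BLIS and OpenBLAS closing the performance gap is what erodes it.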
> In my opinion, the secret sauce that makes Intel dominate certain industries is software. And it has been for some years already
Intel's secret sauce is inertia.
The assumption is that Intel is not challengeable, and that the world doesn't need a company to dethrone it either.
But that assumption is no longer true, and the counter-movement is in full swing.
The future of computing is not on the CPU, if you ask me. It will move from general computing to heterogeneous computing, and possibly application-specific chips/FPGAs. MKL is fast, probably, but GPUs and ASICs will be even faster.
> If you want to further reduce latency with parallelism, you need TBB, the Intel thread building blocks.
That's not how latency works, and there is nothing too special about Intel's TBB library. It is a big, bloated group of libraries that doesn't actually contain anything irreplaceable. Don't be fooled by marketing or by people who haven't looked under the hood. It should also work on AMD CPUs.
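For anyone who hasn't looked under the hood, here's roughly what a TBB loop looks like next to the standard C++17 equivalent (my own toy example); neither cares whose CPU it runs on:

    // A typical TBB parallel loop and a standard-C++17 equivalent.
    // Both are vendor-neutral; TBB runs fine on AMD CPUs.
    // Build sketch: g++ -std=c++17 scale.cpp -ltbb
    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>
    #include <algorithm>
    #include <execution>
    #include <vector>

    void scale_tbb(std::vector<float>& v, float k) {
        tbb::parallel_for(tbb::blocked_range<size_t>(0, v.size()),
                          [&](const tbb::blocked_range<size_t>& r) {
                              for (size_t i = r.begin(); i != r.end(); ++i)
                                  v[i] *= k;
                          });
    }

    void scale_std(std::vector<float>& v, float k) {
        // GCC's libstdc++ actually implements these parallel algorithms on top of TBB.
        std::for_each(std::execution::par_unseq, v.begin(), v.end(),
                      [k](float& x) { x *= k; });
    }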
> Raytracing? Intel embree.
Embree is a cool convenience, but also doesn't marry anyone to intel cpus.
The CEO of a semiconductor company needs to have an engineering background, IMO. The tech is too complex and too important to the business to have a CEO who doesn't understand the nuances. Wish Pat all the success at Intel. We need Intel to do better.
Disclosure: I worked at AMD for about a decade, although that's a while back now. It is traditional in semiconductor companies (or was, anyway) to have a triumvirate:
1) the "outside" guy (sales, know the customer)
2) the "inside" guy (operations, now the employees)
3) the "tech" guy
Any of these three can run the company, but whichever one it is, they need to have the other two near at hand, and they need to listen closely to them. The problem comes when, as at Intel and perhaps also at Boeing, you have options (1) or (2) in charge, and they're not listening to the person who is position (3) in the triumvirate, or they don't have a triumvirate at all. If the person in position (3) is in charge (as at AMD currently), they will still need to have experts in (1) and (2), and they will need to listen to them.
Gelsinger earned a master's degree from Stanford University in 1985, his bachelor's degree from Santa Clara University in 1983 (magna cum laude), and an associate degree from Lincoln Technical Institute in 1979, all in electrical engineering.
Intel is in shambles. The whole 10nm thing is a fiasco at this point. Who could've pictured 10 years ago that Intel would've even considered outsourcing their fabrication? Intel's core strength was their seemingly unassailable lead in chip design and fabrication.
From various articles over the years it seems that what's happened to Intel internally is fairly typical: internal fiefdoms, empire-building, turf wars and the like. This is something you have to actively prevent from happening.
This is going to take someone with deep experience in fab engineering to figure out, not a bean counter. And it should probably involve a massive house cleaning of middle management.
And no, the answer isn't just another reorg. Unless you actively prevent it, reorgs become a semi-constant thing. Every 3-6 months you'll be told how some VP in your management chain you've never heard of, let alone met, now reports to some other VP you've never heard of or met. There'll be announcements about how the new structure is streamlined and better fits some new reality. And 6 months later you'll go through the same thing.
This is a way of essentially dodging responsibility. Nothing is in place long enough for anyone to be accountable for anything working or not working.
The world really needs a company like Intel to succeed, if only to take some of-- let's say-- the geographic concentration risk out of such a high percentage of the cutting-edge device manufacturers depending on TSMC's fabs.
No harm to VMware on this, I think. As it pushes into SaaS it really does need a big shake-up. Not that Pat wasn't good, but it is an old company with a lot of inertia; some fresh blood and fresh ideas could really be beneficial.
Hopefully the next CEO is as committed to all the other stuff, like treating employees well and the community work. The best part of the company isn't the tech at all (in my opinion).
I'm not sure I agree with that. I've been there over 8 years now, and the only difference since Dell bought EMC is the cross-selling. In terms of product development, staffing, benefits, etc., they have had no influence. Contrast that with EMC, where a lot of benefits and nice perks like free coffee were cut to match Dell.
Maybe? I don't know what it was like when Gelsinger started. But it's the least credential-driven high-status professional market in the country, and maybe the world. It's hard to start your career with a first job at Google without a top 10 CS degree, but that might be the only example; it can be your second job without one.
Pat was fairly effective at VMW. The fact that he himself is not new to Intel is probably beneficial for the company, but there's a pretty substantial difference between being CTO and actually steering the ship.
I'm not comparing Gelsinger to Steve Jobs in a general sense, but Jobs wasn't new to Apple when he returned -- and yet Jobs' return to leadership was transformative for the company.
You could argue that VMware was slow to move into containers and should probably have leveraged Pivotal better before pulling them back in. The former is something of an innovator's dilemma thing. The lack of focus on developers and applications was a VMware blind spot going back to the Diane Greene days. But "fairly effective" underplays how well VMware has done over the past 10 years or so overall. There are things they should have been more aggressive about, especially with the benefit of hindsight, but they've been a very successful software company/subsidiary.
Can confirm. With multiple other amazing things he brought to the table, he was super excited about things we were doing no matter how small or big. He truly respected everyone. Definitely will be missed.
Great to see a technologist back in charge of Intel. This was a company that was founded and CEO’ed by top-tier engineering elite. AMD’s CEO is an MIT PhD with long track record of technical achievement earlier in her career. Intel needs the same.
People underestimate what it means for Intel when Microsoft and Apple are going to produce their own chips and AWS is pushing their ARM offering. It does not take much for the industry to switch to ARM as default for desktop and server.
They might both be in trouble. AMD is certainly part of Intel's headaches, but their loss of relevance also has to do with an overall market move away from x86. They never mattered on phones or tablets, and Apple just replaced them in its laptops with an in-house ARM-based design that apparently outperforms them. That's something others are likely to emulate. ARM was something for cheap, under-powered hardware in the past decade. Not any more. It's now a high-end option.
I'd say there's a great opportunity for Nvidia to step up and cut out the middleman (Intel and AMD) and put together some great ARM based hardware running Windows on consumer oriented hardware and Linux in data centers. Given that they just bought ARM, that probably is something they are actively planning to do. Apple just showed us all that the M1 can run circles around Intel and AMD. That's a trick others can pull off too. Certainly Nvidia.
I'd say both Intel and AMD need to make a move away from x86 soon or risk being marginalized. In AMD's case, RISC-V might be a nice lateral move. That, paired with their GPUs in an SoC, might be attractive for a wide range of devices. The alternative would be licensing chip designs from their main competitor and making them richer in the process. Sticking with x86 is a losing game long term. People care less than ever about binary backwards compatibility.
You have it backwards. AMD is steadily taking market share from Intel in both the desktop and server space. ARM is only a potential threat at the moment, with a tiny market share outside of mobile and IoT.
Granted, ARM is a huge threat to Intel over the longer term, but AMD is taking market share now.
If we (very incorrectly) assume that 100% of Intel's sales were through Apple and were all going to be replaced by Apple, then even given current trends, AMD and Apple would be at roughly 50/50 before long. But OS X looks to be around 16-17% of desktop operating system share, so the roughly 83% of buyers still buying x86 are going to continue choosing between Intel and AMD.
Apple M1 has some people believing that suddenly everyone will stop buying x86 chips and buy Apple unless someone releases a competitive ARM based chip for Windows/Linux. I'd like to see some evidence for that premise, though.
Apple's messaging has always been "better designed hardware + better software experience", and yet they still haven't breached 20% market share. A CPU that increases battery life (and yes performs very well) but still can't be bought with your Windows PC isn't going to rapidly change the market share. It could erode it over time, but this is certainly just conjecture, not proof. Let's revisit the conversation in 5 years and see what Apple, Intel and AMD have done, technologically, and what consumers have decided.
First of all, ARM is already dominant on non-laptop mobile. Secondly, ARM is growing rapidly for workloads in places like AWS and many think there will be a lot of growth on-prem as well. Apple's symbolically important for Arm in the sense that it shows switching to a non-x86 architecture for a laptop is possible but they're fairly irrelevant from a volume perspective.
If one assumes that x86 remains the dominant architecture in the industry then, yes, it's basically a zero sum market share game between Intel and AMD. But lots of people don't think that represents reality in the second half of this decade.
I don't know the history of non-laptop mobile CPUs. I assume they have pretty much always been ARM; please correct me if I'm misinformed. That hasn't factored into desktop/laptop considerations in the past. Currently, it seems to factor only into Apple's plans for unifying their OS X and iOS stacks.
ARM is growing in servers, but AMD is as well. It's not clear though how either smartphone or server architecture will affect desktop/laptop purchasing for consumers en masse.
Of course, if we're talking about 2025-2030, any predictions I make are a roll of the dice at best. But right now I don't think any player has enough momentum to allow absolute certainty about 2025 and beyond. There is a lot of inertia behind x86 in desktop/laptop, and so far Apple's MacBook Air/Pro and Mac mini are the only high-performing options on ARM.
I like AMD but I'll be happy to see any technological progress that makes significant improvements to our quality of life.
Just five years ago I didn't consider laptops viable for gaming, and now I do most of my gaming on one. The Macbooks with M1 seem like they are capable of some level of gaming, but not "max setting" 1080p gaming, so it's not yet an option for someone like me to switch. But when I'm working, I do most of that on a powerful but very quiet desktop. The M1 chip would not improve my quality of life on my desktop because the efficiency of the chip won't really change anything for me. There's no compelling reason to swap out of my custom built machine to a Mac Mini.
Anecdotes are very personal. And so are computers. For many consumers for whom an M1-based machine would work, the benefits are all but lost anyway, for a variety of reasons. They don't make decisions based on CPU efficiency, just on what they are used to and what features they need and want. If a feature is really life-changing for a particular person, they might be OK with change, e.g. switching operating systems.
For a developer, it's either easy to think about switching because you know everything is cross-platform, or it's perhaps impossible to switch because you use exclusive software.
> non-laptop mobile CPUs. I assume they have pretty much always been ARM.
All kinds of things over the years. For example, Qualcomm's Snapdragon line.
And Intel was definitely pushing to get into mobile at one point. They made a big deal about processor compatibility from mobile up through the server. I still remember at one IDF, they made a big deal about how you wanted to run Intel for mobile (this was pre-iPhone) so that Flash would run the same everywhere.
I agree that it's hard to make predictions more than a few years out, and certainly x86 has a lot of inertia. On the other hand, there's a lot more abstraction than there used to be, and we know there's going to be a lot of heterogeneity anyway (GPU, DPU, TPU, FPGA, SIMD instructions, etc.) given the slowing of CMOS process scaling. So I don't think it's too big a stretch to imagine a more varied processor landscape. (I expect ARM to gain share, although I don't expect it to dominate on servers--though I have colleagues who do expect that to happen.)
Was Pat the one responsible for the near-complete degradation of VMware support over the last X years? I had enough interactions with it to finally work out that they were funneling the low-paying support contracts to a completely different set of crappy support people, but if you paid the premium you got something much better. Maybe it was prior to him and things have gotten better, though.
This spoils any optimism I may have had for Intel. Reading that this came due to a search for "strategic alternatives" is damning. Intel is like Boeing: they make one thing, and they used to make it very well, silicon. If Boeing told you they were looking into alternatives to making planes, would you be optimistic?
It feels like they're throwing in the towel on being the leader, giving up on trying to catch up process wise and will look to maximize their existing revenue. RIP
TFA doesn't say it came due to a search for "strategic alternatives."
It says "Dan Loeb's Third Point hedge fund in December urged Intel's board to explore "strategic alternatives."
That is typical hedge fund pressure attempting to squeeze (short term) money out of their investment. The article has two more paragraphs consisting of the hedge fund's cheap shot quotes.
It isn't clear Intel's board succumbed to the hedge fund pressure. Changing from a finance-oriented CEO to a technically-oriented CEO (Gelsinger) is taking Intel back to its roots rather than "exploring strategic alternatives." Intel was founded and led for many years by technically-oriented CEOs.
As far as Third Point's public commentary goes, you're right.  (Who knows what they said in private.)
"Strategic alternatives" often means "split up the businesses." Silver Lake proposed something similar to AMD in 2015. In the end, the Silver Lake deal didn't happen  and AMD stock is up 45x since then.
AMD chose to spin out GlobalFoundries in 2008, several years prior to the 2015 Silver Lake near-deal, becoming a fabless company on the tail end of a 90% slide in AMD stock. AMD's share price continued to decline and remained in the low single digits for seven years after AMD and GlobalFoundries parted ways.
My understanding is that Silver Lake wanted AMD to split apart its product segments. They were going to buy 20-25% of the company, but the deal never materialized.
Intel's vertical integration is an asset IMO, especially in a supply-constrained environment like the present.
I don't see much similarity between the story of Intel and the story of Boeing.
Boeing made an entire line of defective airplanes that could autonomously kill everyone aboard under normal usage. Then, a respiratory virus hammered the travel industry.
In contrast, the semiconductor industry is seeing more demand than ever before, and presently undergoing a shortage. Intel has mismanaged 10nm and 7nm, but the company maintains a majority CPU market share overall and an even wider margin for servers.
I think Spectre and related vulnerabilities are directly comparable to the 737 MAX debacle, insofar as they reflect poor engineering decisions made directly against the customer in favor of short-term profit. Of course Boeing’s decision was far more devastating and led to the loss of 346 souls, but the pathology that gave rise to both situations (executive hubris, failure to tackle accumulated tech debt) seems quite similar.
Workarounds for Meltdown and Spectre can impose a nontrivial performance hit for certain workloads, but it's tough to compare a 20-month grounding of 737 Max with patched security vulnerabilities that the average consumer neither understands nor directly observes.
Why not? Obviously the outcome of Boeing’s decision directly led to many deaths, and I am explicitly not trying to compare the two on result. However, the corporate actions which created these two scenarios are quite similar and I think it’s useful to compare them, since we’ll definitely see it again and again in the modern business world.
The thing about the MAX is that the deaths were directly observable and obvious. The problem with Spectre/Meltdown is that they might have been exploited for years, leading to wars/economic harm that the general public will never be able to tie back to Intel.
On the balance of probabilities I think the Max debacle ended up doing more harm, but IMO Intel (and AMD/ARM to a lesser extent) did get off easy because of the extremely technical nature of the issue.
Edit: I also agree that the max issue has a much more direct Executive Directive -> Harm line to draw. I don't think the Intel CEO went down to engineering and said anything comparable to "create a new version of a plane that needs no new training while having completely new larger more efficient engines, even if that's physically impossible"
Eh. Some of the vulnerabilities in that class were Intel-specific. But there is a whole class of vulnerabilities that potentially can hit any design that has speculative execution--and people just aren't OK with shutting off speculative execution because the performance hit is so big.
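For anyone who hasn't looked under the hood: the heart of the Spectre v1 class is a disarmingly ordinary code pattern. A sketch of the textbook bounds-check-bypass gadget (illustrative only; the names and probe stride follow the style of the original paper's example):

    // spectre_v1.cpp: the classic bounds-check-bypass gadget (illustrative).
    // If the branch predictor has been trained with in-bounds x values, the
    // CPU may speculatively execute the body for an out-of-bounds x; the
    // dependent load pulls in a cache line indexed by the secret byte, which
    // a timing side channel can recover even after the misprediction is
    // rolled back.
    #include <cstddef>
    #include <cstdint>

    std::size_t  array1_size = 16;
    std::uint8_t array1[16];
    std::uint8_t array2[256 * 512]; // probe array: one cache line per byte value
    std::uint8_t temp;              // keeps the load from being optimized away

    void victim(std::size_t x) {
        if (x < array1_size) {                 // architecturally safe...
            temp &= array2[array1[x] * 512];   // ...speculatively leaky
        }
    }

    int main() { victim(0); return 0; }

And that's the dilemma: the airtight fix is to stop speculating across branches like that, and nobody wants to pay that performance bill.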
> It feels like they're throwing in the towel on being the leader, giving up on trying to catch up process wise and will look to maximize their existing revenue. RIP
If they were going from a former CTO to a former CFO, I would agree. They are going to a former CTO from a former CFO. How does this make it seem like they are looking to maximize their existing revenue rather than trying to get someone in to "fix" their issues?
When McDonnell Douglas reverse-acquired Boeing, the balance changed. Emphasis on share price, gutting wages while doing stock buybacks, shady business practices (bribes for defense contracts, gutting oversight), and so forth.
Of course, there's always more to the story. Like I have no idea how much to blame the Clinton administration's push for consolidation and monopolies (removing competition). Or how to explain the quixotic quest to outsource and offshore core competencies.
So as casual observer, it seems like Intel similarly lost its way.
Where does Intel go from here? We are in a tech wave of proprietary, vertically integrated chips: cloud providers building their own CPU designs for their service offerings, consumer computer makers integrating CPU, hardware, and software (Apple). Isn't Intel going to keep getting squeezed out of the market?
He didn't become CEO of VMware until 2012 which is 6 years after AWS launched. Even if VMware had decided to directly go after AWS, they would have started way behind. Also, for good reasons and bad, it's unclear to me that would have been a great move. IMO, you can look to other trends they could have latched onto more quickly that would have been better fits.
Gelsinger was pushed out as CTO after horribly failing to address AMD's competitive threats. I have no idea how he wormed his way back in, but this does not bode well for Intel's future: Gelsinger is proof of the "Peter Principle", being promoted too high.
EDIT: I was a bit harsh, toned it down.
EDIT 2: This is probably petty, but I can't ignore the fact that there was a significant hubbub at Intel over his use of the "Dr" prefix. He scrubbed it from his bio and internal pages when it was pointed out that the degree didn't come from an accredited university and was honorary. He also caught flak internally for having the pope bless a wafer for Intel's future success. It was very weird, especially given the high percentage of Muslim engineers at Intel and its focus on neutrality.
As an Intel shareholder (I only bought when it tanked after quarterly earnings this year), I can say this is exactly what I was waiting for.
I'm not super sensitive to exactly who replaces Bob, though Pat seems like a decent choice having read about him now.
I will say I was ready to unload the shares if Bob had been replaced with another MBA, though. Having a non-engineer lead an organization like Intel was a disastrous choice and seriously makes me question the board's judgement.
Isn't it a bit petty to deny him that? It seems a hell of a lot harder to me to get an honorary doctorate than a normal PhD, and I can think of a lot of PhDs who deserve their doctor title a lot less than Stallman.
Kanye West, Ben Affleck, Jon Bon Jovi, etc, all have honorary doctorates. Why does nobody call them Doctor? For the same reason you shouldn't call Stallman Doctor - because you don't do that for honorary doctorates.
Well that’s just social convention, and I don’t place much emphasis on that. I reserve the right to address anyone I choose as “doctor”, and if Stallman prefers to be addressed that way I’m happy to oblige. He has certainly earned it in my book.
Same goes for Benjamin Franklin btw, whom I see you left out of your list.
I don't know. It's the "appeal to authority" that makes it silly in my eyes. It's sort of the opposite of the scientific mindset, and that bothers me. Stallman or Franklin calling themselves "Doctor" I can definitely live with.
If no one (including MIT) decided they could grant him a PhD on the basis of publications (rather than the more traditional thesis route) I don't see why we should use the honorific. Also, I know plenty of distinguished engineers who don't have a PhD, and they seem fine with it. It is the extremely vain who travel around being feted for faded laurels who I have no time for.
This article doesn't do him justice; he was one of the key people behind the 386 project, which saved Intel during its first major crisis. After that, he went on to lead the 486 project and kept moving up the management ladder until the Larrabee fiasco.
That is so typically Wikipedia. 10,000 word articles on trivial subjects and basically a stub listicle for an article on the longtime CEO of one of the largest (but "uncool") software companies in the world.