The very first paragraph:
> “Taiwan Semiconductor Manufacturing Co. is working with Google and other U.S. tech giants to develop a new way of making semiconductors more powerful.”
The title is unfortunately misleading: it makes it look like only Google and TSMC are “pushing the boundaries”, whereas other tech companies are involved as well.
Do you have numbers on that? Google was one of the largest manufacturers in the world for a couple years there. It wouldn't surprise me if they were still up there.
Absolutely. They were reportedly the largest player in the server space around 2012 [1] (and consumed the entirety of their own production). Since then they've had multiple high sellers across multiple product categories; the Google Home line alone ships tens of millions of devices a year. Google may be better known for software, and it's not all on leading-edge nodes, but they produce a lot of HW.
I wouldn't be surprised if Google's TPU's pushed them far over the edge on this. I'd bet TPUs running inference for Ads models can basically print money for the company.
They've also already been building their own network chips for some time. So, given the scale of their datacenters, I think it's entirely reasonable that they'd outpace nvidia.
I think you may have too high expectations of how many TPUs Google really needs, and how many of those Google is using for training Adsense models.
There are such an insane number of processors being made, we have Apple, Amazon, AMD, Nvidia, Google all buying their stuff from TSMC. I would be very surprised if Google was the biggest of them all.
Around 2012ish-2014ish, Google was producing massive amounts of hardware for their own internal infrastructure. They didn't stop (and actually added popular consumer devices), but the overall market has simply exploded in size even faster.
I’m naive on this topic but what makes TSMC pretty much the only fab on the world that can do this? Do they have technology the west doesn’t? Know how? Why isn’t there a fab in the US, it can’t be just labor costs?
EUV is hard and expensive. You need massive clean rooms and a huge laser. Afterwards you need to scale up and actually become profitable, which is even harder. One EUV machine costs 145 million dollars, and you are not going to buy a second one before the first one works well enough. That is why TSMC now has half of all the machines. Many manufacturers gave up making fabs and went to TSMC for production.
I do think Taiwan's proximity to China is a huge help, both for fast turnaround of tools/machines and for workers. China itself is banned from having the latest chip technology. Taiwan is often used as a proxy for western countries to be near China.
$145 million is pocket change to Big Tech. It must just be that they don't think it would be profitable enough to compete when they can buy. If they ever stopped being able to buy, I'm sure they'd get to work real quick.
I could also see Apple in-housing their chip fab at some point sooner, just for the sake of continued vertical integration. It doesn't hurt that they have the deepest pockets too. The interesting question is whether or not they'd ever take orders from third parties.
$145M is only the machine cost. Then there are people, raw materials, IP, design, technology, software, maintenance, repair, upgrades, integration, licensing, power, and much more.
I'm guessing the split here is 20/80: $145M is the machine cost, and then it takes $800M or more over the lifetime. That's a billion dollars right there. A billion dollars of revenue is unicorn level, but here we're talking about putting aside a billion in operational expense just for the chip. Then there is the cost of building the actual product on top of this.
Then there is the useful life of the machine. No product wants to be on the same process node for 5 years, so they have to recoup that billion fast.
The only way would be to generate extra revenue out of the machine by sharing it with other companies. Ah but now you have to cater to their demands as well. Better have a profitable model around this. But well, that's what TSMC is...
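Using the rough figures floated above, the recoup math sketches out something like this (the wafer throughput and tool lifetime are pure assumptions for illustration, not real TSMC numbers):

```python
# Back-of-the-envelope amortization using the rough numbers above.
machine_cost = 145e6          # one EUV machine, per the thread
lifetime_opex = 800e6         # guessed lifetime operating cost (the "80" in 20/80)
total = machine_cost + lifetime_opex   # ~$0.95B: "a billion dollars right there"

useful_life_years = 5         # assumed, before the node is obsolete
wafers_per_year = 20_000      # hypothetical throughput of one tool

cost_per_wafer = total / (useful_life_years * wafers_per_year)
print(f"break-even cost per wafer: ${cost_per_wafer:,.0f}")  # $9,450
```

Whatever the real numbers, the shape of the problem is the same: a fixed billion-dollar bucket that has to be spread over however many wafers you can sell before the node goes stale.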
As of their most recent balance sheet, Apple holds $90Bn in cash and cash-equivalent assets, plus another $100Bn in marketable securities. They could probably raise another several tens of billions of dollars with bond market issues if they wanted.
If Apple goes into manufacturing their own semiconductors, they could absolutely bootstrap the whole shebang and have tens of billions of dollars left over.
> $145 million dollars is pocket change to Big Tech.
That's the Taiwan price. What's the domestic multiplier on that price? What is the time to market price to pay if you're trying to pull this off far from the competitive suppliers and experienced tool makers in Asia?
Andy Grove explained all of this 10 years ago[1]. Today we see Intel toying with farming out high end fab work; another milestone in the decline of US technology that Andy wrote about.
> $145 million dollars is pocket change to Big Tech. It must just be that they don't think it would be profitable enough to compete when they can buy. If they ever stopped being able to buy, I'm sure they'd get to work real quick.
Even if they have money, I doubt they have the capability.
They may have billions of USD for pocket change, but they can't make a freaking working video call app, or a web browser that eats less than 1GB of memory just to draw a few pictures and some text layout.
Feels like exceptionally short-term thinking, especially given Taiwan's proximity to China and the precedent of China flexing its muscles around expanding its borders: Hong Kong, the South China Sea, the border with India, etc.
> I’m naive on this topic but what makes TSMC pretty much the only fab on the world that can do this? Do they have technology the west doesn’t? Know how? Why isn’t there a fab in the US, it can’t be just labor costs?
Just steadily doing what it has done for 30 years straight, without any compulsion to chase "the next big thing"? A rather unglamorous industry, and for most of their history they did not chase the bleeding edge, letting more moneyed players bleed each other.
I read some of those very extensive market research reports (market research books, really) which go to semiconductor companies for a few thousand bucks a pop. It's mind-boggling how much consideration, calculation, and planning goes into the decision to sign off on $10B+ of fab spending. Forecasts and technology analysis go at least a decade forward, and they dive further than simply tech, into things like social trends, etc.
So yes: the industry is very hard, very obscure, and works like an ant repellent on the "next big thing" chasers of Silicon Valley culture.
It is just the nature of the whole technology sector. Once the capital cost of entry and of catching up to the market far exceeds what any company could risk, the industry consolidates to a single entity.
The same could be said of operating systems in the 90s with Microsoft. It's not just technology; it's the cost of being involved, and unit cost economics.
Intel needs to step up their game if they want to stay relevant. I am sure their execs know it so I can't wait to see what they will offer in the next 5 years.
The trial balloons currently being floated by Intel forgo the pursuit of cutting edge process nodes in favor of third party fabs. Intel's existing foundries will serve older nodes while external foundries will build the high margin devices.
They've been carefully dropping hints of this since July. Through the first week of November the headlines were "Intel to decide soon" whether to outsource, with a decision supposedly appearing sometime in January. This looks like the sort of precision expectation setting one would expect of an experienced zombie corp CEO like Swan.
So don't bet on Intel making any great comeback in fab tech; they've decided they can be 'relevant' without it.
There are essentially 3 leading fabs left: TSMC, Samsung, Intel.
Samsung has yield issues, Intel has yield issues, TSMC somehow has no issues. GlobalFoundries has already dropped out of the race. TSMC is selling every wafer they make, and continues to invest heavily to solidify its lead.
ALL of these fabs depend on ASML machines. IIRC last year they delivered just over half of the machines that were ordered.
Chip foundries run extremely complex processes, and it's hard to get introspection into exactly what's going wrong at the other vendors. And if it were clear as day and night, those vendors probably would have fixed their problems and we wouldn't be having this conversation in the first place.
But if you'll allow your question to be answered with pure rumortown and supposition:
Intel has ingrained management issues that don't allow for reorganizing to systemically address their 10nm-and-under process issues, as the little fiefdoms that control different parts of the process spend more time throwing each other under the bus than collaborating.
Samsung isn't far off from TSMC (about a single node). They fell a little behind as TSMC grabbed a bunch of their contracts (most notably Apple), which left them with not as much capital to invest in their newer process nodes at a rather critical point. At a bare minimum they'll be kept alive by TSMC's customers as a viable second source (if you squint hard enough) and as a negotiating tactic (see Nvidia's 30 series on Samsung as a shot across TSMC's bow).
GloFo canceled 7nm-and-under R&D. I wouldn't be surprised if they bounce back at some point, when all of the EUV gotcha workarounds are more or less public knowledge and the equipment doesn't have prices on the order of a small country's GDP. If we hit fundamental limits at ~1nm and Moore's law really dies, then they should be able to jump back in with relatively less work than a newcomer and be the budget option once leading-edge chips become a commodity.
I think the ASIC design industry is only going to start getting serious when Moore’s law finally slows down enough that we can get some breathing room. Current HW design is more akin to shipping an Electron-based CRUD, than some finely hand-crafted assembly.
I think in 50 years we will find out that various state security services have been interfering with IC fabrication tech. Things like deliberately moving critical things by a few nanometers so entire production runs are broken...
Funny you say that: if my memory is correct, ASML was planning on sending an EUV machine to SMIC in China.
Right before shipping the machine, the warehouse caught fire and the machine was destroyed.
Then the US government came in and convinced ASML to implement an export ban for Chinese companies.
>> I think in 50 years we will find out that various state security services have been interfering with IC fabrication tech.
That would not surprise me at all. I was hired on contract once to help with a system (not ICs) that was having issues, and my work was directly obstructed (a coworker said sabotaged) by the guy (foreign) who designed one subsystem. At the time I thought maybe it was his ego not liking that his boss brought me in to help and I was succeeding. When it became clear from multiple incidents that he was subtly sabotaging me, I started to wonder this very thing. I showed evidence to our boss, who said "my hands are tied" because of the corporate structure at the time. My work did make it into production and I moved on for other reasons.
ASML machines are just a small part of the complete equation. Even if everyone uses the same working ASML machines, the other parts can have issues.
Intel's revenue is big enough that they must have some kind of warchest to fall back on. Obviously money isn't everything but it doesn't hurt.
They can probably still hammer AMD on both software and documentation; note that roughly a quarter of their sales are to data centres.
It will be interesting to see what happens to x86 post-M1 (i.e. ARM coming of age). I suspect not much, but I think the world has changed enough to have another stab at VLIW (for example), even if the cost/benefit is pretty marginal.
The current CPUs already are VLIW in a fashion; it's just that they have a hardware JIT for it. That's what a superscalar OoO CPU basically is: a piece of HW that generates VLIW-like instruction bundles from the normal instructions that come in. A superscalar execution port is basically one sub-instruction of a VLIW. Explicit VLIW only helps you save that piece of silicon from the chip, and that's not much in the grand scheme of things. It used to be, but not anymore.
Static compile time VLIW means one cannot really make it wider anymore, or narrower. Dynamically doing it on runtime means a cheaper and smaller core can just be narrower. Remove an instruction slot for one integer ALU? Fine, no issues. Everything still works, albeit slower by that much. Make a beefier chip? Perfect, it got faster by the amount of instruction level parallelism that was available.
In addition, a compiler cannot really see across function boundaries (except where it's statically determinable). A JIT can. Modern chips have reordering windows of hundreds of instructions; M1 apparently goes up to over 600. That's quite a lot of stuff it can dynamically reorder across. A compiler might not notice that, due to some weird dynamic call not visible at compile time, there is now an FPU instruction that could be inserted here; an OoO processor can.
Because of that, explicit VLIW is basically dead outside of some highly specific applications, like DSP and whatnot.
In a similar fashion, ridiculously wide vector units were obsoleted by the approach pioneered by GPUs. Just add a few things to allow masking based on branches and you get the SIMT approach. Write as if it were scalar code and it'll run on HW with vector lengths from none at all up to whatever; Nvidia has 32-wide vector units, as an example.
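That masking trick can be sketched in plain Python (a toy model of SIMT divergence, not any real GPU ISA; the lane count and example branch are made up):

```python
# Toy SIMT execution: one instruction stream, N lanes, with a mask
# to handle a divergent branch. Both branch paths execute once each;
# the mask decides which lanes commit results.
LANES = 8
x = list(range(LANES))          # per-lane data
out = [0] * LANES

# scalar-looking source: "if x % 2 == 0: out = x * 10 else: out = x + 1"
mask = [v % 2 == 0 for v in x]  # lanes taking the 'then' side

for i in range(LANES):          # 'then' path, masked-on lanes only
    if mask[i]:
        out[i] = x[i] * 10

for i in range(LANES):          # 'else' path, with the mask inverted
    if not mask[i]:
        out[i] = x[i] + 1

print(out)  # each lane got its own branch's result
```

The cost of divergence is visible here too: the unit ran both paths over all lanes, which is why heavily branchy code maps poorly onto SIMT hardware.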
Even more importantly, and I'm surprised you didn't mention it, static compile-time VLIW doesn't know what to do about memory access. A load might take any number of cycles depending on whether the line is in the L1, L2, or L3 cache. With static compile-time VLIW the compiler has to guess: if it's optimistic, the whole CPU stalls; if it's pessimistic, you're much slower than you would otherwise be.
I believe this is the real thing that make superscalar OoO (ie. JIT to VLIW in silicon) win.
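As a toy cycle-count model of that point (all latencies invented for illustration):

```python
# A static (in-order / VLIW-style) schedule must stall on a slow load,
# while an OoO core executes independent work underneath the miss.
LOAD_LATENCY = 10    # cycles for a load that misses to L3 (unknown at compile time)
INDEPENDENT_OPS = 6  # ALU ops with no dependence on the load, 1 cycle each

# In-order: the load issues, everything behind it waits, then the ALU ops run.
in_order_cycles = LOAD_LATENCY + INDEPENDENT_OPS

# OoO: the independent ops execute during the load's miss latency.
ooo_cycles = max(LOAD_LATENCY, INDEPENDENT_OPS)

print(in_order_cycles, ooo_cycles)  # 16 vs 10
```

The compiler can't schedule around the load statically because it doesn't know whether LOAD_LATENCY will be 4 or 10 or 300 on any given execution; the OoO scheduler finds out at runtime and fills the gap.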
That's a great point to emphasize. Because memory access times are inherently more or less nondeterministic.
I consider it to be roughly as important as being able to reorder instructions across non statically determined branches and function calls. And both of these expose the fundamental weakness in explicit VLIW, if it's essentially nondeterministic it cannot be taken advantage of.
Then we naturally have some other benefits, like hyperthreading. Which basically is just compiling two instruction streams together on the fly.
The performance on particularly memory-bound workloads is why I chose it as an example rather than a prediction.
For it to work statically (although it really could be halfway in between), it would probably require a complete paradigm shift away from the current way we think about cpu caches
Regardless of whether it'll work or not, I'll be very happy if the mill ever makes it onto a chip.
You're using terms to mean things that they usually do not mean. Calling superscalar OoO scheduling a jit-ed VLIW is misleading at best. The whole point of VLIW is that you don't need the control logic and, more importantly these days, the power cost of scheduling each instruction by itself.
From a single thread performance standpoint, the important part is that OoO scheduling is able to dynamically schedule around cache misses to effectively keep more memory accesses in flight, extract more memory level parallelism. In principle you could make an OoO VLIW CPU, but that would negate most of the power benefit while hamstringing the scheduler with unnecessary dependencies. Where in-order VLIW shines is when memory accesses are predictable, like DSP code. There you get an order of magnitude power efficiency gain.
GPUs are effectively still in-order CPUs with large SIMD instructions, some useful instructions to make masked execution simpler and a specialized language and compiler to hide this model from the developers. GPU manufacturers calling these separate data lanes threads is just misleading marketing BS. There is no independent instruction pointer for each lane.
That's why I tried to use it more as a metaphor. The whole point of explicit VLIW (EPIC was what the Itanium folks called it) was to save that scheduling HW. But nowadays that piece of HW is a relatively minor part, so it's no longer worth saving in a general-purpose CPU. As this thread is about general-purpose CPUs for direct consumer use (not a controller in a hard drive or whatnot, but a full-fledged CPU you run arbitrary programs on), we're talking about chips like Itanium when it comes to VLIW.
I do not disagree that VLIW is great for things like DSP, where power consumption is of the essence. One can get ridiculously high perf/watt by going explicit VLIW. I just don't see any way we'd see that approach in general-purpose CPUs again.
GPUs do not need to reorder instructions to hide memory latency; it just happens on a different level. While a single threadgroup (that single instruction pointer controlling the SIMD unit) will not get reordered at all, one has multiple threadgroups in flight. So if one group stalls at a memory load, the unit just schedules a different threadgroup, because one generally has tons of them in flight. It's all about throughput. One could think of this as an in-order CPU (from the viewpoint of a single thread) but with ridiculous amounts of hyperthreading (one thread stalls, we can pick an instruction from another thread, but never from the stalled thread).
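That latency-hiding scheme can be illustrated with a toy throughput model (the stall length and group counts are made-up numbers, not any real GPU's):

```python
# Each threadgroup alternates 1 cycle of compute with a fixed memory
# stall. One group leaves the SIMD unit idle most of the time; with
# enough groups in flight the scheduler always finds a ready group.
STALL = 4  # cycles a group waits after each compute cycle (illustrative)

def utilization(groups, total_cycles=1000):
    ready_at = [0] * groups       # cycle at which each group is runnable again
    busy = 0
    for cycle in range(total_cycles):
        for g in range(groups):   # issue from the first ready group, if any
            if ready_at[g] <= cycle:
                busy += 1
                ready_at[g] = cycle + 1 + STALL  # compute, then stall
                break
    return busy / total_cycles

print(f"1 group:  {utilization(1):.0%}")   # 20%: mostly idle
print(f"5 groups: {utilization(5):.0%}")   # 100%: latency fully hidden
```

This is why occupancy (groups in flight) matters so much in GPU programming: the hardware hides latency with parallelism instead of with an OoO window.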
I’ll second this. My M1 Air is a freak of nature. I’m so excited for this next wave of computing. Lots of things that need setup Dev tooling wise and much work underway to get it supported but my god man... and it’s passively cooled.
I have a beast i7 gaming rig too and my M1 beats it at everything but graphics performance.
What I think is that Intel will continue to work their enterprise business until this "ARM wave" starts to pull up. Engineers at Intel probably have some ideas.
I genuinely don't think M1 is a new wave of computing (yet), both in the sense that it's apple so it's not up to you what you do with it but also because ARM and X86 aren't all that different these days. Ultimately M1 is "Company with enough money to buy the US Navy, follows existing path to logical conclusion, succeeds".
If a company other than apple makes a fast arm chip that'd be very interesting, though. Nvidia have probably thought about it.
Time will tell if you're right. I do think that what's happening under the hood of the Apple chip is real and is causing more than a few executives in the silicon world to lose sleep.
This reminds me of when RIM got their hands on an iPhone for the first time.
I think the hints were there if you were in the industry. CPU scaling these days is largely power (TDP) scaling, and Apple's chips had been showing leading single-core performance compared to Android/Qualcomm chips for many years with very low power usage. After that you could see how they could scale that up relatively easily.
Also, Apple built in some special instructions that make macOS software faster, like core operations of Objective-C, so some of this might be very Apple-runtime-specific optimization.
The first iPhones used generic Samsung processors, and the power was quite similar to the hardware in BlackBerry's contemporary products.
They probably just didn’t realize that one day of battery life is enough for most use cases and is less important than a real browser and the other features brought by the iPhone.
I feel like you're glossing over a bunch of things here.
1) The display on the iPhone was gigantic compared to any other device on the market. The fact the iPhone had the battery life it did was amazing.
2) The software on the iPhone destroyed anything RIM had ever put forward.
3) The iPhone was a much better piece of hardware, aesthetically and performance-wise, than anything RIM had ever put out. Their closest competitor was the Pearl, and it was garbage compared to the iPhone.
4) Lastly, RIM had 10-year development cycles. Pivoting to attack the iPhone meant ripping apart 10 years of planned development, much of which was already executed. Pivoting that hard is just not something companies at the scale of RIM are designed to do.
RIM should've seen the writing on the wall, shed their hardware business, and gone all-in on supporting BES on the iPhone. I digress...
I don’t know if there was a future for them anyway. What could they have done that Apple could not?
Did they even have a way for third parties to make apps? Even though the iPhone originally didn’t have that the jailbreak community pretty quickly showed that was the way forward.
We know that some components of a chip benefit more from die shrinkage than others. Intel is having yield problems with die shrinkage. Chiplets are a thing.
I can’t for the life of me figure out why Intel hasn’t already hedged their bets and started working hard on hybrid chips. Especially since this now appears to be a long term industry direction. If the next step blunts the consequences of your current problems, why not jump on it?
Intel has EMIB [0], which they've used in the i7-8809G [1]. And they seem to be going all-in on chiplets... for 7nm [2]. At their current pace it'll be a few years before that.
Generally caches, especially those using the full 8 transistors per cell instead of trying to get by with 6, don't use all that much power. One obvious idea is to design a chip so that the hot logic layers are on top, right next to the heat sink, and the cache layers are underneath. A 3D topology means everything can effectively be closer together, for lower latency and lower information-transport energy, while you can make the caches bigger at the same time.
For true 3D chips that are more cube-like people talk about putting liquid cooling channels into the structure but that would be a technology for quite a ways off.
Yes and no. You do run into thermal limits, but you can control how much power you draw by adjusting voltage and clock speed. Some things are more "transistor" hungry than "cycle" hungry.
That said, I expect the early uses might be in mixed mode (linear and digital devices in the same package) and things that benefit from a huge cache or pre-programmed ROM.
There are also things like diamond substrates that improve the thermal transfer characteristics as well which helps you to draw more heat from the package than just silicon. This was a feature of "silicon on sapphire" processes and perhaps we will see some "silicon on diamond" parts.
Under some constraints [1] [2] if you halve the frequency and double the number of execution units, the overall power consumption drops.
Therefore, you could gain performance or reduce power by stacking layers of silicon.
This is also true for things like power LEDs; within a certain range of their operating curve (current vs. output) you can reduce current by X% and lose less than X% of output. Put down two LEDs, then, and you get more output at the same current.
[1] architectures that scale efficiently to more execution units, like GPUs
[2] you're in a suitable region of the frequency-power curve
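The footnoted claim follows from the standard dynamic-power approximation P ≈ C·V²·f: halving the frequency lets you also lower the supply voltage, so two units at half speed can match the throughput of one at full speed for less power. A quick sketch (the voltage figure is illustrative, not from any datasheet):

```python
# Dynamic power model: P = units * C * V^2 * f (capacitance normalized to 1).
def power(units, v, f, c=1.0):
    return units * c * v**2 * f

baseline = power(units=1, v=1.0, f=1.0)   # one unit at full clock and voltage
wide     = power(units=2, v=0.8, f=0.5)   # two units, half clock, lowered voltage

# Same throughput (2 units * 0.5 clock = 1.0 work/cycle), lower power:
print(baseline, wide)  # 1.0 vs 0.64
```

The V² term is what makes this work: the frequency cut alone would only break even, but the voltage reduction it enables is quadratic.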
Yes, the last 10 percent increase in clock speed causes a 30 percent increase in power, or something like that. If you care about total performance instead of single thread it's probably better to add more cores.
I've always wondered why advances in 3D chips haven't taken advantage of nano-fluidics. One radical new chip design could use micron-sized coolant pipes for wicking away heat. The "coolant" could perhaps be gallium for rapid heat transfer. Micro-valves could open/close as parts of the chip draw more power, pumping more fluid through them and cooling them faster.
Every problem has a solution, we must be creative enough to discover & build it.
Microfluidics is one of the most complex fields to commercialize a product in, and is littered with dead startups. The short answer is that the scale change alters the behavior of everything: pumps and flow just plain don't work the way they do at larger scales, and the forces created by surface tension and hydrophilicity become large relative to the strength of the mechanical parts.
Nanoscale fluidics are much more efficient at transferring heat or mass. The issue is that we have only just started developing an understanding of the field, so our usual intuitions have to be rebuilt from scratch. As you mentioned, the forces become large enough that some variables can't just be ignored. But in my experience (working with microscale perfusion reactor arrays), they actually deliver much more, and more reliably, than a conventional system.
As in any cutting-edge technology: enough willpower and finance to explore the design space until a design with sufficient potential is found and its technical challenges overcome. Academia often has already explored the design space. The industry then tries to commercialize one of these approaches. Often, they turn out to be a dud: the challenges are too large, or they are solvable but the technology is ultimately impractical for some reason, or they are a dead end and thus won't be relevant for long enough to build a company on it. And then there are various non-technological human failure modes that can befall a company...
To get a 10x speedup you would need 10x the power; your 100W CPU would become 1kW! It's not a long-term winning strategy to chase better thermal dissipation. When we have 2D devices that run as fast as they can go while drawing 1W, then it's time to go vertical.
One could imagine using one layer while the other cools, then switching when the current layer gets too hot? But I guess the heat from one active layer would quickly propagate through the whole chip.
Or maybe using more transistors allows running them at lower power to complete the same work, which reduces the dissipated energy?
With a 2-d chip, you can't do much better than a big-old heat sink, and circulating a fluid. For 3-d structures, I wonder if actively moving heat to the outside with Peltier junctions would help.
A small outline integrated circuit (SOIC) is a surface-mounted integrated circuit (IC) package which occupies an area about 30–50% less than an equivalent dual in-line package (DIP), with a typical thickness being 70% less. They are generally available in the same pin-outs as their counterpart DIP ICs.
So what I take away from this is that one company, TSMC, is at the forefront of chip manufacturing (with Apple's M1 and now Google), while sitting a hundred miles from a Chinese administration that sees a recalcitrant province, not an independent country.
Our societies are like inverted pyramids- balancing on surprisingly small foundations that can tip over with more ease than we care to admit.
Intel has fabs in the US (and Israel), but they have some catching up to do with TSMC - a lot of things have gone wrong for Intel the last few years. I hope they'll get their shit together and bounce back.
China has its own fabs (SMIC) and has been investing heavily into this, but as they are now banned from buying the EUV machines it will take a while, possibly up to a decade until they get to smaller process nodes. Long term China will be able to fab cutting edge chips itself, and possibly surpass others - looking at their tech trajectory and state funding. The US tech sanctions against China are just buying time.
Could absolutely happen sooner. It is of high strategic importance to China: to their whole electronics industry, and to the AI dominance race, which also depends on specialised chips.
State level AI dominance race is one of the main drivers of the US/China tech sanctions, I suspect
I remember when people here were talking about Intel's "manufacturing edge" and that they should go into the business of making chips for other chip companies only a couple of years ago.
The idea seems laughable now (actually it was laughable to me at the time, too).
Why can't America build anymore? Is this a cultural problem, or a failure of public policy?
Healthcare, rail, aircraft, automobiles, shipping, pharmaceuticals, and now semiconductors. We can't make any of these things at scale anymore.
They turned GE into a financial institution. They sold Bell Labs off in pieces. Boeing can't safely update 1990s vintage air frames to accommodate modern engines. And yet, the market is on a tear. This is not sustainable.
The overarching reasoning for why manufacturing moved to China appears to be they have way more readily available skilled workers & that "The entire supply chain is in China now":
> "You need a thousand rubber gaskets? That's the factory next door. You need a million screws? That factory is a block away. You need that screw made a little bit different? It will take three hours."
> Apple had originally estimated that it would take nine months to hire the 8,700 qualified industrial engineers needed to oversee production of the iPhone; in China, it took 15 days [1]
Tim Cook on why Apple makes iPhones in China:
> The number one reason why we like to be in China is the people. China has extraordinary skills. [2]
Taking 9 months to diversify manufacturing of a $2T business is probably worth it.
I don't buy time being the problem. Once manufacturing is in the US, then what? It probably costs 10x+ what it would cost in China or India. That's the bigger problem.
> Taking 9 months to diversify manufacturing of a $2T business is probably worth it.
It's not clear why the most valuable company in the world should abandon its logistical strategy and industry-envied high margins to risk its focus and war chest on a gamble that would almost undoubtedly make it less competitive, with lower margins, increased prices, and fewer units sold.
As for diversity, iPhone parts are sourced from multiple countries, whilst most are assembled by Foxconn in China, they're a Taiwanese multinational manufacturer with factories in India, Thailand, Malaysia, the Czech Republic, South Korea, Singapore and the Philippines.
Apple already manufactures their larger, more expensive Mac Pro and iMac products in the US, but I don't see them manufacturing any iOS devices unless it's mostly automated by robots.
> Why can't America build anymore? Is this a cultural problem, or a failure of public policy?
America can't build anymore because America's executives have chosen to steal every piece of wealth they can take without leaving anything behind to build for the future.
American business culture has become far more interested in zero-sum rentierism and financialization that rapidly concentrates existing wealth in the hands of the wealthy than in technological innovation that creates new wealth over the longer term.
The collapse of American industry is the entirely predictable consequence.
This is a large part of the problem if you ask me.
I did some research a few years back into (somewhat unrelated) process management practices. What kind of stood out to me is that in the 50s-60s many businesses transformed their leadership: they went from having engineers grow into leadership positions to having dedicated managers, with business degrees.
Just speculation, but I feel this shift in management culture coincides with the loss of a lot of the West's technical production capabilities, and is closely followed by the money-grab culture.
> engineers that grow into their leadership positions .. to having dedicated managers. With business degrees.
This also happens in software companies, I feel.
The underlying issue, i suspect, is that engineers are not "people persons" - less able to manoeuvre politically, and "play the game". But in any societal organization, those who can play the political game can win.
Thus, the dedicated managers end up in those positions. They play the political game, and they get rewarded for it - because they control the reward scheme when they get to those high positions.
Meritocracy is an illusion that gets used by those playing a political game to make engineers feel they are not part of the game.
If investment (real investment, based on profit) is not worth it, and only the central bank wants a piece of the overpriced action, then we have essentially a centralized economy. This breeds stagnation.
This doesn't make sense - zero interest rates make investment worth more.
10% interest rate => "why invest in this new factory for a 10% return when I can just keep money in the bank?"
0% interest rate => "well if I want to make money, I need to invest"
Feel free to blame other things: maybe a 0% inflation rate (higher inflation means more opportunity cost of not investing), or QE. QE is similar to 0% interest rates but still different (a 0% interest-rate environment has persisted since the early 2000s, whereas QE only started after 2009), and it is much more problematic: it floods the market (but not the economy) with money and pushes equity prices through the roof despite shitty fundamentals.
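The 10%-vs-0% comparison above can be put in numbers with a toy net-present-value sketch (the factory's cost, payoff, and lifetime here are invented purely for illustration):

```python
def npv(cashflows, rate):
    """Net present value of a list of yearly cashflows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical factory: pay 100 up front, earn 10 a year for 15 years.
factory = [-100] + [10] * 15

print(round(npv(factory, 0.10), 1))  # negative at 10% rates: keep cash in the bank
print(round(npv(factory, 0.00), 1))  # positive at 0% rates: investing wins
```

At a 10% discount rate the project destroys value relative to the bank; at 0% the same cashflows are clearly worth funding, which is the mechanism the comment describes.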
This explanation just reworks the question into one of why western interest rates are so low.
Western central banks can't raise interest rates above the lower bound without triggering unacceptable unemployment or even deflation. Indeed, the highest safe rate has fallen in the wake of every recession.
To me, this suggests major structural problems in the western economies. It's hard to think of a single explanation that applies to all (pre-pandemic) zero-interest rate western economies, however. The economic foundations in Australia, Canada, the UK, the US, and the EU are different enough that there is no obvious single structural fault common to all of them.
> The economic foundations in Australia, Canada, the UK, the US, and the EU are different enough that there is no obvious single structural fault common to all of them.
There is one though: changing demographics - the median age of the population is going up and the percentage of the working population is going down.
The reason is class conflict, or more precisely its absence.
> They turned GE into a financial institution. They sold Bell Labs off in pieces. Boeing can't safely update 1990s vintage air frames to accommodate modern engines. And yet, the market is on a tear. This is not sustainable.
You can see very well what's going wrong here. The US economy and government institutions seem to be overrun by a class of self-proclaimed "value adders": heavy-hitter "pro-managers," financial "engineers," and, of course, everybody's favourite — lawyers.
It's very natural to conclude that an engineering company like GE shouldn't have been given to bankers, to be turned into a... bank; that Boeing shouldn't have been entrusted to outsourcing managers, to be turned into an outsourcing management company; that dozens of electronics companies shouldn't have been given to lawyers, to be turned into patent litigation services companies; and so on, and so on.
Yet the US, one of the few countries affording such a high level of employee control over companies and quite militant unions, ends up with workplaces whisked away from under their employees' noses.
I see a simple explanation: Americans decided, completely prematurely, to bury the hatchet of class warfare, and traded progress for peace.
No conflict — no progress.
I am not advocating for violent revolution right away, certainly not that. You do not kill people over the ownership of green paper; that would be morally wrong. But you also do not let such people simply live a comfy life without opposition.
Take a look at other countries: even though they may well lag behind the US on worker rights, lack a culture of union militancy, and offer overall worse compensation even for high-skill work, you don't see their factories turning into banks; and if they do, they quickly see workers voting with their feet.
From my experience, I'd say even in China you see factory workers changing workplaces when they feel "malaise in the air" at a company, and they don't wait for the company's malaise to turn into (their own) financial trouble.
Your analysis is good, but your synthesis seems absurd. Class conflict would be good for productivity? Clearly no. You made your case for the failure of a society run by financial parasites, but so far it seems they’ve defended against class conflict by converting a huge number of people from a productive and capable working class into financial dependents of income redistribution from the middle class. This has been the character and the result of all such ‘class conflict’ so far, and it only makes things worse.
I am not advocating for income redistribution at all. Bankers, MBAs, and lawyers are free to earn and hold on to their money as they do now, but you do not hand them every job and every position in government just for being who they are.
I'm rather advocating for fighting the massive loss of common sense where every nook and cranny of society/companies/government gets stuffed with those of an inappropriate class, and for being firm and forceful about that when and if needed.
Depends. SpaceX pretty much owns the global launch market, and has out built and out innovated every other country's aerospace companies.
Speaking of Musk, Tesla is now worth more than most automakers, is on a tear, and most likely will be outproducing everyone at making batteries.
I don't think this is purely an 'access to labor' problem. It's a problem of vision and risk tolerance. Musk is willing to try new approaches even if they fail (e.g. trying to make an almost 100% automated Tesla factory before having to retreat to using humans).
Silicon fabs are among the first industries to be almost 100% automated, so clearly the issue isn't access to labor. Intel's problem is more that they made a bad bet and didn't "fail fast"; they've been doubling down on bad bets and unwilling to be more dynamic.
When you look at aerospace (SpaceX, Sierra Nevada, Rocket Lab, Relativity Space), it's clear that small, focused teams can pull off amazing things, even in high-capex, high-risk, highly regulated industries.
The failure of GE and others is due to bean counters being put in charge instead of missionaries. Take GE's nuclear division: why are they still putting money into BWRs and PWRs? Decades went by, and they haven't dropped any money on pebble beds, molten salt, thorium, etc. And why wait for MIT's SPARC to limp along? If they had an Elon Musk figure, he would have put them on a race to build a prototype, even if it failed, in a year, not five.
Monopolies, and access to cost+ government contracts I think have killed a lot of innovation.
And if the big 3 automakers want to compete with Tesla, they need to replace their management with hardcore EV geeks who have passion and LOVE the space, and give them the resources to spin up a new division with all new people and processes. Otherwise, they're going to shamble along, and continue to try and milk their existing business lines until they die.
This is a management problem, not a labor problem. You can't solve it by shoveling more STEM grads straight outta college onto it. There's a tendency to think China's massive STEM-graduation firehose will magically mean leadership, but that's million-man-month thinking. Access to labor is simply not the problem. Companies with 100 employees outcompete companies with tens of thousands all the time (take WhatsApp vs my employer, Google, in the messaging space).
My phrase to explain it: Engineering is not a "cost center" it's an investment in the future. Do you want to cut investment or go big on the right ones?
Most big companies just want to collect rent rather than make investments.
The economic incentives for hired management are not aligned with innovation.
A hired CEO has incentive to make the company continue to be profitable during his/her tenure. This means conservative thinking and business continuity. Not taking big, risky bets that pay off multiple 100x in 10 years.
A new company, owned by the CEO level people, is not going to fall into this trap.
For healthcare we make plenty of it and lots of people from elsewhere in the world come to the US for operations. Our problems there are all with healthcare billing, not healthcare production.
US freight rail is actually pretty good. Why our passenger rail is terrible comes down to high construction costs and this: https://bikeeastbay.org/rail/fra.html. The costs are a combination of the rest of the world inventing techniques that the US considers Not Invented Here, and a penchant for regulation by lawsuit rather than regulation by bureaucracy.
The US and EU are the two places you can get really good aircraft; it's a major manufacturing export center for the US. China, for example, still can't make modern jet engines, and while the fuselage and electronics of their newest combat jets are fine, their speed, acceleration, and fuel efficiency are well behind US jets for that reason.
The US is a major pharmaceutical exporter.
The US is also a major semiconductor exporter; we're one of the three places in the world, along with South Korea (Samsung) and Taiwan (TSMC), still in the race, while something like a dozen companies have dropped out as capital costs keep going up.
Shipbuilding, yeah: US shipbuilding can't compete on the global market because US labor is relatively expensive.
Basically high wages mean that the US can only compete in manufacturing in high value industries like the ones you mentioned. Things like aircraft, pharmaceuticals, and semiconductors. But we're not going to be a textile exporter until the rest of the world gets to be as wealthy as we are now.
What obstacles would you face starting a competitive new big factory in US or Europe. Think it through and you'll figure out some of the answers. And watch "American Factory".
It's not that bad - Samsung and Intel are at most 1-2 years behind TSMC. An eternity in terms of product cycles for sure, but society would not fall apart if Apple had to redesign their A15 and M2 chips for Samsung 5nm next year.
Intel's 7nm line has non-economical yields and is effectively a failure. They're backporting their 7nm designs onto older processes at a huge penalty to buy time for a new approach, while putting out a couple of flagships on the 7nm line at a loss for appearances.
I think this is the 10nm process you are talking about here.
That's had a bunch of issues and is only now producing their premium mobile CPUs.
The 7nm issues aren't great but if it weren't for the 10nm disaster they aren't really more than normal slippage at this point. If it keeps slipping then they have problems.
Are they not delaying it year after year? I'm pretty sure every time I've read about Intel's 7nm over the years, it's either just around the corner or delayed again. I'd say Intel's 7nm doesn't exist, nor is it being worked on, apart from slides.
If Intel promises a deadline, it will likely take at least 6-12 months longer. I doubt they will be able to make 7nm chips in any reasonable numbers, if at all, by 2023. Maybe mid-2023.
It's like East and West Germany, or North and South Korea. Whether or not the opposing sides accept each other as separate states, the key point remains that these are split countries that will strive to reunify. In China's case the split is massively unequal, which might give the stronger side more ideas.
If anything, the semiconductor industry is a result of this. Taiwan/ROC needed to develop to survive, and it made its industry vital to the world, which makes it important instead of simply a puny island off the coast of mainland China.
Or not. If they do it properly, that means all Chinese getting equal rights (just like giving equal rights to women in Japan), instead of whoever has the guns and more violence getting their way. Even if a dynasty survived, the transition did not.
It is not the edge that is the matter; their boundary has moved well inside the USA, if you haven't woken up to it. It is the internal situation that is the problem.
Letting China get asymmetry (it can go out, but nothing goes in) with such a rubbish system that has struggled for thousands of years.
In fact, your smartphone probably already exceeds all ITARed chips in computing power, and it's a big irony that all of them go through China before reaching the US.
If China wanted to put some into their ICBMs, they would have been better off just wiring a smartphone to the rocket.
It would make designing advanced CPUs for foreign manufacture illegal, so further advancements couldn't be stolen, while ensuring on-shore chip-fab capability.
“export” controls doesn’t mean just goods, but also designs, specifications, even basic information
How would you ensure the advancements couldn't be stolen? I thought we learned over the years that if a hostile power wants a technology, sooner or later they will get it.
It’s about adding friction, not achieving perfection. Chips are so complex and fabs so specialized and elaborate that it would seem relatively easy to keep development secrets rather well. They already have to build them in high grade clean rooms.
Remember when they intended to put an export ban on the PlayStation2 to North Korea, because the chips were so powerful that they could be used for missile guidance?
As long as someone reasonable is holding the office of the US President, I think it's safe to assume that the good old "mutually assured destruction" theory still holds and China knows that they'd get flattened by nuclear bombs if they dared to attack Taiwan.
If any country nuked another without a direct attack that couldn't be repelled through non-nuclear means as provocation it would be economic suicide - no country on Earth would continue to trade with them. They would have to pay crippling reparations. Sanctions would be put in place within hours. That alone is enough to stop any country using nuclear weapons in anger. The deterrent aspect of "mutually assured destruction" isn't really necessary any more.
A few years ago, that assumption would have been sound. After seeing it slightly tested in the past four years though... I don't think such an assumption really holds as an absolute certainty anymore.
We have seen developed nations continue trading with (and in some cases even increase their trade with) countries that have received international condemnation that should have seen unilateral embargos.
To name the first three that come to mind:
- Russia-Ukraine (land grab) - EU and Germany in particular increasing trade with Russia, particularly in terms of gas procurement when Russia should have been sanctioned into the ground
- Russia-Britain (killing on foreign soil) - same as above
- North Korea violating nuclear and ICBM testing treaty - most countries ceasing to trade openly, but continuing "off the books"
The US: murdering people with drones without due process, often through sloppiness and mistakes. The land grab in Guantanamo Bay and some parts of Central America. Supplying nuclear weapons to the likes of Erdogan and Merkel. Supporting an autocratic regime in Saudi Arabia and a lot of other regions. Violating the Iranian nuclear deal unilaterally.
It's not US policy to use nukes to defend any country outside the declared US Nuclear Umbrella[1][2]; ie NATO Countries, South Korea, Japan and Australia.
The US (non-nuclear) defence policy on Taiwan is deliberately ambiguous, and the US has no defense treaty (beyond arms sales) with Taiwan.
To quote Wikipedia[3]:
"The Taiwan Relations Act does not guarantee the USA will intervene militarily if the PRC attacks or invades Taiwan... America's policy has been called "strategic ambiguity" and it is designed to dissuade Taiwan from a unilateral declaration of independence, and to dissuade the PRC from unilaterally unifying Taiwan with the PRC."
FYI: Chen Shuibian has proclaimed independence like 10 times during his tenure, yet nobody even blinked.
This act is bull\w{4}, like many other defence treaties. The final word always lies with the executive power, everywhere, and with whether they have the balls to go to war.
This is why the recent weapons deal finalized by the current POTUS was so important. A lot of people there are disappointed by the result of the US presidential election, since the probable soon-to-be president has a very ambiguous stance on the matter.
It would be nice if other powers such as France were to offer arms deals, particularly ships and aircraft (it did in the past).
Taiwan is a very nice place with a lot of talent, influences, languages and a working democracy. It would be a real shame if all of this were wiped out by China.
Individually, European countries (including, but not limited to, France) are too small to act against China. This is why the EU is so important: the EU can (still) stand up to China in ways France simply cannot. It would of course be even stronger if the UK were still part of it... But in any case, the EU cannot sell weapons.
I read quite a few French defense forums where netizens wonder how France could sell weapons to Taiwan without being singled out. One idea is to sell expertise, parts and maybe even blueprints rather than finished weapons: "Made in Taiwan" enough not to rub China the wrong way. As long as it looks local enough, China might agree to look the other way. This is of course a rather pessimistic line of thought, as the idea here is that China simply doesn't care about a handful of frigates, destroyers and fighters in the hands of the Taiwanese army. They simply don't want a slap in the face from a foreign country.
To put things in perspective, by 2030 the Chinese vessel fleet is expected to be at least 425 ships strong, versus a target of 355 for the US. [1] With production capabilities vastly superior to that of the Western world. In that regard selling 5 destroyers to Taiwan is probably not a big deal. But maybe it's not all that useful.
That's an interesting method I hadn't thought of, though I understand why officials would be wary of it: in a way it's similar to selling the source code instead of the finished product. The French military-industrial complex, while less powerful than the US one, probably doesn't want to share how the sausage is made.
All China would need to do is ban exports of Chinese goods to the US in retaliation and the US would be completely crippled and unable to function as a country within 3 months. There is literally no scenario where the US would ever nuke China.
If that was so important, then why doesn't Trump just give them nukes? Why doesn't Taiwan just declare independence? Let's get this world war 3 started already.
Taiwan is already a sovereign and independent state; it does not need to "declare independence" from anyone. Nukes are very powerful and strategic, so it's easy to understand why nobody wants to sell them to anybody else.
Because giving nukes to someone is dangerous (more dangerous than giving Stingers to the Taliban) and subject to approval that Trump won't get. Taiwan is afraid to declare independence; they even have a mainland Chinese governor to help the PRC keep up the pretense.
[1] https://www.wired.com/2012/07/google-server-manufacturing/
I do think the proximity of Taiwan to China is a huge help for fast turnaround of both tools/machines and workers. China itself is banned from having the latest chip technology. Taiwan is often used as a proxy for western countries to be near China.
I could also see Apple in-housing their chip fab sooner or later, just for the sake of continued vertical integration. It doesn't hurt that they have the deepest pockets too. The interesting question is whether or not they'd ever take orders from third parties.
I'm guessing the split here is 20/80: $145M is the machine cost, and then it takes $800M or more over its lifetime. That's a billion dollars right there. A billion dollars of revenue is unicorn level, but here we're talking about putting aside a billion in operational expense just for the chip. Then there is the cost of building the actual product on top of this.
Then there is the useful life of the machine. No product wants to be on the same chip for 5 years. So they have to recoup the 1 billion fast.
The only way would be to generate extra revenue out of the machine by sharing it with other companies. Ah but now you have to cater to their demands as well. Better have a profitable model around this. But well, that's what TSMC is...
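To get a feel for that amortization argument, here's a back-of-the-envelope sketch. The machine and operating figures follow the rough 20/80 split above; the five-year life and wafer throughput are assumptions invented for illustration:

```python
machine_cost = 145e6        # rough tool price from the 20/80 split above
operating_cost = 800e6      # rough lifetime operating expense
useful_life_years = 5       # products don't stay on the same chip longer
wafers_per_year = 50_000    # assumed throughput for a single tool

total = machine_cost + operating_cost
per_wafer = total / (useful_life_years * wafers_per_year)
print(f"${total / 1e6:.0f}M total, about ${per_wafer:,.0f} per wafer")
```

Under these assumptions every wafer has to carry several thousand dollars of tool cost before a single chip is sold, which is why only shared, fully booked fabs like TSMC make the math work.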
If Apple goes into manufacturing their own semiconductors, they could absolutely bootstrap the whole shebang and have tens of billions of dollars left over.
Many chips stick to older processes; only the latest performance chips, like smartphone SoCs, GPUs and TPUs, need the leading edge.
That's the Taiwan price. What's the domestic multiplier on that price? What is the time to market price to pay if you're trying to pull this off far from the competitive suppliers and experienced tool makers in Asia?
Andy Grove explained all of this 10 years ago[1]. Today we see Intel toying with farming out high end fab work; another milestone in the decline of US technology that Andy wrote about.
[1] https://www.bloomberg.com/news/articles/2010-07-01/andy-grov...
> But what kind of a society are we going to have if it consists of highly paid people doing high-value-added work—and masses of unemployed?
Even if they have money, I doubt they have the capability.
They may have billions of USD for pocket change, but they can't make a freaking video call app that works, or a web browser that eats less than 1GB of memory just to draw a few pictures and some text layout.
Just steadily keep doing what it has done for 30 years straight, without the compulsion to chase "the next big thing"? A rather unglamorous industry, and for most of their history they did not chase the bleeding edge, letting more moneyed players bleed each other.
I read some of those very extensive market research reports (or better to say, market research books) which sell to semiconductor companies for a few thousand bucks a pop. It's mind-boggling how much consideration, calculation, and planning goes into the decision to sign off on $10B+ of fab spending. Forecasts and technology analysis go at least a decade forward, and they dive further than just tech, into things like social trends, etc.
So there it is: the industry is very hard, very obscure, and works like an ant repellent on "next big thing" chasers from Silicon Valley culture.
TSMC is building a plant in Arizona.
The same could be said of Microsoft and operating systems in the 90s. And it's not just the technology, but the cost of being involved and the unit-cost economics.
They've been carefully dropping hints of this since July. Through the first week of November the headlines were "Intel to decide soon" whether to outsource, with a decision supposedly appearing sometime in January. This looks like the sort of precision expectation setting one would expect of an experienced zombie corp CEO like Swan.
So don't bet on Intel making any great comeback in fab tech; they've decided they can be 'relevant' without it.
There are essentially 3 leading fabs left: TSMC, Samsung, Intel.
Samsung has yield issues, Intel has yield issues, TSMC somehow has no issues. GlobalFoundries has already dropped out of the race. TSMC is selling every wafer they make, and continues to invest heavily to solidify its lead.
ALL of these fabs depend on ASML machines. IIRC last year they delivered just over half of the machines that were ordered.
But if you'll allow your question to be answered with pure rumortown and supposition:
Intel has ingrained management issues that don't allow reorganizing to systemically address their 10nm-and-under process issues, as the little fiefdoms that control different parts of the process spend more time throwing each other under the bus than collaborating.
Samsung isn't far off from TSMC (about a single node). They fell a little behind as TSMC grabbed a bunch of their contracts (most notably Apple), which left them with not as much capital to invest in their newer process nodes at a rather critical point. At a bare minimum they'll be kept alive by TSMC's customers as a viable second source (if you squint hard enough) and as a negotiating tactic (see Nvidia's 30 series on Samsung as a shot across TSMC's bow).
GloFlo canceled 7nm and under R&D. I wouldn't be surprised if they bounce back at some point when all of the EUV gotcha workarounds are more or less public knowledge and the equipment doesn't have prices on the order of a small country's GDP. If we hit fundamental limits at ~1nm and Moore's law really dies, then they should be able to jump back in with relatively less work than a newcomer and be the budget option once leading edge chips become a commodity.
https://www.reuters.com/article/asml-deliveries-idUSL8N1Y817...
That would not surprise me at all. I was hired on contract once to help with a system (not ICs) that was having issues, and my work was directly obstructed (a coworker said sabotaged) by the (foreign) guy who designed one subsystem. At the time I thought maybe it was his ego not liking that his boss brought me in to help and that I was succeeding. When it became clear from multiple incidents that he was subtly sabotaging me, I started to wonder this very thing. I showed evidence to our boss, who said "my hands are tied" because of the corporate structure at the time. My work did make it into production and I moved on for other reasons.
They can probably still hammer AMD on both their software and documentation; roughly a quarter of their sales are to data centres, after all.
It will be interesting to see what happens to X86 post-M1 (i.e. ARM coming of age), I suspect not much but I think the world has changed enough to have another stab at VLIW (for example) even if the cost/benefit is pretty marginal.
Static compile-time VLIW means one cannot really make the machine wider anymore, or narrower. Doing it dynamically at runtime means a cheaper, smaller core can just be narrower. Remove an instruction slot for one integer ALU? Fine, no issues; everything still works, albeit slower by that much. Make a beefier chip? Perfect: it gets faster by the amount of instruction-level parallelism that was available.
In addition, a compiler cannot really see across function boundaries (except where they're statically determinable). A JIT can. Modern chips have reorder windows of hundreds of instructions; the M1 apparently goes up to over 600. That's quite a lot of stuff it can dynamically reorder across. A compiler might not notice that, due to some weird dynamic call not visible at compile time, an FPU instruction could now be inserted here; an OoO processor can.
Due to that, explicit VLIW is basically dead outside of some highly specific applications, like DSP and whatnot.
In a similar fashion, ridiculously wide vector units were obsoleted by the approach pioneered by GPUs: just add a few things to allow masking based on branches and you get the SIMT approach. Write as if it were scalar code and it'll run on HW with vector lengths going from none at all up to whatever; Nvidia has 32-wide vector units, for example.
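The masking idea can be sketched in a few lines of NumPy: a hypothetical 8-lane unit evaluates both sides of a branch across all lanes, and a per-lane mask selects the result, which is roughly what SIMT hardware does under the hood:

```python
import numpy as np

x = np.arange(8)              # one value per lane of an 8-wide unit
mask = x % 2 == 0             # the branch condition, evaluated per lane

# Both branch bodies execute across the whole vector; the mask then
# picks the right result for each lane, so the source can look scalar.
then_side = x * 10            # lanes where the branch is taken
else_side = x + 100           # lanes where it is not
result = np.where(mask, then_side, else_side)
print(result.tolist())        # [0, 101, 20, 103, 40, 105, 60, 107]
```

Divergent branches cost you both bodies' worth of work, but the programming model stays scalar, which is the trade SIMT makes.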
I believe this is the real thing that makes superscalar OoO (i.e. JIT-to-VLIW in silicon) win.
I consider it to be roughly as important as being able to reorder instructions across non statically determined branches and function calls. And both of these expose the fundamental weakness in explicit VLIW, if it's essentially nondeterministic it cannot be taken advantage of.
Then we naturally have some other benefits, like hyperthreading. Which basically is just compiling two instruction streams together on the fly.
For it to work statically (although it really could be halfway in between), it would probably require a complete paradigm shift away from the current way we think about CPU caches.
Regardless of whether it'll work or not, I'll be very happy if the mill ever makes it onto a chip.
Got really confused at first.
From a single thread performance standpoint, the important part is that OoO scheduling is able to dynamically schedule around cache misses to effectively keep more memory accesses in flight, extract more memory level parallelism. In principle you could make an OoO VLIW CPU, but that would negate most of the power benefit while hamstringing the scheduler with unnecessary dependencies. Where in-order VLIW shines is when memory accesses are predictable, like DSP code. There you get an order of magnitude power efficiency gain.
GPUs are effectively still in-order CPUs with large SIMD instructions, some useful instructions to make masked execution simpler and a specialized language and compiler to hide this model from the developers. GPU manufacturers calling these separate data lanes threads is just misleading marketing BS. There is no independent instruction pointer for each lane.
I do not disagree that VLIW is a great for things like DSP where the power consumption is of the essence. One can get ridiculously high perf/watt by going explicit VLIW. I just don't see any way we'd see that approach in general purpose CPU's again.
GPU's do not need to reorder instructions to hide memory latency. It just happens on a different level. While a single threadgroup (as in that single instruction pointer that controls the SIMD unit) will not get reordered at all, one has multiple threadgroups in flight. So if one group stalls at a memory load the unit will just schedule a different threadgroup. Because one generally has tons of them in flight. It's all about throughput. One could think of this as an in order CPU (from viewpoint of a single thread) but with ridiculous amounts of hyperthreading (one thread stalls, we can pick an instruction from another thread but never one from the stalled thread).
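That latency-hiding-by-oversubscription idea is easy to simulate. In this toy model (all cycle counts are invented), each threadgroup is strictly in-order, alternating a little compute with a long memory stall, and the scheduler switches to any ready group for free:

```python
MEM_LATENCY = 8   # cycles a group stalls after issuing a load (assumed)
COMPUTE = 2       # cycles of work before each load (assumed)
LOADS = 4         # loads per threadgroup

def cycles_to_finish(num_groups):
    """In-order within each group; round-robin across ready groups."""
    ready_at = [0] * num_groups        # cycle when each group may issue again
    work_left = [LOADS] * num_groups
    cycle = 0
    while any(work_left):
        for g in range(num_groups):
            if work_left[g] and ready_at[g] <= cycle:
                work_left[g] -= 1
                cycle += COMPUTE               # do the compute, issue the load
                ready_at[g] = cycle + MEM_LATENCY
                break
        else:
            cycle += 1                         # all groups stalled: unit idles
    return cycle

print(cycles_to_finish(1))   # 32 cycles: stalls dominate a single group
print(cycles_to_finish(5))   # 40 cycles for 5x the work: latency hidden
```

With one group the unit idles through most of each memory stall; with five groups in flight there is always a ready group to issue from, so throughput approaches the compute-bound limit without any reordering inside a group.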
I have a beast i7 gaming rig too and my M1 beats it at everything but graphics performance.
I think Intel will continue to work their enterprise business until this "ARM wave" starts to pull up. Engineers at Intel probably have some ideas.
If a company other than apple makes a fast arm chip that'd be very interesting, though. Nvidia have probably thought about it.
This reminds me of when RIM got their hands on an iPhone for the first time.
One of many articles: https://appleinsider.com/articles/10/12/27/rim_thought_apple...
Also, Apple built in some special instructions that make macOS software faster, like core operations of Objective-C, so some of this might be very Apple-runtime-specific optimization.
They probably just didn’t realize that one day of battery life is enough for most use cases and is less important than a real browser and the other features brought by the iPhone.
1) The display on the iPhone was gigantic compared to any other device on the market. The fact that the iPhone had the battery life it did was amazing.
2) The software on the iPhone destroyed anything RIM had ever put forward.
3) The iPhone was a much better piece of hardware, aesthetically and performance-wise, than anything RIM had ever put out. Their closest competitor was the Pearl, and it was garbage compared to the iPhone.
4) Lastly, RIM had 10-year development cycles. Pivoting to attack the iPhone meant ripping apart 10 years of planned development, much of which was already executed. Pivoting that hard is just not something companies at the scale of RIM are designed to do.
RIM should've seen the writing on the wall, shed their hardware business, and gone all in on supporting BES on the iPhone. I digress...
Did they even have a way for third parties to make apps? Even though the iPhone originally didn't have that, the jailbreak community pretty quickly showed that was the way forward.
I can’t for the life of me figure out why Intel hasn’t already hedged their bets and started working hard on hybrid chips. Especially since this now appears to be a long term industry direction. If the next step blunts the consequences of your current problems, why not jump on it?
Sunk cost fallacy?
Would they be competitive? Or are apple so much further ahead on other things too?
We can barely keep one layer within operating temperature.
For true 3D chips that are more cube-like, people talk about putting liquid cooling channels into the structure, but that is a technology for quite a way off.
That said, I expect the early uses might be in mixed mode (linear and digital devices in the same package) and things that benefit from a huge cache or pre-programmed ROM.
There are also things like diamond substrates that improve the thermal transfer characteristics as well which helps you to draw more heat from the package than just silicon. This was a feature of "silicon on sapphire" processes and perhaps we will see some "silicon on diamond" parts.
Therefore, you could gain performance or reduce power by stacking layers of silicon.
This is also true for things like power LEDs; within a certain range of their operating curve (current vs. output) you can reduce current by X% and lose less than X% of output. Put down two LEDs, then, and you get more output at the same current.
[1] architectures that scale efficiently to more execution units, like GPUs
[2] you're in a suitable region of the frequency-power curve
Every problem has a solution; we just have to be creative enough to discover and build it.
Or maybe using more transistors allows them to run at lower power to complete the same work, which reduces the dissipated energy?
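This is the standard dynamic-power argument: switching power scales roughly as C·V²·f, so spreading the same work across more, slower units lets you drop voltage and win quadratically. A back-of-the-envelope sketch — every number here (the unit capacitance, the 0.7 V figure, the assumption that half the frequency permits that voltage) is made up purely for illustration:

```python
# Rough dynamic switching power model: P ≈ C * V^2 * f.
# Assumes, simplistically, that halving frequency lets us run at a
# correspondingly lower voltage within the chip's operating region.

def dynamic_power(cap, volts, freq_ghz):
    return cap * volts**2 * freq_ghz

# One core at full speed vs. two cores each at half speed.
one_core = dynamic_power(1.0, 1.0, 3.0)       # 3.0 (arbitrary units)
two_cores = 2 * dynamic_power(1.0, 0.7, 1.5)  # ~1.47: same nominal
                                              # throughput, ~half the power
print(one_core, two_cores)
```

The same shape of argument applies to the LED example upthread: within the right region of the operating curve, two devices at lower drive beat one device at full drive.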
They picked a name for it that is already a common term in chip packaging. Face, meet palm.
> A small outline integrated circuit (SOIC) is a surface-mounted integrated circuit (IC) package which occupies an area about 30–50% less than an equivalent dual in-line package (DIP), with a typical thickness being 70% less. They are generally available in the same pin-outs as their counterpart DIP ICs.
https://en.wikipedia.org/wiki/Small_outline_integrated_circu...
I suspect companies make these collisions intentionally.
https://3dfabric.tsmc.com/english/dedicatedFoundry/technolog...
Our societies are like inverted pyramids: balancing on surprisingly small foundations that can tip over more easily than we care to admit.
https://www.tomshardware.com/uk/news/tsmc-arizona-fab-invest...
Intel has fabs in the US (and Israel), but they have some catching up to do with TSMC - a lot of things have gone wrong for Intel the last few years. I hope they'll get their shit together and bounce back.
China has its own fabs (SMIC) and has been investing heavily into this, but as they are now banned from buying the EUV machines it will take a while, possibly up to a decade until they get to smaller process nodes. Long term China will be able to fab cutting edge chips itself, and possibly surpass others - looking at their tech trajectory and state funding. The US tech sanctions against China are just buying time.
From the rumors that I heard, their optoelectronics/optics tech is getting to the state of the art in a short period, given their pace of progress.
I'd wager 3-4 years before they have EUV themselves, with a supply chain that avoids the conflict. Not 10 years as you say.
They've got some of the best minds on the planet working on it, so I wouldn't think that they can't achieve it rather quickly.
Zeiss and ASML are going to have real competition soon.
FYI: Zeiss itself has been buying low-end stepper lens work from Chinese suppliers for at least a decade, or two.
A state-level AI dominance race is one of the main drivers of the US/China tech sanctions, I suspect.
By which time they will be shipping 3nm from Taiwan. https://www.anandtech.com/show/16024/tsmc-details-3nm-proces...
And given that the previously promised, obscenely sized financial incentive may not come now, this is not a given either.
What is stopping them from making their own?
The idea seems laughable now (actually it was laughable to me at the time, too).
Healthcare, rail, aircraft, automobiles, shipping, pharmaceuticals, and now semiconductors. We can't make any of these things at scale anymore.
They turned GE into a financial institution. They sold Bell Labs off in pieces. Boeing can't safely update 1990s-vintage airframes to accommodate modern engines. And yet, the market is on a tear. This is not sustainable.
> "You need a thousand rubber gaskets? That's the factory next door. You need a million screws? That factory is a block away. You need that screw made a little bit different? It will take three hours."
> Apple had originally estimated that it would take nine months to hire the 8,700 qualified industrial engineers needed to oversee production of the iPhone; in China, it took 15 days [1]
Tim Cook on why Apple makes iPhones in China:
> The number one reason why we like to be in China is the people. China has extraordinary skills. [2]
[1] https://theweek.com/articles/478705/why-apple-builds-iphones...
[2] https://www.inc.com/glenn-leibowitz/apple-ceo-tim-cook-this-...
I don't buy time being the problem. Once manufacturing is in the US, then what? It probably costs 10x+ what it would cost in China or India. That's the bigger problem.
It's not clear why the most valuable company in the world should abandon its logistical strategy and industry-envied high margins to risk its focus and war chest on a gamble that would almost undoubtedly make it less competitive, with lower margins, increased prices, and fewer units sold.
As for diversity, iPhone parts are sourced from multiple countries, and whilst most are assembled by Foxconn in China, Foxconn is a Taiwanese multinational manufacturer with factories in India, Thailand, Malaysia, the Czech Republic, South Korea, Singapore, and the Philippines.
Apple already manufactures its larger, more expensive Mac Pro and iMac products in the US, but I don't see it manufacturing any iOS devices unless it's mostly automated by robots.
America can't build anymore because America's executives have chosen to steal every piece of wealth they can take without leaving anything behind to build for the future.
American business culture has become far more interested in zero-sum rentierism and financialization that rapidly concentrates existing wealth in the hands of the wealthy than in technological innovation that creates new wealth over the longer term.
The collapse of American industry is the entirely predictable consequence.
See also
https://www.forbes.com/sites/stevedenning/2011/11/18/clayton...
for a similar but slightly different take.
I did some research a few years back into (somewhat unrelated) process management practices. What kinda stood out to me is that in the 50s-60s many businesses transformed their leadership. They went from having engineers that grow into their leadership positions to having dedicated managers. With business degrees.
Just speculation, but I feel this shift in management culture coincides with the loss of a lot of the technical production capabilities of the west. And is closely followed by the money-grab culture.
This also happens in software companies, I feel.
The underlying issue, I suspect, is that engineers are not "people persons" - less able to manoeuvre politically and "play the game". But in any societal organization, those who can play the political game can win.
Thus, the dedicated managers end up in those positions. They play the political game, and they get rewarded for it - because they control the reward scheme when they get to those high positions.
Meritocracy is an illusion that gets used by those playing a political game to make engineers feel they are not part of the game.
https://tradingeconomics.com/united-states/interest-rate
If investment (real investment, based on profit) is not worth it, and only the central bank wants a piece of the overpriced action, then we have essentially a centralized economy. This breeds stagnation.
Contrast to China, who had healthier yields, and only lowered them for COVID: https://tradingeconomics.com/china/interest-rate
10% interest rate => "why invest in this new factory for a 10% return when I can just keep money in the bank?"
0% interest rate => "well if I want to make money, I need to invest"
Feel free to blame other things - maybe a 0% inflation rate (higher inflation => more opportunity cost of not investing) or QE (which is similar to 0% interest rates, but still different: a 0% interest-rate environment has persisted since the early 2000s, whereas QE only started after 2009), which is much more problematic as it floods the market (but not the economy) with money and pushes equity prices through the roof (despite shitty fundamentals).
0% interest rate -> I'll take my money and leave for greener pastures (other countries such as emerging markets).
Western central banks can't raise interest rates off the lower bound without triggering unacceptable unemployment or even deflation. Indeed, the highest safe rate has fallen in the wake of every recession.
To me, this suggests major structural problems in the western economies. It's hard to think of a single explanation that applies to all (pre-pandemic) zero-interest rate western economies, however. The economic foundations in Australia, Canada, the UK, the US, and the EU are different enough that there is no obvious single structural fault common to all of them.
There is one, though: changing demographics - the median age of the population is going up, and the percentage of the population that is working is going down.
The Western world has been relatively stagnant in this regard for two generations already.
> "They turned GE into a financial institution. They sold Bell Labs off in pieces. Boeing can't safely update 1990s-vintage airframes to accommodate modern engines. And yet, the market is on a tear. This is not sustainable."
You can see very well what's going wrong here. The US economy and government institutions seem to be overrun by a class of self-proclaimed "value adders": heavy-hitter "pro-managers," financial "engineers," and, of course, everybody's favourite: lawyers.
It's very natural to conclude that an engineering company like GE shouldn't have been given to bankers, to be turned into a... bank; and Boeing shouldn't have been entrusted to outsourcing managers, to be turned into an outsourcing management company; and dozens of electronics companies shouldn't have been given to lawyers, to be turned into patent litigation services companies; and so on, and so on.
Yet the US, one of the few countries affording such a high level of employee control of companies, and quite militant unions, ends up with workplaces whisked away from under the noses of their employees.
I see a simple explanation: Americans decided, completely prematurely, to bury the axe of class warfare, and traded progress for peace.
No conflict, no progress.
I am not advocating for violent revolution right away, certainly not that. You do not kill people over the ownership of green paper; that's morally wrong. But you do not let such people simply live a comfy life without opposition, either.
Take a look at other countries: even though they may well lag behind the US on worker rights, don't have a culture of union militancy, and have overall worse compensation even for high-skill work, you don't see factories turning into banks, or if they do, they quickly see workers voting with their feet.
From my experience, I'd say even in China you see factory workers changing workplaces when they feel "malaise in the air" at a company, and they don't wait for the company's malaise to turn into (their own) financial trouble.
I'm rather advocating for fighting the massive loss of common sense, where every nook and cranny in society/companies/government gets stuffed with those of the inappropriate class, and for being firm and forceful about that, when and if needed.
No, of course not, that would be terrible PR.
Speaking of Musk, Tesla is now worth more than most automakers, is on a tear, and most likely will be outproducing everyone at making batteries.
I don't think this is purely an 'access to labor' problem. It's a problem of vision and risk tolerance. Musk is willing to try new approaches, even if they fail (e.g. trying to make a 3D, almost 100% automated Tesla factory before having to retreat to using humans).
Silicon fabs were one of the first industries to be almost 100% automated. So clearly the issue isn't access to labor; for Intel, it's more that they made a bad bet and didn't "fail fast". They've been doubling down on bad bets and aren't willing to be more dynamic.
When you look at aerospace (SpaceX, Sierra Nevada, Rocket Lab, Relativity Space), it's clear that small, focused teams can pull off amazing things, even in high-capex, high-risk, high-regulation industries.
The failure of GE and its owners is due to bean counters being put in charge instead of missionaries. Take GE's nuclear division: why are they still putting money into BWRs and PWRs? Decades went by, and they are not dropping any money on pebble beds, molten salt, thorium, etc. And why wait for MIT's SPARC to limp along? If they had an Elon Musk figure, he would have put them on a race to build a prototype, even if it failed, in a year, not 5 years.
Monopolies, and access to cost-plus government contracts, have, I think, killed a lot of innovation.
And if the Big 3 automakers want to compete with Tesla, they need to replace their management with hardcore EV geeks who have passion and LOVE the space, and give them the resources to spin up a new division with all-new people and processes. Otherwise, they're going to shamble along and continue to try to milk their existing business lines until they die.
This is a management problem, not a labor problem. You can't solve it by shoveling more STEM grads straight outta college onto it. There's a tendency to think China's massive STEM-graduation firehose will magically mean leadership, but that's million-man-month thinking. Access to labor is not the problem. Companies with 100 employees outcompete companies with tens of thousands all the time (take WhatsApp vs. my employer, Google, in the messaging space).
Most big companies just want to collect rent rather than make investments.
A hired CEO has incentive to make the company continue to be profitable during his/her tenure. This means conservative thinking and business continuity. Not taking big, risky bets that pay off multiple 100x in 10 years.
A new company, owned by the CEO level people, is not going to fall into this trap.
For healthcare we make plenty of it and lots of people from elsewhere in the world come to the US for operations. Our problems there are all with healthcare billing, not healthcare production.
US freight rail is actually pretty good. Why our passenger rail is terrible comes down to high construction costs and this: https://bikeeastbay.org/rail/fra.html. The costs are a combination of the rest of the world inventing techniques that the US considers Not Invented Here and a penchant for regulation by lawsuit rather than regulation by bureaucracy.
The US and EU are the two places you can get really good aircraft; it's a major manufacturing export sector for the US. China, for example, still can't make modern jet engines, and while the fuselage and electronics of their newest combat jets are fine, their speed, acceleration, and fuel efficiency are well behind US jets for that reason.
The US is a major pharmaceutical exporter.
The US is also a major semiconductor exporter; we're one of the three countries in the world, along with South Korea (Samsung) and Taiwan (TSMC), still in the race, while something like a dozen companies have dropped out as capital costs keep going up.
Shipbuilding, yeah: US shipbuilding can't compete on the global market because US labor is relatively expensive.
Basically high wages mean that the US can only compete in manufacturing in high value industries like the ones you mentioned. Things like aircraft, pharmaceuticals, and semiconductors. But we're not going to be a textile exporter until the rest of the world gets to be as wealthy as we are now.
Haha. Samsung is a whopping 17% of Korea's GDP, but they haven't renamed the country yet. :)
Everyone also seems to forget that every time AMD has caught up to Intel in the past either AMD has itself stumbled and/or Intel again jumped forward.
Intel's 7nm process (the equivalent of TSMC's 5nm, which just began production work) has recently been delayed into 2022 [1].
[1] https://www.allaboutcircuits.com/news/intels-7nm-process-six...
At least that’s the word on the street.
That's had a bunch of issues and is only now producing their premium mobile CPUs.
The 7nm issues aren't great, but if it weren't for the 10nm disaster they wouldn't be much more than normal slippage at this point. If it keeps slipping, then they have problems.
https://steveblank.com/2020/06/18/the-coming-chip-wars-of-th...
If anything, the semiconductor industry is a result of this. Taiwan/ROC needed to develop to survive, and it made its industry vital to the world, which makes it important instead of simply a puny island off the coast of mainland China.
From the Chinese perspective, Chinese people can create wonders if politics are kept aside.
https://en.wikipedia.org/wiki/Taiwan_Miracle
It is not the edge that matters; their boundary has moved well inside the USA, if you haven't woken up. It is the internal situation that is the problem.
Letting China get asymmetry (things can go out, but nothing goes in) with such a rubbish system that has struggled for thousands of years.
Good luck to humanity.
China: Hires TSMC engineers[1]
[1] https://asia.nikkei.com/Business/China-tech/China-hires-over...
If China wanted to put some into their ICBMs, they would've been better off wiring a smartphone to the rocket.
Why am I reminded of the plot of Iron Sky 1?
"Export" controls don't mean just goods, but also designs, specifications, even basic information.
How would you ensure the advancements couldn't be stolen? I thought we learned over the years that if a hostile power wants a technology, sooner or later they will get it.
http://news.bbc.co.uk/2/hi/asia-pacific/716237.stm
https://www.pcmag.com/news/20-years-later-how-concerns-about...
https://en.wikipedia.org/wiki/Adjusted_Peak_Performance
https://www.intel.com/content/www/us/en/support/articles/000...
We have seen developed nations continue trading with (and in some cases even increase their trade with) countries that have received international condemnation that should have resulted in unilateral embargoes.
To name the first three that come to mind:
- Russia-Ukraine (land grab): the EU, and Germany in particular, increasing trade with Russia, particularly in gas procurement, when Russia should have been sanctioned into the ground
- Russia-Britain (killing on foreign soil) - same as above
- North Korea violating nuclear and ICBM testing treaty - most countries ceasing to trade openly, but continuing "off the books"
The US (non-nuclear) defense policy on Taiwan is deliberately ambiguous, and the US has no defense treaty (beyond arms sales) with Taiwan.
To quote Wikipedia[3]:
"The Taiwan Relations Act does not guarantee the USA will intervene militarily if the PRC attacks or invades Taiwan... America's policy has been called "strategic ambiguity" and it is designed to dissuade Taiwan from a unilateral declaration of independence, and to dissuade the PRC from unilaterally unifying Taiwan with the PRC."
[1] https://en.wikipedia.org/wiki/Nuclear_umbrella
[2] https://www.defense.gov/Explore/News/Article/Article/1822953...
[3] https://en.wikipedia.org/wiki/Taiwan_Relations_Act
This act is bull\w{4}, like many other defense treaties. The final word always lies with the executive power, everywhere, and with whether they have the balls to go to war.
It would be nice if other powers such as France were to offer arms deals, particularly ships and aircraft (as it did in the past).
Taiwan is a very nice place with a lot of talent, influences, languages, and a working democracy. It would be a real shame if all of this were wiped out by China.
I read quite a few French defense forums where netizens wonder how France could sell weapons to Taiwan without being singled out. One idea is to sell expertise, parts, and maybe even blueprints rather than finished weapons: "Made in Taiwan" enough not to rub China the wrong way. As long as it looks local enough, China might agree to look the other way. This is of course a rather pessimistic line of thought, as the idea here is that China simply doesn't care about a handful of frigates, destroyers, and fighters in the hands of the Taiwanese army; they simply don't want a slap in the face from a foreign country.
To put things in perspective, by 2030 the Chinese vessel fleet is expected to be at least 425 ships strong, versus a target of 355 for the US [1], with production capabilities vastly superior to those of the Western world. In that regard, selling 5 destroyers to Taiwan is probably not a big deal. But maybe it's not all that useful, either.
[1] https://www.meta-defense.fr/en/2020/09/17/the-355-ships-targ...
There is something called MAD (mutually assured destruction) that keeps the United States scared of launching nukes against China.
Carrier strike groups, on the other hand, are used and will be used.
And how would China fare without all those US dollars?