The author may have let his preconceptions get in the way of reasoning a bit too much --- x86, or at least x86 compiler output, is easy to recognise in hex/ASCII mostly because you'll see things like function prologue/epilogue sequences (55 8B EC, 8B E5 5D C3) and NOP (90) or INT3 (CC) padding everywhere. ARM, MIPS, and Z80 (the other 3 I can recognise by sight) all have their distinct "textures" too.
this is awesome…
I'll be the first to comment on the apparently misplaced bounds check(!) in the fragment of code above it; it reads a parameter from the stack, compares it with 0x8A, then uses it as an index into some array of 8-byte elements and reads the two values from memory before deciding whether the index is valid --- and seems to put -1 into eax if it's not.
Not really a problem if this is running in realmode (or "unreal mode") with no memory protection (it will just read 8 bytes from somewhere in the address space, and probably ignore them), but it could crash if it was in protmode (which the lgdt in the preceding fragment suggests) set up with restrictive segment limits, and the memory address was not valid.
Then again, the check could be completely superfluous if that function would never be called with an out-of-bounds value...
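For the curious, the control flow being described looks roughly like this hypothetical Python re-creation (the table contents and return values are just stand-ins; only the read-before-check ordering and the 0x8A limit come from the disassembly):

```python
LIMIT = 0x8A  # the bound the code compares against

# Pretend flat memory: LIMIT valid 8-byte entries, but memory past the
# end is still readable, as it would be in realmode.
MEMORY = [(i, i * 2) for i in range(LIMIT + 16)]

def lookup(index):
    lo, hi = MEMORY[index]  # reads the two values from memory first...
    if index >= LIMIT:      # ...and only then checks the bound
        return -1           # eax = -1 on failure
    return lo + hi          # stand-in for whatever the function returns
```

With restrictive segment limits, the out-of-bounds read on the first line is exactly where a fault would occur, before the check ever runs.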
Can someone explain why it's a big deal they are using an x86 chip? It seems ARM is the standard in the mobile world, but I'm not sure what the motivations might have been for the change or if this has drawbacks that make it so surprising.
There isn't a big deal. Hell, AArch32 has something like 1400 instructions; it's just as complicated as an embedded x86. And as someone who's ported a kernel to both, it has just as many weird parts of the architecture built up over decades (ARM is about as old as 32-bit x86).
The motivation for the change is that Intel's making the baseband instead of Qualcomm now, so it's not an ARM/Hexagon like it once was.
The author is just wedded to a decade-plus out-of-date view of chip architecture.
Yeah, this is at most just kind of geeky interesting. But otherwise it has no particular meaning whatsoever, particularly in an iPhone. Apple is one of the few/only (I think Pixel too maybe? tptacek or other security folks would be up to date) phone makers to totally isolate the baseband, essentially treating it like a USB peripheral. Whether it uses USB or SDIO or PCIe, it has no DMA to the application processor (Apple uses an IOMMU, all discussed in their iOS Security Guide), so while a hack of it I guess could still be irritating purely in terms of messing with cellular access, simple location privacy, or the wider network perhaps, it's not going to inherently leverage into access to the system. Baseband has its own secure bootchain of course as well.
Additionally given that this is a very low power low level embedded system device for a specific function it seems most likely that it's also, like Intel's old Atom chips, a refined derivative of older simpler x86 chips with in-order execution, no speculative execution, no µop transforms or other stuff like that. Would be kind of interesting to learn some of the details, but at any rate that means whole classes of security issues people are already bringing up in this thread (like Spectre/Meltdown) are entirely irrelevant. If some arch simply lacks speculative hardware period then that's that. Simplicity in general can have major performance impacts sure, but it's also lower energy and more secure.
You're not necessarily safe, but obviously no matter how isolated you make the cell baseband, you can't get past "crappy glue code can still bone you". The reality is that the HSIC interface between the AP and the baseband is --- conceptually, at least --- about the best you can possibly do.
The basic USB protocol does not allow for direct peripheral mastering of DMA (unlike IEEE 1394 or PCIe for example). So, barring a protocol exploit in the USB host controller, which is certainly a theoretical possibility given the enormous size of the specification, fencing off the USB host controller using the IOMMU is just an additional layer of protection, rather than a necessary boundary like it is for a PCIe or 1394 device which has access to memory natively.
A USB-C/USB3 physical port makes things a bit more complicated, as it also could be attached to a controller supporting Thunderbolt, which is PCIe.
The unsafe part of plugging in a USB device is probably at the OS level rather than the protocol level - in my opinion you're much more likely to be owned by something like connecting a device with a buggy or compromised driver, mounting a filesystem containing an FS exploit, or reading a file containing an application exploit than by a USB-level protocol exploit.
Also perhaps it should be noted that if it is a USB-C connector it may be a Thunderbolt 3 port in addition to being a USB port, which may support PCI-E and allow direct access to memory by a peripheral even if there is no driver installed for that peripheral.
SATA doesn't allow for DMA from the target device. SATA controllers can DMA, but that's no different from how NICs and USB controllers typically DMA. The target addresses are ultimately controlled by the device on the host side. Like USB, it feels more like a traditional network protocol than a traditional bus protocol.
PCI-E does DMA, and AFAIU was inspired by Infiniband (another RDMA protocol).
That's old news mostly. Intel has been making iPhone basebands for several generations; the new news is that it's exclusively Intel. Given Qualcomm and Apple's very public fights, not surprising, but I'm curious how Intel overcame Qualcomm's patents on CDMA. Did they expire? Did Intel somehow get a license? I'm definitely curious and will post back with what I find, assuming someone else doesn't first.
Guess Intel got the CDMA assets via its purchase of VIA Telecom assets in 2015. Wonder why it’s taken this long to come to market?
So, I just want to throw out there that your arstechnica uarch articles are one of the things that pushed me into computer engineering rather than straight CS. And Inside The Machine I consider to be up there with Patterson & Hennessy. Thanks for all of that! : )
Is it? It's 3 pages of overly emphatic text revealing that an Intel chip is based around an x86 CPU. By the author's own admission, the conclusion is: "Nothing really, I just found this funny and wanted to share".
Also I'm a bit baffled that they wrote a tool to measure the entropy of the machine code, tried hand-disassembling and considered that it might have been an encrypted binary format before guessing that an Intel chip could be running an Intel CPU. But they did try "EVERY POSSIBLE RISC ARCHITECTURE [they] KNOW" because apparently nobody ever used CISC on embedded devices. Nobody tell him about the GameBoy.
Of course I'm a bit harsh, it's easy to mock in hindsight but it's still not very interesting technically.
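To be fair, the entropy measurement the author describes is only a few lines. A sketch, assuming plain Shannon entropy over byte frequencies:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~8 for encrypted/compressed data, lower for code."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Compiled machine code typically lands well below 8 bits/byte, while well-encrypted or compressed data sits very close to 8, which is why this is a useful first test before bothering to disassemble anything.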
obviously you're not into baseband reversing, otherwise you would have known that for the past 10+ years, basebands were almost always RISC cpu and almost always ARM...
moreover, all previous iterations of Intel basebands were custom ARM cores based around Infineon IP acquired by Intel to be competitive in the baseband market...you did not even read my document, because I said this about the old baseband version
moreover, by the nature of baseband itself, it requires a CPU capable of real-time or near real-time processing, as a matter of fact other vendors are using Cortex-R CPU, which is an ARM cpu made for real-time os, giving you predictable timings, especially interrupt processing and memory access
for example, Cortex-R gives you a special kind of memory, called TCM (Tightly-Coupled Memory) memory, which gives you predictable memory access timings, something that you cannot obtain with a simple cache
by the way, Cortex-R is also used in WiFi chipsets, because the type of processing required is very similar (check the excellent writeup done by Google's Project Zero about this)
so yes, it is interesting to see how Intel managed to implement these kinds of features in an x86 CPU, which was never designed for such requirements
I suggest you take a look at the References in my document, they might provide some useful information on the matter
of course if you're not interested in baseband reversing, then I guess you're right, it's not technically interesting material
The article mentions that the old Intel baseband processor was running ARM. It seems like Intel is slowly trying to migrate all their processors to x86, given that they migrated their Management Engine from ARC to x86 a few years ago and now this.
oh not really, as I already said, Intel basebands were ARM cpu...iPhone was using the Hexagon platform only for CDMA versions, you can check it by downloading a random ipsw from previous years iPhone models
they were Hexagon only just for few models (iPhone 5 and 5s I think), before that, they were using the Infineon baseband, which guess what...it's what Intel bought :)
btw, for the last 10+ years basebands were mostly ARMs, with very few exceptions (the already mentioned Hexagon), check also Mediatek and Huawei basebands
and yes, I don't like having an x86 as an embedded CPU, but that's my problem, I guess...
Baseband usually refers to the firmware running the CPU/chip dedicated to operating the radio(s), generally the WAN radio (LTE, 3G, etc.) for a cellular connection, although some are integrated with other types (WiFi, Bluetooth, NFC, etc.).
I was under the impression that x86 is not energy-efficient enough to be used on a phone. But I guess that applies to the modern variant of x86 with a quite bloated instruction set. Who knows if this version of x86 has a more restricted instruction set.
Cortex family chips take compressed Thumb instructions and translate them into ARM. PentiumPro and up does roughly the same with x86 -> uOps. There is no telling what the internal instruction representation is on these parts and how CISCy it actually is. Though Intel have been using updated low power 486 cores as microcontrollers for the past few years it could just as readily be Atom.
The Quarks (rip) had a reduced and improved x86-compatible instruction set and they were quite low powered... they really failed to catch on because Intel didn't think to put them in a friendly package to be integrated into anything - they thought teeny tiny pitch BGA was fine for the Maker community that still loves their throughhole components. (And they were kinda buggy chips, but they had mostly been shaken down by the next silicon spins.) I had kinda hoped the Quark would stick around, just so Intel could start sundowning some of those terrible old instructions and execution modes and start properly decrufting x86 - it's 2018, we don't need Real Mode anymore, we can emulate it a thousand times on a PC and still have processor power left to play video games.
But, that being said, these things probably have a "Mobile Core" processor (i.e. an Atom), which are quite low powered still, and they probably don't run them at all that high of a clock rate either, saving more power.
People like to get in hot fights about this all the time, and I could be taking the bait, but there is an overhead to instruction decoding and fetching that changes with the complexity of the encoding, and there is also a cost of implementing all the instructions.
Simpler instruction set means fewer gates means less power draw, with some hand waving. The ARM 1 had 25,000 transistors and the ARM 2 had 30,000. The contemporaneous Intel 386 had 280,000 and the 486 had 1,200,000. Intel had the engineering resources to design powerful chips that people bought in droves; ARM had a very small number of engineers and had to design something smaller, but as a consequence they ended up with very low power consumption, which wasn't important until people put it in phones. Since then Intel has optimized for power consumption, but they were catching up for a long time.
In most modern CPUs, other factors dominate and instruction set is less important. Nowadays we have plenty of gates to spare and the question is how much can you accomplish at a given price and power envelope. And it’s irrelevant to talk about power consumption of a baseband processor unless you’re also talking about the power consumption of the radio, since they get used together.
"The result demonstrates that the decoders consume between 3% and 10% of the total processor package power in our benchmarks. The power consumed by the decoders is small compared with other components such as the L2 cache, which consumed 22% of package power in benchmark #1. We conclude that switching to a different instruction set would save only a small amount of power since the instruction decoder cannot be eliminated completely in modern processors."
Yep, that's pretty much it. Back in 1985, when the 386 and ARM 1 came out, neither had any cache at all.
But I think the paper is a bit narrow in scope, relative to the discussion here. For any given application, you want to find a cheap part that can run that application. If you can run the application on an 8051 with a few K of ROM, then you can save a lot of money and reduce power consumption by switching to the 8051. If you need a powerful DSP to do some SDR for your cell phone radio, you're going to pick a different instruction set and pick a part that draws a lot more power.
I think the paper is taking as fixed the part functionality and considering how the encoding can be changed, but for a core running code that is not user accessible, the engineers are free to choose a core with functionality that suits their particular needs (which you can't do with the main CPU).
What's surprising to some people is that Intel has successfully scaled down their x86 cores so you can use them as embedded cores in larger ASICs, essentially. You can reuse a successful core design like the Pentium 4 and adapt it to a modern 14nm or 10nm process and you end up with something cheap and easy to use. 10 years ago that wouldn't have worked. Even recently it was much more common to use dedicated DSPs everywhere, but these days I feel like people are ditching the DSPs for cheap and ubiquitous general purpose CPUs and microcontrollers.
Yes, and they also have a bunch of old x86 microarchitectures and core designs lying around which they can adapt to new process nodes and drop into random ASICs, plus the accompanying expertise. Intel had StrongARM and then XScale until they sold it off back in 2006.
It depends on the regime you're operating in. In a processor that's superscalar but in order the higher cost of decoding two x86 instructions versus decoding two ARM instructions is actually significant. But if you're reading instructions in at one or two bytes per clock tick x86's more complicated instructions make a lot of sense since they save on instruction stream size. And if you're dealing with a modern wide issue out of order machine the decoding costs are just lost in the noise.
EDIT: Oh, and load-op-store instructions make the level below superscalar a bit harder to design at least. And x86's strong versus ARM's weak memory ordering guarantee have effects even in OoO-land though which one is better is a very complicated issue I'm not going to try venturing an opinion on.
There's give and take here. Instruction set density is correlated with variable length instructions (which makes sense from an information theory sorta huffman encoding perspective). Even most modern RISC instruction sets designed for code density are variable length (looking at Thumb2 and RISC-V C here). So your choices are either you pay the power cost on the increased I$ size, you pay it on the decoder, or you take a hit on perf.
A more complex instruction set needs more silicon to decode, and thus is less power efficient. However, I'd imagine that Intel are pretty good at decoding x86 at this point; they've had plenty of practice, so it might balance out.
This is just baseband. It doesn't need an Atom, hell no. It could be like a 486 core manufactured on a ten year old process and it'd still be ridiculously overpowered for the task. More likely it's a P5 core, because Intel have repeatedly used that core to produce things -- Bonnell and Larrabee. Anyways it probably runs at like 100 MHz or such and consumes a few tenths of a watt. Maybe an ARM consumes less, but compared to the main CPU it's negligible anyways.
I believe that all of Intel's mobile networking products were purchased from infineon. So it wouldn't be surprising if they used ARM until now given that it takes time for these things to change. They even manufactured these chips at TSMC for a while.
On a 200M unit shipment and an average of $0.2 per unit, that is a $40M saving per year for Intel. Of course I assume any R&D cost of this tiny x86 (assuming it is new) could also be spread across its use in many other places.
I presume they mean so; they bought StrongARM, later replaced it with Intel designed IP (marketed as XScale), and sold on that team. I don't actually know that the StrongARM team and the XScale team are one and the same.
They were - I was on that team. StrongARM came to Intel when it acquired DEC. The CPU was developed thanks to an 'architecture license' from ARM. Intel renamed the processor XScale and later sold it to focus on embedded x86. https://en.wikipedia.org/wiki/StrongARM
To be fair that was a year or two before the iPhone and ARM processors took off like a rocket. At the time it was used in PDAs and smartphones, both of which sold in pathetic numbers relative to what was about to happen.
It's IMHO not a big deal, but still interesting. Intel has seemingly invested less and less into their small embedded stuff in the past few years, is known to use ARM in various parts, so it's interesting to see that they've made this switch here and are using their embedded developments in their integrated products.
Definitely interesting.. I mean a 486dx class processor with modern mfg, bigger cache and higher clocks than in the late 80's would totally sip power and be very capable... even the p2/3 designs could work well in a lot of embedded scenarios.
Because x86 is symbolic of the peecee world, which Apple supposedly historically opposed. Beyond that, Apple fans seem to get a raging hardon from Cupertino gaining more and more control of every aspect of the supply chain, from silicon to bits to services, and Apple still depending on third parties for their baseband -- let alone Intel, of the dread Wintel alliance -- just seems so... tacky.
Besides, it's CISC, and everybody knows RISC architecture is going to change everything.
unlikely. intel's baseband originates from infineon which intel bought a couple of years back. Hence it probably was a pure ARM baseband until intel purchased them and insisted on using x86 cores?! very weird move, particularly as i'd assume it also still has ARM cores in the baseband, making it a mix of x86/ARM cores.
There isn't anything to really indicate that this processor has many more instructions than an 8086, befitting its role as an embedded device. I'm not aware of any difference in x86 and ARM semantics in embedded that might cause a problem.
Now, the fact that an x86 instruction stream isn't self-synchronizing can represent a danger in theory. That is, you can craft a sequence of x86 instruction such that if you start executing form 0x0000 they're safe and friendly but if you start executing from 0x0001 you'll see a different, equally valid, stream of instructions that might do something malicious. But that doesn't seem like a credible attack vector in this case given the embedded nature of the code.
Given how the i386 instruction encoding works, that is not significant: in 16b mode, mov ax, bx has the same encoding as mov eax, ebx in 32b mode (in reality it is somewhat more complex, but this is the gist of it), and thus how it gets disassembled depends on the configuration of the disassembler. One thing that would certainly point to the code being for 32b mode would be stores into control registers (eg. mov cr0, eax instead of smsw ax, which have different encodings, and slightly different effects, with the second one being somewhat nonsensical in 32b mode).
On the other hand, the 32b mode code in the disassembly snippet containing lgdt looks reasonable; if it were 16b code disassembled as 32b it would lead to nonsense, since here it combines 16b and 32b instructions in a meaningful way.
I'll throw out there that unaligned, unintended instructions can give an attacker more options from a ROP perspective. But it's only a small benefit for the attacker. And for something like this 30MB binary it's six of one, half a dozen of the other, practically speaking.
Memory-aligned means that instructions can only exist on certain addresses. (It also implies a minimum instruction length.) On most RISC architectures, instructions are also a constant size, with the same alignment. For example, a MIPS instruction stream consists of 4-byte instructions aligned to every fourth byte in memory. Memory alignment and constant instruction size ensure that any given instruction stream can only be executed as itself, or part of itself. Note that this is not self-synchronization - if you removed a byte from an instruction stream, you would get a wildly different instruction stream.
(I said MIPS because ARM instruction streams can be THUMB, which permits a different instruction interpretation for the same stream, and thus defeats the security advantages of memory alignment requirements.)
Self-synchronizing means that each subdivided unit (usually byte) of a stream also indicates if it is the start of a new symbol within that stream. For example, UTF-8 is an example of a self-synchronizing byte stream. In UTF-8, encoded codepoints have variable-length representations. However, the upper bits of each byte clearly indicate not only if the byte is the start of a new codepoint, but how many bytes follow to complete the codepoint. This means that a deleted or altered byte will only delete or alter the symbol it's a part of, and not any other part of the stream.
Self-synchronization and alignment requirements address the same problem - ambiguous instruction streams - by different methods. Memory alignment prohibits you from reinterpreting the stream; while self-synchronization makes doing so less valuable.
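The UTF-8 property is easy to see in code. A quick Python sketch that classifies any byte by its upper bits alone, with no surrounding context needed:

```python
def utf8_role(byte: int) -> str:
    """Classify a byte by UTF-8's self-synchronizing upper-bit patterns."""
    if byte >> 7 == 0b0:
        return "start (1-byte sequence)"   # plain ASCII
    if byte >> 6 == 0b10:
        return "continuation"              # never the start of a codepoint
    if byte >> 5 == 0b110:
        return "start (2-byte sequence)"
    if byte >> 4 == 0b1110:
        return "start (3-byte sequence)"
    if byte >> 3 == 0b11110:
        return "start (4-byte sequence)"
    return "invalid"
```

Jump into the middle of a UTF-8 stream and the very first byte tells you whether you're at a codepoint boundary; jump into the middle of an x86 stream and nothing in the bytes themselves tells you that.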
Ohh, thanks! Now I recall I'd heard the term regarding UTF-8 at some point but completely forgotten about it.
It reminds me very much of prefix-freeness. Is it fair to say that that's what we really need (without alignment)? It seems self-synchronizing is a bit stronger (no pun intended) but not necessarily necessary to ensure you can't jump into the middle of an instruction?
But do any self-synchronizing instruction sets exist? (And are in commercial silicon?) It seems like it would be very annoying for density and constant-encoding reasons with 8 bit code units, and none of the variable 16/32 bit encodings I can name do it either.
The PIC18 instruction set has this property. Most instructions take up a single 16-bit instruction word, but the few ones that take up 2 words all have the 0b1111 in the high bits of the second word, which would execute as a NOP if branched into.
This encoding is presumably a consequence of the instruction set containing conditional skip instructions. That is, the skip instructions don't have to parse the instruction stream for 2-word instructions but can always just skip a single instruction word and let the second word execute as a NOP.
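To illustrate the rule with a toy decoder (not real PIC18 semantics, just the top-four-bits property described above):

```python
def decode(word: int) -> str:
    """Toy decode of a 16-bit PIC18-style instruction word."""
    if (word >> 12) == 0b1111:
        # Second word of a two-word instruction: if a skip or branch
        # lands on it directly, it simply executes as a NOP.
        return "NOP"
    return f"op {(word >> 12):#x}"
```

So a conditional skip can always advance exactly one word without parsing the stream: either it lands on a real instruction or on a harmless NOP.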
Your source that Meltdown only affects x86 CPUs literally starts with "Meltdown is a hardware vulnerability affecting Intel x86 microprocessors, IBM POWER processors, and some ARM-based microprocessors." and that iOS needed to be patched.
Also worth noting AMD x86 CPUs were not affected. Meltdown was definitely more a "how good was your implementation" question not a "are you x86" question.
Can you describe a situation where an adversary would be able to run user code on a phone's baseband processor? This is a serious question - I don't know anything about smartphones. Do apps have access to the baseband processor? As an uninformed bystander I would think not...
This comment is downvoted because other archs are vulnerable to Spectre/Meltdown, not just x86. True.
Yet, as of right now, Intel has not fixed all Spectre/Meltdown issues in silicon.
Given the R&D / QA time on a specific arch, I'd bet it is still hardware-flawed if they took an off-the-shelf in-house x86 design. Atom-based maybe?
So I do not feel confident about a chip that runs the baseband, and moreover one that is not auditable by the end user (unlike a main CPU would be).
I would be super interested in knowing if the reverse engineering shows traces of retpolines.
I was working as a Postdoc at Intel around 2005 when they decided to sell their X-Scale business (the ARM CPU they got when they bought DEC). My manager said that embedded x86 will make its way into smart phones. It was a long long shot back then. I admire folks at Intel for their persistence!
Yes, Intel has been a baseband chipset on iPhones for a while now (at least since iPhone 7, can’t remember if earlier), just not exclusively.
And as the original iPhones used Infineon chips (which Intel acquired), depending on your perspective, you could twist it and say they’ve been there since the beginning. Bit disingenuous to me, but I could see someone making that claim.
Well it's a battery powered phone. Decoding x86 instructions and converting them to the processor's internal microcode, unless it's not really the full x86 ISA, is not energy efficient. This means there may be a noticeable battery life difference between the GSM and CDMA versions of the new iPhone.
> Decoding x86 instructions and converting them to the processor's internal microcode, unless it's not really the full x86 ISA, is not energy efficient.
That's a very antiquated view of modern CPU design. Unless you're designing a coin-cell powered CPU the x86 decode stage is effectively free and can be disregarded. Besides: most ARM (and nearly ever other modern RISC CPU) does the same thing, decoding the instructions to internal micro-ops. x86 variable-length instructions vs RISC multiple instructions can be thought of as different instruction compression schemes and which one treats your L1-I cache better depends on workload.
CPU ISAs are effectively an ABI for hardware. Except for the extremely low end, no one directly executes the ISA anymore and hasn't for years.
I suppose if anyone were designing new ISAs they'd design them with that assumption, to avoid baking-in temporary implementation details they'd regret later (see: branch delay slots).
Wait what. Where is this article suggesting conversion of x86 to arm assembly prior to execution? Stop making things up. Unless otherwise proven, it's x86 executable running on x86 intel cores within the modem or related compute.
Modern x86, for the past 25 years or so, doesn't implement the instructions directly. There's an internal intermediate, proprietary reduced instruction set. So outwardly it's a cisc processor, with variable length instructions and a lot of different instruction modes, but internally it's not.
But a 486dx with more integrated cache could be incredibly efficient with modern manufacturing processes. A modern i5-8250 uses under 15W power... we're talking something less than 1/1000 the complexity.
they're not standard ELF files, more likely they're using the ELF format just to have a list of "load address-size-data" stuff assembled with some custom linker script, and they did not bother to change it, probably because of integrity checks or sanity checks along the assembly line
would have been much more fun if they switched to PE format though, like they did with EFI/UEFI :D
In the closed box that this chip is in, as long as it meets the power usage spec it doesn't really matter what arch it is. Not sure what the big deal here is, other than some personal bias the author has against x86.
I wonder if the Dual SIM Dual Standby feature has anything to do with this, even as one of the minor reasons to switch to x86. Even though standby mode itself is usually the least demanding, and so it will just mean doubling of memory...
Seems very unlikely, but from a product perspective it's one (and maybe the only one) of the new features that's related to the baseband.
Doubtful, as Qualcomm baseband chipsets have been in dual sim phones for ages. It’s more likely two reasons:
1) Intel somehow is doing CDMA now, that’s a major reason previous generations were split between Qualcomm and Intel
2) The major reason Intel has a seat at the table is due to Apple and Qualcomm‘s very public fight. Intel and Apple don’t have an entirely happy relationship, but it’s a far better relationship than Apple and Qualcomm
Could this be a preemptive move by Apple to produce more parts in the US, to avoid the new tariffs being imposed?
How many years of R&D are required before production of a new phone these days?
I hope this change will result in new low price SoC board PCs running x86 cores to compete with RPIs entering the market soon.
I suspect the new baseband processors are powerful enough that you can use one part for everything, rather than a different baseband processor depending on your network, which makes things cheaper. Tariffs might be a factor but Qualcomm works with Global Foundries which has fabs both in the US and elsewhere.
ARM scares me far more than Intel or AMD. The way they license their chip designs is what's created the fragmented mobile ecosystem we have today. I remember watching an interview a while back about ARM chip technology with someone from the company, about how their revolutionary virtualization technology could be used to run any OS on their chips... then the spokesperson laughed and said this would never happen, and it would instead be used by licensees to lock down their processors even further. I'm really not a big fan of ARM or the company in general. Intel does some shady things, but ARM is a whole different beast altogether. It's designed with arbitrary software lockouts in mind, and their licensing scheme is not conducive to open development.
So while I am sure the author checked this, it bears mentioning that disassembling CISC is more of a black art than RISC. You can feed any binary file into a disassembler and get x86 code out, even if that code is invalid.
I am, I am not faulting the original author just pointing out you can get disassemblers to come up with x86.
From the comment right under the picture I took
> yeah, pretty typical function prolog, what's the question ?
Except we know it is not.
I am more saying to people be careful pushing any old binary blob through capstone without considering what it might produce, I get this at $DAYJOB where people disassemble VAX from things that are just data.
Since Qualcomm and Intel are the baseband/modem manufacturers, and Apple has little or maybe nothing to do with developing these x86 baseband modules, this should be mostly Intel's/Qualcomm's responsibility to tighten up the security, no? It's like we can't fault Boeing for a plane crash if it's a CFM56 engine failure, right?
They've been in embedded with Atom processors for a while. VIA/Centaur beat them to it with the C3, etc. I'd have assumed Intel would be in mobile eventually if they weren't already. Wonder why x86 is that surprising.
Also, early Nokia 9000 Communicator had x86 CPU. I think it was a 386. Mobile returning to x86 instead of going to x86.
Until 2015 my phone was a Motorola Razr i, which used an Intel CPU with the x86 architecture, so it's not that uncommon.
Also the PS4 uses today x86 for its main cpu. It's not mobile, but comparing it to Z80 is exaggerating a bit.
Reading the spec sheet on it has me feeling in awe for the antenna designer. This chipset claims to be able to simultaneously tune in on 850 / 900 / 1500 / 1700 / 1800 / 1900 / 2200 / 2800 / 3500 / 5000 MHz. Having gone through the "black art" of antenna design for just a few of those frequencies before, I can't imagine trying to cover all of them well, but I also know that if anyone does it well, it's XX's team (not outing the person, as I don't know if it's well known who leads that team at Apple).