This post distills the key parts of the paper and is a good summary. I think the paper itself is an accessible read as well.
There wasn't much discussion two weeks ago when the paper was posted on HN, so I will raise a point I've made before that is consistent with the recommendations of the paper (and another post in this thread): this is an opportunity to improve the terminology, mental models, and formalisms of observable state, and its implications for information hiding, privilege separation, and computer system design.
This conversation needs to occur not only among (e.g.) computer chip designers and cryptography experts, but also among higher-stack users of that technology, so that the information-leakage aspects and trade-offs can be analyzed together with the system's other performance indicators.
It seems as if the haphazard, ad hoc way that chipmakers and system architects dealt with this issue has contributed to an environment where Spectre could occur. Such timing attacks were never a secret, but resistance to them at various levels of mainstream computing appears to have been implemented as a patchwork of hasty fixes and well-meaning but informal caution. The conversation around this topic could use an upgrade, and the paper's authors agree.
I don't know if I'd look at it through that lens. Clearly, knowledge and analyses of this topic are resident in academia and have long been present at various levels of industry, but other factors like performance and power consumption have received generous attention from hardware designers, and makers of system software and application software had to step in where the hardware provided little help for leak-proof sharing. In retrospect, the market may have rewarded hardware makers' direction of pursuit -- until now, but the survivors of those past pursuits remain.
Well let's think it through. It would seemingly have to follow the Disruptive Technology arc. That means it would start off as a crappy computer that had really great information leakage control. And it would be marketed to a small but enthusiastic market who really values that, and doesn't mind that most everything else is a PITA.
Yeah, this is sounding like a good angle to start with. Target users are people who need to squeeze as much performance out of hardware as they can without leaking information while running untrusted code. So, cloud providers seems like exactly where to start.
Cloud providers who target security-sensitive users who also don't have a lot of money (to just pad the hardware budget) would be ideal.
Nah, cloud providers are under enormous pressure to optimize resource usage (real estate, electricity, labor). A hardware design without competitive performance has a snowball's chance in hell of being adopted in that space. The economics of cloud datacenters mean that security only needs to be good enough, not perfect.
I think that this is wishful thinking on your part unless the drastic drops in performance per watt are negated.
I personally think that these performance drops lead to a horrific environmental impact, because every performance drop means that more hardware needs to be provisioned and powered to counter it. So this directly results in more toxic waste from these electronics and a higher carbon dioxide output into the atmosphere (cloud data centers are a major consumer of electricity). Compared to the long-term impact of that, an extra few security breaches sound like the lesser evil to me personally.
It saddens me that we're collectively going to spend a lot of effort trying to patch out a problem that we've imposed upon ourselves. We were making such great progress in terms of processing speed until someone came along and decided that we need to have multiple tenants share the same hardware, and they should have no way of knowing anything about each other. The vast majority of consumer hardware will _never_ be exposed to this category of attack, but will pay the performance penalty regardless.
Fundamentally, the need is for a completely different model of computation to abstract away time-channel leaks. This cannot be fixed by patching existing software and hardware, and we're going to go through a lot of pain and anguish trying. As another comment points out, the well of possible timing attacks is infinitely deep (attached hardware, network performance measurements, etc.).
The two options are performance or security, pick one. It seems the industry is trying to pick both, and it's going to take us a long time to realize that we're going to get neither.
For clarity - my proposal is segmenting hardware and software products between the two categories of "general purpose, trusted computing" and "safe for shared hosting." The 2nd category is so small compared to the first, it seems unfair that its domain-specific problems should hamper the rest of us.
Thanks to a combination of reasonable software mitigations (unprivileged, lower-resolution timers) and the fact that most of these attacks require arbitrary code execution in the first place, the vast majority of consumer hardware is unlikely to ever face this category of attack in practice.
Alternatively, run them in an isolated context with no websockets, limited access to timing (a second or two of precision, so a lot of sampling is needed), limited CPU and memory utilization, no sound, no GPU acceleration [likely another large side-channel surface], etc. Ah yeah, and delete all their cookies while we're at it.
Running somebody else's code is putting a ton of trust in them. Maybe someday we'll have great sandboxing, but for now, it makes no sense to let random sketchy origins just run whatever code they please on your device.
Make these web pages run on a crappy VM. Yank most of the JS bloat and the JIT compilers out and you're good. Maybe insert arbitrary nondeterministic delays into the interpreter as well for good measure. Compared to sacrificing total desktop performance, this is a perfect tradeoff.
Yes, so? Unless I'm misunderstanding what you are talking about, these are exposed to (untrusted) web pages through the web browser; the pages do not have a means of accessing them directly. So the browser can take measures to avoid issues (e.g. decreasing the exposed timing resolution, as browsers now do).
Then you can give your (trusted) native applications all the available resources without artificial constraints.
(Of course this assumes you see most native applications as trusted, but personally I see any alternative as too Orwellian to get behind.)
> The vast majority of consumer hardware will _never_ be exposed to this category of attack
> Thanks to a combination of reasonable software mitigations
That is time protection. Restricting access to system timers wasn't enough here; mitigations also need to prevent user-created high-resolution timers, which is why useful features like SharedArrayBuffer had to be disabled: they can be used to construct synthetic timers.
> The two options are performance or security, pick one.
Those are not the findings of the paper: "Across a set of IPC microbenchmarks, the overhead of time protection is remarkably small on x86 [1%], and within 15% on Arm."
I think if there's a segmentation to be made, it's "general purpose, untrusted computing" and "trusted high performance computing". The second category would be the reserve of such projects as physics simulations and render farms.
> The second category would be the reserve of such projects as physics simulations and render farms.
Or smoothly scrolling, 60 fps canvas apps rendered in browsers. I think there can't be an apartheid between untrusted and trusted, because developers would push the user to make their software trusted to get the max performance (and the users would just agree).
> developers would push the user to make their software trusted
I don't see it going this way. This is comparable to virtual memory and MMUs. When there is support in hardware, the speed benefit of not using it is negligible (as shown by the 1% difference demonstrated in the research).
When it is not needed, there is a benefit of not implementing it in hardware, and saving power and die area. For example, GPUs (traditionally) and crypto mining hardware do not employ MMUs.
> someone came along and decided that we need to have multiple tenants share the same hardware, and they should have no way of knowing anything about each other
Isolation between apps on mobile phones is really important. That's a huge part of the computing landscape in terms of number of devices deployed, and falls within the 2nd category. I don't think it's realistic to dismiss so easily.
Not the GP, but I think this complaint is about workstation and on-premises server performance. There, the software fixes intended for cloud servers are already throwing the baby out with the bathwater for little to no security benefit. All these changes accomplish in that space is to waste time and energy by ruining hardware performance.
One issue I see here is that time protection would need to extend to anything shared, not just CPU micro-architecture. For instance, if a hard drive has a DRAM-based cache, that could be used as a timing channel, and the complexity of flash file systems opens up all kinds of potential leaks. In the case of two processes sharing network access, one process could conceivably estimate another's network access patterns implicitly by measuring latency through shared switches or drops due to buffers being filled. Mitigating this would require some kind of coloring support that goes as far as your ISP's switches, which seems impractical.
> One issue I see here is that time protection would need to extend to anything shared,
We might see this issue as an opportunity. That is, by naming a concept called "time protection" we expose all the things these subsystems are doing and make them easy to reason about. We can now say "Oh good, XYZ improves best-case speed, but sadly it also compromises time protection".
Having such a language means the industry can slowly start improving these things rather than sweeping them under the rug. It will not stop the improvement from being slow and difficult.
I think there needs to be a little bit of contribution from both the hardware and software sides. A "sufficiently stupid" application is not possible to protect. We need a set of best practices and guarantees such that you won't leak if you follow them.
And there may even need to be a third part. An understanding that nothing can be fully protected.
As an alternative to this approach, I wonder if it's possible to push all sensitive computations into a few small components, and rewrite those components carefully to obscure any information that could be obtained from timing?
Branchless equivalents can be written for practically any code, and you can force-prefetch memory regardless of branch direction (it's an intrinsic in most C compilers), though this loses the performance benefits of branch prediction.
It would require doing this for any new or existing code that exposes timing information; the current timing fixes and isolation patches are much smaller in scope.
It would make more sense in user code, like browsers, media players, etc., to remove the influence of branch prediction regardless of the host OS, just like GCC does with retpoline insertion.
I wonder whether, if high-resolution timers were privileged, we could get by with lower-resolution timers. I'm not sure any timing attacks would work with second or even millisecond resolution timers.
I don't see how handling this at the programming-language level could help, and whether timing is privileged or not is built into CPUs, so there's not much we can do about that. But this seems like a plausibly acceptable way to deal with speculation: permit speculation, but make it privileged to detect whether speculation occurred.
I don't think so. I think the whole point is that the OS provides time protection in the same manner it provides memory protection: completely transparent to the application. Just like your typical user-mode application does not need to worry about virtual vs physical addresses, I'd say the typical user-mode application would not need to worry about the effect of time as well.
Just a thought, but can't the OS prevent applications from knowing anything about other applications? Rather than isolating apps by flushing/coloring everything, couldn't the OS simply keep apps from knowing what else is running? Two apps can't communicate, nor can one app spy on another, if neither knows what else was or is running.
(Not sure why this is wrong, but confident it must be.)
These exploits work on a hardware level, and do not require the malicious app to know what else is running. For example, a VM does not know what other VMs are running on the same host in AWS, but Spectre/Meltdown still affected AWS hosts. They are reading data other apps have written to memory.
Application versus system level. I think the parent is saying that since the application is in a unique position to know what information is privileged, it would be better to make available a library of constant-time functions that are resistant to timing attacks than to constantly pay the performance cost of blunter system-level boundary enforcement.
However, I'm not sure how much merit this argument has, since very few of the applications that need this level of protection even bother with it.
That works fine when an application just wants to protect a private key in memory. But if you want to build, say, an application where a user enters via keyboard information that you want to protect from another application, you have to worry about keystroke timing attacks. That means the application needs to hide whether it did anything at all in a given time slice, which can be inferred from the micro-architectural information discussed in the article.
In the famous Intel case, the ability to reference kernel addresses in user-space instructions opened the door to exposing kernel data. So, don't permit kernel addresses in user-space instructions? Trap that instead of getting in the way of efficient code execution everywhere.
I would like to see how Boeing and the FAA decided to pretend that all was fine after the first crash; there were enough clues that MCAS had issues, and I would like to see how it was decided that it was safe to keep flying while the software fix was not yet deployed.