Amazing. Did they need to jailbreak or physically open the phone to find all this stuff? They talk about reversing binary images and using their "Legilimency" toolkit; I wonder if a vanilla phone was enough to research all this and propagate through Wi-Fi.
I'm guessing there must be other jailbreaks involved to be able to observe and experiment on the iOS kernel side of things while developing the Wi-Fi chip exploit; going in blind from the Wi-Fi side alone sounds impossible. The question now is: are they sitting on 0-day jailbreaks for current iOS versions, or did they have to do all the tests on legacy iOS versions?
It looks like the setup work for their research environment was all covered in part 1 (all the parts are really interesting and worth a read if anyone hasn't already, incidentally). Specifically, the reason they mention at the end of part 3 that
>The exploit has been tested against the iPhone 7 running iOS 10.2 (14C92).
was because iOS 10.2 has a known kernel exploit developed by Ian Beer, and they used that as part of the basis of subsequent research. Presumably they either found some iPhones still running 10.2 (which stopped being signed a long while back) or, like many well-funded researchers, just keep a set of different iPhones loaded with major iOS versions so they're ready to go for research if an exploit is found after signing stops (dedicated jailbreakers sometimes do the same thing if they can). And of course security patches themselves are handy for reverse-engineering old exploits from whatever bugs Apple fixes.
In part one, read under "Kernel Memory Analysis Framework".
I think it’s safe to assume that most people turn off wifi when the available wifi network sucks and they want to switch to cellular. This is by far the most common reason, and it’s also what they believe the toggle accomplishes.
What they instead achieved up to iOS 10 was:
* worse location data in maps
* airdrop does not work
* AirPlay might not work (doesn’t work across networks)
* Handoff doesn’t work
* phone call and sms forwarding doesn’t work
* applications don’t auto update in background anymore
* system updates are not downloaded in background anymore
* might waste their data plan
I think it’s impossible for people to know and keep track of all these side effects. It’s much better to change the UI: have the common button do what people think it does (get off the current network), and put the more comprehensive shut-off a couple of taps deeper in Settings.
No, what pressing the wifi icon button did on every single wifi-capable phone ever made until iOS 11 was turn off wifi. Not just temporarily, but specifically until the user decided to turn it on again.
This is how it was even before smartphones. Android keeps wifi location scanning and various other things running, even if you turn off wifi, so it actually accomplishes what people want: to turn off wifi networking until they turn it on again.
But that's just not how human goals actually work. Nobody - except maybe a wireless radio engineer - wants to "turn off wifi". Nobody has that as their actual goal - turning off wifi is a way to accomplish some goal. That goal might be "make the internet work better (by using LTE instead)" or maybe "stop distracting me with notifications from the internet" or something else. But "turn off wifi" doesn't make sense as a goal in and of itself, and so Apple is trying to do something that better maps to what people want.
Now, whether they've done so correctly - both from the perspective of what actually happens, and how it is communicated to the user - that's certainly an issue and it's clear they haven't executed this well.
My main sorrow with the clearly misleading wifi switch is falling prey to those nasty MAC-address trackers in shops.
The other reasons you list are minor issues; who needs constant background updates, phone-call forwarding to their Mac, AirPlay, and Handoff while on the go?
Yes, I’ve also forgotten I’d switched off wifi and burned through my data volume, but we shouldn’t dumb systems down. People need to understand cause and effect, especially in IT.
On Android 7.0, the cellular icon in the status bar makes it very clear when you are not on WiFi. The icon serves as a reminder for me since I usually switch off WiFi in the morning and re-enable it at home. I don't recall ever forgetting cellular on.
Regardless, I think Apple could have come up with a more user-friendly solution. This just looks like a lazy hack to be honest.
I'm not defending or evangelising Apple's current solution.
I'm led to believe it's not iPhone users specifically, but people in general.
I've worked in IT, but qualified as a tradesman nearly a decade before, and I occasionally forget to turn wifi back on when I get home. I currently work for a large steel fabrication company. One of the project managers here doesn't even use email.
It's way too easy for the average person to forget to turn wifi back on, blow all their mobile data, and get slugged with overage charges.
In a similar fashion, it's not hard to see and feel when the tyres on your car need a bit of air, but we still mandate tyre pressure monitoring systems.
We are, for good or bad, reluctant to regulate software systems. So, I guess, as always, if we think of a better design we should probably make a demo or promote it; maybe iOS / Android will pick it up along the way.
On my 6+, the sequence is: press home button, tap Settings, Wi-Fi, Off. That's 3-4 depending on whether you count the home button.
On a 6s or newer, with 3D touch, you can cut it down to 2-3: unlock, force touch on Settings and toggle WiFi from the menu that appears. (That might be 1-2, I forget whether you can force touch and drag to what you want to activate, or whether it has to be a separate tap.)
A better way would be to: (1) use either GPS or cell towers to specify some sort of geographical area (with user input of course), and then (2) allow the user to specify what happens to WiFi/3G when entering and/or exiting that area.
Llama on Android has been doing this for years, but the UX was a mess in my opinion. Apple could streamline the setup process, throw in some "amazing"s and "revolutionary"s, mix that with Touch/FaceID, and voila, you have a viable solution.
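The enter/exit logic in (1) and (2) above can be sketched in a few lines. Everything here (the `Geofence` class, the callbacks, the coordinates) is purely illustrative, not a real iOS or Android API:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class Geofence:
    """A circular region with actions fired on enter/exit transitions."""

    def __init__(self, lat, lon, radius_m, on_enter, on_exit):
        self.lat, self.lon, self.radius_m = lat, lon, radius_m
        self.on_enter, self.on_exit = on_enter, on_exit
        self.inside = None  # unknown until the first location fix

    def update(self, lat, lon):
        inside = haversine_m(self.lat, self.lon, lat, lon) <= self.radius_m
        # Fire only on transitions, not on every fix.
        if self.inside is not None and inside != self.inside:
            (self.on_enter if inside else self.on_exit)()
        self.inside = inside

home = Geofence(51.5074, -0.1278, 150,
                on_enter=lambda: print("Wi-Fi on"),
                on_exit=lambda: print("Wi-Fi off"))
home.update(51.5074, -0.1278)  # first fix: inside, no transition fired
home.update(51.52, -0.13)      # ~1.4 km away: prints "Wi-Fi off"
```

A real implementation would of course hang off the OS location service rather than polling, but the transition logic is the whole trick.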
Android does the same thing, more or less, now. For quite a while Wi-Fi has not been truly off unless you access a buried setting—it's scanning for access points for location data. Now in the latest release the Wi-Fi will turn itself back on after having been turned off once you're near a known access point. I never really had an issue with this, as you say it is not that hard to notice the icon in the status bar, but I will admit it is a nice touch. Granted, I have already signed my soul over to Google so I have little to care about.
Many apps will prompt before doing a large download over data. Spotify has separate settings for mobile data and Wi-Fi streaming quality. One could imagine a video app would prompt before streaming on mobile data. I'm pretty sure this is the solution—perhaps the Android or iPhone media framework itself could implement something that would warn people if app developers are often forgetting to add this feature?
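A minimal sketch of what such a framework-level guard might look like. The `NetworkState` type, the threshold, and the function names are all invented for illustration and don't correspond to any real iOS/Android API:

```python
from dataclasses import dataclass

@dataclass
class NetworkState:
    type: str      # "wifi" or "cellular"
    metered: bool  # carriers can mark even Wi-Fi hotspots as metered

def should_prompt(net: NetworkState, size_bytes: int,
                  threshold_bytes: int = 50 * 1024 * 1024) -> bool:
    """Prompt only for large transfers over metered or cellular links."""
    if net.type == "wifi" and not net.metered:
        return False
    return size_bytes >= threshold_bytes

def start_download(net: NetworkState, size_bytes: int,
                   confirm=lambda: True) -> str:
    # The framework asks the user before burning through their data plan.
    if should_prompt(net, size_bytes) and not confirm():
        return "cancelled"
    return "downloading"

# A 200 MB download over cellular triggers the prompt; over Wi-Fi it doesn't.
print(start_download(NetworkState("cellular", True), 200 * 1024**2,
                     confirm=lambda: False))  # prints "cancelled"
print(start_download(NetworkState("wifi", False), 200 * 1024**2))  # prints "downloading"
```

Putting the check in the media/download framework rather than in each app is exactly what would catch the developers who forget.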
No, it does not if the wifi chipset is disabled, because the phone uses wifi for location services (GPS alone would use way too much battery). That's exactly the problem: the wifi chipset is used for much more than just connecting to the internet.
It did, but I'm not sure how well it really works. In my own experience, I still see lots of networking failures if I'm far enough from my house for the network to be dodgy but not so far that it disconnects, or if I connect to crappy public WiFi.
Cellebrite got into that phone. A presenter from the firm told us so. Apparently 300 devs work full-time on mobile devices in Israel to develop iOS/Android exploits, mostly for law enforcement or despots.
He talked quite a bit about what you can get off the devices, but not much on how to get in. Apparently Android-encrypted phones are the safest though; they didn't have an exploit for them 2 months ago.
> Apparently Android-encrypted phones are the safest though.
That's odd. I guess the implication is that the iPhone's HSM is broken (or they can get past a short PIN via an exploit that allows brute forcing; typically an HSM should be configurable to permanently destroy the keys after N attempts).
I suppose it demonstrates that secure encryption requires the user to memorise something equivalent to 96-128 bits of entropy, to be used for key derivation.
[ed: I suppose it's conceivable that there's an attack against how the iPhone generates symmetric encryption keys, but I would guess that's less likely]
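For scale, the arithmetic behind those figures is just length × log2(alphabet size); a quick sketch:

```python
import math

# Entropy of a uniformly random secret: length * log2(alphabet size).
def entropy_bits(alphabet_size: int, length: int) -> float:
    return length * math.log2(alphabet_size)

print(entropy_bits(10, 4))     # 4-digit PIN: ~13.3 bits
print(entropy_bits(95, 20))    # 20 printable-ASCII characters: ~131 bits
print(entropy_bits(7776, 10))  # 10 Diceware words: ~129 bits
```

So hitting the 96-128 bit range honestly means something on the order of a 20-character random string or a 10-word Diceware passphrase, which is exactly why almost nobody does it.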
The iPhone encryption from San Bernardino had a 4-digit PIN + a long salt, and the long salt is in the iPhone's Secure Enclave. However, the phone would erase itself (I don't know if it erases just the salt or everything) after 10 tries. If they were able to image the phone and get the long salt, the keyspace is only 10,000, which is trivial to search on a cheap computer today. I believe you can input a long passphrase for iPhone security, and then you'd be back to the problem of a complex passphrase.
Android gives you the option to input a secure passphrase for key derivation, but you can also use a 4-digit PIN or similar non-secure passphrase and be just as vulnerable. I am not as familiar with Android's additional security measures (I think it has a similar one where too many incorrect passphrases cause it to erase itself).
As far as I remember, they were able to make copies of the iPhone. (I guess similar to a nandroid backup on Android devices. I explicitly asked if that needs root, and he said they don't need root or any modified bootloader stuff at all.)
They also had jailbreaks/exploits for 10.2 (or whatever the latest version was ~2 months ago).
There's also a relatively low attack value and attack surface for encrypted Android phones vs encrypted iPhones. Everyone who runs an iPhone has it encrypted, while relatively few people running Android devices have them encrypted. In terms of attack surface, the Secure Enclave has many APIs, some of which have had vulnerabilities in the past, and it's quite possible to envision a scenario in which others were found and they're able to dump keys from it. It's also quite common on iOS to have weak PINs and similar low-security measures; even just bypassing the mitigations against brute-force attacks could let them into a huge number of devices. On the other hand, people turning on disk encryption on Android are likely paranoid people who'll set giant passwords. So in terms of a numbers game, even a more basic exploit against iOS would look much more valuable.
In the Android case, often times you need to power off the device to really be protected as the key is just sitting in RAM. But if you've got a powered off Android device that's been encrypted, chances are you have a good challenge on your hands - there's nothing but the encrypted data on disk to work with unless you were to go to an active attack.
Also, encryption by default and a much larger user base mean there is more focus on iOS than Android (like the old Windows-versus-Mac virus argument). The difference I see is that you are much more likely to get compromised by an application on Android than on iOS. And since Google has been very friendly with the USG, I would find it much more likely that, Enclave or not, NSA-weakened crypto will be the demise of your Android rather than exotic exploits of your wifi. And if you're paranoid, you carry a Nokia 7715 and extra SIMs, or you run something Debian-based like Purism.
> weakened crypto that will be the demise of your Android rather than exotic exploits of your wifi.
I don't think there's any truth to this - if the crypto were weakened you'd see it broken quite quickly - but it's quite strong and follows well-accepted standards in the cryptography community; have a look yourself if you like. It's using dm-crypt, and dm-crypt is fairly heavily tested and reviewed. Debian and likely Purism use the exact same thing, so they certainly wouldn't be any better in that way.
What is the story with Project Zero? What is the strategy here?
If you think about it, pointing out flaws in competitors' products is actually unusual for businesses, especially large ones. It raises questions of motives, of trust (are they drumming up business in a negative way? Can I trust what company X says about their chief rival? Are they exaggerating or spinning it?), and it looks unsavory: You don't win in the court of public opinion by insulting the competition, right or wrong; you just look like a jerk. Also, there's a liability risk, which adds legal costs to otherwise free blog posts - 'can't you guys just find Linux bugs?'.
On the other hand, it might improve security for everyone if Apple and Google started competing to publicize each other's flaws. :) (But I'd bet the noise of accusations and counter-accusations of errors in analysis, misleading statements, etc. would soon drown out the technical info, and then the lawsuits would begin ...).
I don't think Project Zero ever analyzed something that isn't used at Google (for example with the Apple stuff: somebody at Google has to build the Google iOS apps).
Wanting to know what's going on on the corporate network is the job of a corporation's IT security unit.
The publications serve to force vendors to fix their mess. Microsoft already complained that the 90 days limit by Project Zero is unfair (and got a 14 days-to-next-patchday extension). And there are other experiences from researchers adhering to "responsible disclosure" schemes where the vendor only became active once publication was a real threat.