IMHO, contrary to the tone of the article, this is cause for celebration, like every other time DRM is broken. Now all the proprietary firmware in those otherwise useless/insecure IoT devices etc. can be more easily reverse-engineered and replaced, possibly driving more hardware reuse and reducing e-waste.
DRM can be used for good as well as bad; it is about control. Most DRM is used by corporations to control you. But you can use things like this readout protection and Restricted ("Secure") Boot to get control over your own devices. If the BIOS lets you use your own keys (many do), Restricted Boot prevents attackers from booting unauthorized software on your computer. Similarly, readout protection just hides the code on the device; this is useful for anyone who wants added security (security through obscurity is not perfect, but it is always helpful).
In that regard this news is bad since it means a security tool has been broken. But it is good in that the security tool was very often used by evildoers.
Of course both of these examples rely on you trusting the BIOS & RDP implementations - ideally they would be open source.
> Similarly readout protection just hides the code on the device; this is useful for anyone who wants added security (security through obscurity is not perfect but it is always helpful).
Trying to hide your code is a stupid thing to do. Trying to hide cryptographic keys is more useful (though still often only used for DRM applications) but preventing people from dumping your firmware is misguided.
> Trying to hide your code is a stupid thing to do
That's not what I hear from reputable reverse engineers, at least for IoT devices.
Even though security by obscurity should be frowned upon, and it's understood that hiding code can give a false sense of security, most RE workflows assume the firmware is available; it is a giant pain to start breaking platforms where unknown firmware must be manually extracted first, especially the boot loader.
I've said before on this site, obscurity is a totally valid tactic to impose additional costs on attackers. It's best to think of it more as a preemptive strike than a defensive layer. Thinking of it as a defensive layer can lead to complacency, but thinking of it as an opening gambit is totally fine.
One shouldn't be beholden to heuristics like "never use security through obscurity"; one should understand the systems they're building and make considered choices.
Additionally, preventing people from dumping your firmware is usually not about security as much as it is preventing some fly-by-night company from reversing your product & selling it as their own. Why engineer a product when you can steal someone else's IP?
It also gives security engineers who are looking for bugs in your code more reason to hate you, and heaven forbid that they have less perseverance than a determined attacker. Also, what’s so special about your slow and buggy libc and statically linked in crypto?
If you hire those security engineers, you can give them access to the source code. If you didn't hire them, there is no way to tell benign attackers from other attackers, and they can deal. It probably won't give them much trouble anyway. :)
> If you didn't hire them, there is no way to tell benign attackers from other attackers, and they can deal.
The better way to deal with this is to make your firmware secure regardless of whether someone can pull it off your device. Making security engineers’ lives difficult just means that you’ll find out about bugs from the news when they’re being sold to authoritarian countries to suppress dissidents, rather than from the paper on firmware security that a graduate student was going to write before they decided to move to a much nicer platform.
There are two problems with that. The first is that my thesis is explicitly that this is not a defense and is not an excuse for poor memory handling etc. in your firmware. (And the more I invest in creating robust firmware, the more I stand to lose if someone rips off my product & undercuts me - security risks are not the only type of risk.)
The second is that the notion that I should rely on the charity of unpaid graduate students to discover bugs in my firmware is both inequitable and unsound.
> obscurity is a totally valid tactic to impose additional costs on attackers
But there's the rub: you'll only impose additional costs on the least sophisticated/determined adversaries. While that works to keep random script kiddies/scans out, I'd argue it has little to no effect if you require serious security guarantees.
It imposes costs on all attackers. The value of that cost is skill dependent, but no one has unlimited time on their hands. In other contexts, like hiding an admin login page, shutting out low skill attackers means your log files have better signal to noise, and you can focus more resources on the more significant threats.
The reason I say to think of it as a preemptive strike rather than a defense is that you still do need strong defensive layers.
This is basically just setting a compiler flag. It's free for you and costs something for the attacker.
As someone who is currently working on a firmware reverse-engineering project (with others who actually know a lot more about what they’re talking about!) pulling tricks like these is just a massive annoyance that we’ll usually get around anyways; we’ll just curse you the entire time we’re doing it.
To be fair, "restricting" flash readout while allowing hardware debug access always seemed like a minefield, and I would hope that anyone with a security sensitive application would have seen this from a mile away.
You could have a completely bug-free, constant-time, constant-power cryptographic library running on one of these microcontrollers, and debug access would allow you to reliably extract encryption keys just by examining the execution path.
The amount of processor and system state that you have access to with a hardware ARM debugger is crazy, but that isn't really the problem -- you can extract a ton of state with a minimal debugger too. Just a log of instruction pointer values would get you 90% of the way there.
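To make the instruction-pointer point concrete, here's a toy Python model (the addresses are made up and this is not real firmware): a naive square-and-multiply exponentiation loop branches on each key bit, so a bare log of which code blocks executed hands an attacker the whole exponent.

```python
# Toy model: a key-dependent branch means the address trace alone
# reconstructs the secret. Addresses are hypothetical flash locations.
SQUARE_ADDR = 0x08001000
MULTIPLY_ADDR = 0x08001040

def modexp_traced(base, exp, mod):
    """Left-to-right square-and-multiply; records which blocks execute."""
    trace = []
    result = 1
    for bit in format(exp, "b"):
        result = (result * result) % mod
        trace.append(SQUARE_ADDR)
        if bit == "1":                 # key-dependent branch
            result = (result * base) % mod
            trace.append(MULTIPLY_ADDR)
    return result, trace

def recover_exponent(trace):
    """Attacker reconstructs the exponent bits from the address log alone."""
    bits = []
    for addr in trace:
        if addr == SQUARE_ADDR:
            bits.append("0")           # assume 0 until a multiply follows
        else:
            bits[-1] = "1"             # multiply right after a square => bit was 1
    return int("".join(bits), 2)

secret = 0b101101
_, trace = modexp_traced(7, secret, 1009)
assert recover_exponent(trace) == secret
```

This is exactly why constant-time code matters, but as the parent notes, a debugger that can read registers and memory doesn't even need the trace.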
I think it's reasonable to assume that microcontrollers with exposed debug interfaces simply cannot be made secure, just as people generally assume that it's game over once someone has physical access to a computer.
Yup - this exactly. The JTAG fuses should be blown on all devices that need to secure their flash (or secrets).
Working on these specific processors around 5 years ago, we implemented a serial-port-based "unlock" that would generate a challenge/response from the device that, if correctly answered, would unlock the JTAG while the chip has power (it locks again when it loses power). This worked great - we spent a lot of time on the UART driver to make sure it was super simple and robust during the period when it could listen to incoming bytes (no interrupts etc...).
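Something like that challenge/response flow can be sketched in a few lines of Python (this is my guess at such a scheme, using HMAC-SHA256 over a shared secret; the actual protocol described above may well differ):

```python
# Sketch of a JTAG-unlock challenge/response: the device emits a random
# challenge; the host must return an HMAC of it under a shared secret
# before the device unlocks debug access until the next power cycle.
import hashlib
import hmac
import secrets

UNLOCK_KEY = bytes(32)  # placeholder; in practice a per-device secret

def device_make_challenge():
    return secrets.token_bytes(16)

def host_respond(challenge, key=UNLOCK_KEY):
    return hmac.new(key, challenge, hashlib.sha256).digest()

def device_check(challenge, response, key=UNLOCK_KEY):
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    # constant-time compare so the check itself doesn't leak
    return hmac.compare_digest(expected, response)

ch = device_make_challenge()
assert device_check(ch, host_respond(ch))   # correct response unlocks
assert not device_check(ch, bytes(32))      # wrong response stays locked
```

The fresh random challenge is what prevents simple replay of a previously captured unlock exchange.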
I've talked to Johannes Obermaier in the past... very nice guy. It's not their first bypass, and hopefully not the last either.
PS. I actually have yet another STM32F1 RDP bypass in my archive, waiting to be published. It used a technique where the MCU writes its own debug registers... pretty crazy stuff. If only I had some free time to write a proper publication about it...
I think you can thank this sort of hack for the widespread availability of cheap cloned "ST-Link" debuggers. They use STM32F103 or F102 chips inside, with firmware that was probably lifted from the debuggers on ST's evaluation boards.
As recently as a few years ago, it was unusual to see standalone debugging hardware in the $2-20 range. Sometimes I wonder if ST bristled at the...reuse...of their IP, but it probably did more to promote STM32s as a learning platform than anything that ST did in that time period.
> but it probably did more to promote STM32s as a learning platform than anything that ST did in that time period.
...and thus drive further product sales in the future. If you think about it, sales of development hardware are not going to be frequent nor recurring, while sales of the actual product dominate their profits.
I'm personally glad that companies are starting to see the advantages of freely available documentation and cheap development hardware, and the days of 4/5-figure development boards with secret NDA documentation are slowly passing; ST was (and in some ways still is) one of the notoriously closed ones.
I am not sure if it was always the case, but at least with ST and NXP/Freescale you can download the firmware for their debugger from the website for free. I suspect that it was a strategic decision by ST to release their dev kits for cheap (<$10 for an STM32 dev board with programmer!) to drive developer/hobby/edu interest in hopes of people using their chips in production.
Come to think of it, I think it was actually TI and the MSP430 that started the trend with the $4.30 kits with a socketed msp430 micro and onboard programmer. ST was the first to try it with an ARM as far as I know...
There's actually even more than this to low price. I have seen knock-off ST-Link dongles with STM32F103C8 MCUs that are not supposed to have enough flash memory for the stlink firmware, yet they functioned.
How? Turns out they physically have more flash memory, but the accessible flash area is limited by programming tools and documentation to 64KiB (most likely price segmentation, but maybe there's a flash page remapping mechanism that would allow binning devices based on manufacturing yield).
tl;dr: The processor protects data accesses to the internal flash while the hardware debugger is connected so people with hardware access can't read out the code and config. But this protection only applies to the data side of the Harvard architecture buses. The instruction bus is used by the hardware to fetch the reset vector on a hardware reset. But the vector table is under software control. So by changing the reset vector to point to an arbitrary address in flash, then resetting the CPU under the debugger, you can get it to load your desired word from memory into the PC.
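The tl;dr above can be modeled in a few lines of Python (a toy simulation with hypothetical addresses and a pretend debug probe, not the real attack tooling):

```python
# Toy simulation of the bypass: data-side flash reads are blocked while
# the debugger is attached, but a hardware reset fetches the reset
# vector over the instruction bus, and the fetched word lands in PC.
FLASH = {0x08000000 + 4 * i: w for i, w in enumerate([0xDEAD, 0xBEEF, 0xCAFE])}

class ToyMCU:
    def __init__(self):
        self.vtor = 0x08000000  # vector table location, software-writable
        self.pc = 0

    def data_read(self, addr):
        # what RDP is supposed to guarantee under debug
        raise PermissionError("RDP: flash read blocked on the data bus")

    def reset(self):
        # instruction-bus fetch of the reset vector (table base + 4)
        # is not covered by the data-side protection
        self.pc = FLASH[self.vtor + 4]

def dump_word(mcu, addr):
    """Leak FLASH[addr]: aim the reset vector at it, reset, read PC."""
    mcu.vtor = addr - 4
    mcu.reset()
    return mcu.pc

mcu = ToyMCU()
assert dump_word(mcu, 0x08000004) == 0xBEEF
```

Repeating `dump_word` over the whole flash range reads out the firmware one word per reset, which matches the "reset the CPU under the debugger" loop the summary describes.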