If you have seen electric wiring inside houses from the 1920s, that's the state of IoT and internet-connected devices today. Bad things must happen first before standardization and regulation follow.
Consumer devices and gadgets are not my main concern. Internet-connected building automation is in a similarly sorry state. Someone will pull off a large-scale attack on apartment automation systems - maybe just a single manufacturer is targeted, and as a result 5-15% of apartments go nuts at once. Just messing with the air conditioning can kill old and sick people before things get fixed.
Messing with the air conditioning can affect a lot more than just the people who have the vulnerable IoT air conditioner, if it manages to bring the grid down.
The grid is able to cope with fluctuations in demand, but that's a totally different ballgame than switching massive loads on and off, synchronized to within something like 20 milliseconds (a single 50 Hz cycle), in a controlled, intentional and malicious way - and potentially worse, the attacker could observe the grid and react to the countermeasures (e.g. to detect how quickly the grid reacts, and trigger oscillations in some system never designed to deal with something like that).
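The oscillation scenario can be sketched with a toy single-bus model (all constants here are invented for illustration, not real grid parameters): frequency drifts with the generation/load imbalance, while a governor slowly steers generation back toward demand.

```python
# Toy illustration only - a single-bus "swing equation" model with
# invented constants, not a real grid simulation.
def simulate(attack_mw, period_s, steps=20000, dt=0.001):
    f = 50.0            # grid frequency (Hz)
    p_gen = 1000.0      # generation (MW)
    base_load = 1000.0  # steady demand (MW)
    inertia = 100.0     # made-up system inertia (MW*s/Hz)
    gain = 200.0        # made-up governor response (MW per Hz per s)
    worst = 0.0
    for i in range(steps):
        t = i * dt
        # attacker toggles a synchronized block of load on and off
        attack = attack_mw if (t % period_s) < period_s / 2 else 0.0
        load = base_load + attack
        f += (p_gen - load) / inertia * dt  # imbalance moves frequency
        p_gen += gain * (50.0 - f) * dt     # governor chases 50 Hz
        worst = max(worst, abs(f - 50.0))
    return worst
```

In this toy model the unattacked grid stays at exactly 50 Hz, while a synchronized 50 MW toggle (against a 1000 MW base) swings the frequency by well over 0.05 Hz - the same 50 MW spread out randomly across time would mostly cancel itself out, which is exactly the attacker's advantage.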
I personally see Brickerbot as the IoT version of Shodan. Port scanning and probing is considered illegal and immoral in many countries around the world, yet Shodan has made the Internet more secure by opening up access to port scanning and making it impossible for companies to just "hide" from such tools.
I'd prefer to see ISPs/governments taking action against dangerous IoT devices in their network (sending probes to vulnerable devices and blocking Internet access until the owners of the devices have secured their shit or provide proof of running a honeypot). We can't really expect such measures from companies right now, but a tool like Brickerbot might kickstart a movement.
I'd prefer the bot to just change the password and disable vulnerable services though (which still might brick devices whose web servers are vulnerable). Still, I do believe that any device that fell to Brickerbot would have fallen to some other botnet within days anyway, so the botnet is not inherently bad in my opinion.
This problem should start disappearing as soon as legislation is introduced by governments to make the parties producing software or hardware responsible for the stuff they dump on the open market. Until then, steps have to be taken to stop DDoS attacks as they are getting worse and worse.
> Port scanning and probing is considered illegal and immoral in many countries
Do you know of a country where this is true? I know there are some broadly worded laws in England against the use of "hacking tools" but there are so many legitimate uses of port scanning that it'd be hard to explain why a port scanner is any more of a hacker tool than traceroute is.
In the UK it's not illegal per se, but if you go by the letter of the law you could be convicted under the Computer Misuse Act for it. But at the same time you could be convicted for sending a HTTP GET request to a server you don't have permission to access - this law is rather broad.
I'm not sure if there's even been a case to test it though, and as the general population becomes more tech-savvy it seems unlikely such a conviction would be made for port scanning on its own.
There are a bunch of cases documented over at nmap.org [0]. Also, regardless of the law, if you try to do a full probing port scan of certain military IP ranges in many countries, you might get a visit from a couple of not-so-friendly people.
>as soon as legislation is introduced by governments to make the parties producing software or hardware responsible for the stuff they dump on the open market.
AKA the plausible end of open source. Once writing any program and sharing it can get you sued, people will stop doing that.
That depends on how the legislation is written, of course. If you provide an open source package for free, there's no real transaction, so you can't hold anyone responsible. Same if it's a free closed-source product. But if you sell software and neglect security issues within the warranty period of said product, you should be held accountable, open source or not.
Such legislation should not be there to allow (class action) lawsuits but should be upheld by a government body, responding to complaints from the general public.
Problems with open source can also be solved by requiring companies who do not wish to take responsibility to give users the choice to either sign a waiver (explicit, no TOS bullshit) or return the product immediately for their money back. With open source software there is no money given, so no problem. With closed-source software, this highlights the vendor's behaviour regarding security support and might make consumers think twice before going with certain vendors.
Another way to do this would be to require vendors to put a clearly visible, standardised sticker/tag/image on their products detailing the support life cycle (warranty / software updates / security updates), similar to the nutrition information found on many food products. That way, consumers can shop around or hold a company responsible if their smart thermostat suddenly stops working because the company behind it got bought out by Google.
There are tons of variations of bases for legislation, but I don't see why physical and digital goods are that different.
If my CCTV system short circuits and causes a fire, the company behind it can be held responsible for not recalling the devices if the flaw was well known. If my CCTV camera has a known flaw that lets hackers in without authentication to record my alarm code so that they can break in, suddenly we're in the wild west of software support where you're on your own. Why is there such a difference?
If they can be bricked, perhaps they should be bricked. Maybe this is something we should promote going forward to encourage security in IoT? Maybe we should hold a Brickcon where security researchers try to develop ways to brick insecure devices before they become a threat.
No vigilante justice required. Just make the companies liable for the damage they cause when their products turn into a botnet.
Why can't products have a "declaration of security", like they have for EMI compatibility, safety standards and other such things? Declare that the manufacturer has taken reasonable steps to make the device secure and is liable for damage if that turns out to be untrue.
We're moving in that direction in the UK. There's no legislation around it yet, but the government recently published the snappily named Code of Practice for Consumer IoT Security[1], which, if rumours I've heard are correct, they've basically published saying manufacturers can either voluntarily comply, or they can deal with it becoming legislation in the future.
When I first heard about it I was pretty dubious given the government's track record on regulating technology, but it's actually a really solid document, covering 13 guidelines which are specific enough to be useful, while not going so deep into technical detail that they will go out of date:
1. No default passwords
2. Implement a vulnerability disclosure policy
3. Keep software updated
4. Securely store credentials and security-sensitive data
5. Communicate securely
6. Minimise exposed attack surfaces
7. Ensure software integrity (this is probably my least favourite guideline, as it basically says you should check signatures on all firmware, by extension shutting down people's ability to control their own hardware with custom firmware)
8. Ensure that personal data is protected
9. Make systems resilient to outages
10. Monitor system telemetry data
11. Make it easy for consumers to delete personal data
12. Make installation and maintenance of devices easy
13. Validate input data
Oh yes. I'd push back hard on 7. Otherwise, it's a really solid list of guidelines and I love it.
WRT 7, forcing secure boot is overkill and pretty anti-consumer, IMO. There really needs to be a provision allowing for user-initiated software changes. If you're from the UK, please let them know via e-mail to: securebydesign@culture.gov.uk.
I was worried about 10 (I don't really like the vendor collecting any telemetry on my IoT devices), but the actual document is more reasonable than the headline makes it sound - it's "if you're collecting telemetry - and keep in mind point 8 - then monitor it for security anomalies".
I'm going to be very cynical and say that 7 is the only reason this legislation is being pushed, and that's the reason it's in the middle, surrounded by boringly reasonable stuff.
More cynicism says that it will require IoT devices to be closed source in order to get a signature, and require government access and audit on running devices (to confirm integrity.) That may secretly be the backdoor clause. I'm probably wrong, but the UK government is fully committed to total surveillance, and the opposition either has no position or tacitly supports it.
I recently discovered your Code of Practice for Consumer IoT Security. After reading through the entire PDF, I'd like to commend
you. It's a great document, and a great initiative - nicely straddling the line of being specific enough to make a difference,
while not constraining manufacturers and service providers too much in technology and business model choices. It's great
that such a good document is taking lead on this issue.
That said, I'd like to strongly object to point 7, "Ensure software integrity", in its current form. I strongly believe
this would have a negative impact on both consumers and IoT security.
As a tinkerer (or "maker") and a leader in a community of tinkerers, I value the right and ability to flash alternative
software on devices I own; software both made by myself and sourced from the world of Free/Open Source developers.
It's what enables people like me to derive more value from our purchases, to innovate by experimenting with them,
and most importantly - to help our families, friends and random strangers with less interest in technological minutiae
to derive more value from their own devices, including extending their usable lifetime way past the end of manufacturer's
support.
Secure boot would prevent all of that, by removing the ability of end-users to modify software on devices they own.
This goes against the interests of end-users, IoT security and society at large for many reasons, including the following:
- Software provided on IoT devices is typically closed-source. Homegrown/community software is almost universally
open-source, which means many more skilled professionals took a look at the code to ensure it is secure against
attacks and does not secretly siphon off data, personal or otherwise.
- The ability to flash your own software means the IoT device lifetime is no longer determined by the lifetime
of its manufacturers. When the original vendor decides to EOL the device, the community of users can still
continue to provide timely security updates and feature improvements.
- Extending the lifetime of devices through the ability to install custom software also means the devices
take longer before they end up on a landfill, thus reducing their environmental impact.
I kindly ask you to please reconsider point 7 of your document. While its intentions are noble, its particular form is,
in my opinion, counterproductive to the overall goals of the document. Please help create a future in which companies
minding their users' interests can thrive, in symbiosis with a healthy community of tinkerers.
> Why can't products have a "declaration of security", like they have for EMI compatibility, safety standards and other such things?
Because it's a moving target. A light switch that isn't going to burn my house down when I buy it will still be safe in 10 or 20 years. A "secure" piece of software of even minimal complexity almost certainly has many severe bugs yet to be discovered.
I worry that the effect of legislation like this would mean you could no longer buy a $25 router to hack around with or put OpenWRT on - the legal liability would make such products non-viable, leaving only expensive enterprise grade stuff for purchase. Maybe that's for the greater good in the long run, but it would still be something of a loss.
> No vigilante justice required. Just make the companies liable for the damage they cause when their products turn into a botnet.
The hope would be that the former leads to the latter. "Making companies liable for doing a shoddy job" is rarely a thing that happens on its own, without pressure from below that's usually a reaction to incidents that hurt people.
The problem is, most consumers won't know if their "smart" lights are causing havoc somewhere across the Internet. If you brick the device, the consumer will notice their device stopped working, and will (hopefully) start claiming warranty. Without end-user pushback, secure IoT will remain a pipe dream.
It's a matter of proving a negative vs proving a positive. "This device cannot broadcast with enough power to interfere with radar" is provable. "This device has no security holes" is often not provable even if they try to secure it.
"This device uses [x,y,z] techniques for avoiding [interference/hacking]" is provable. List of required techniques should be known in advance (like ["no default password", "no raw tcp password sending", ...]). I have seen such requirements when delivering devices for some big companies which care about security of devices reselled by them, I think it was even some publicly defined standard, but can't find it now. Such things exist already, but are just not enforced like interference requirements.
Because your wifi router is limited to a minuscule physical range. The rest of the world's wifi routers don't continually threaten to interfere with your local air traffic control radar - if they did, we'd probably have much stronger controls around it than we currently have.
Because if you screw with ATC radar, people with the force of law will kick down your door and take your equipment. The FCC and FAA won't tolerate ATC radar being screwed with, no one cares about IoT hardware until it stops working.
Before you do this, please make a law that forces the manufacturers to replace the devices free of charge when they get bricked due to a security problem.
Having a known security bug unpatched for longer than x days should be grounds for a warranty refund. That should get companies moving when there is a real-world cost for not dealing with security bugs.
Might be a somewhat ironic comparison. Apparently the problem of garbage companies making garbage devices exists in the medical space as well; see e.g. [0].
Also, it's not about making wi-fi light bulbs cost thousands of dollars. But if yours were on the cheap end, they are most likely garbage products with highly intentional planned obsolescence, subsidized by the data-collection app you have to install - which is the whole point of making them in the first place. If such a business model were to become unprofitable, I believe it would be a great win for society (and the environment).
It's all part of a trade-off. Does the world need wifi-enabled lightbulbs that increase the size of criminal botnets? If not, then our options are to either stick to non-wifi-enabled lightbulbs, or ensure that our wifi-enabled lightbulbs cannot become part of botnets.
Manufacturers already have to replace devices when they're bricked by a hardware problem. They have various quality control processes in place to try to prevent this from happening. I don't see why it should be any different for security - or make a huge difference to the cost.
Or perhaps they could be fixed? There are plenty of examples of worms released with the express intent of patching a particular vulnerability (https://en.wikipedia.org/wiki/Anti-worm)
That is awesome. Probably still illegal, but having viruses and worms that patch the vulnerabilities they use to spread is a million times better than having viruses and worms that use those vulnerabilities to do more damage.
The original title would be good enough for this story imho.
Worked in a bug bounty program for a spell, there are some young folks out there with borderline scary levels of talent and tenacity. Making this about the age of the person doesn't really add anything. (This is coming from a relative dinosaur, so maybe I'm just age-sensitive haha)
In high school and middle school you also have a lot more flexibility to devote a substantial amount of time to this type of stuff. You are right, it can result in really technically literate individuals.
I'm going to suggest that maybe this is a good thing. If the worm is really good at its job, it can take out all those zombie IoT devices that are being used for botnets. And maybe it will act as a wake-up call to consumers, regulators and, perhaps, companies.
Not that I condone destroying people's property to accomplish that, but at least there is a potential upside.
I recently read the Shockwave Rider [1] in which the word 'worm' was first coined, thanks to the discussion on HN [2] on Stand on Zanzibar. Can recommend both books for those into SciFi and would like to thank the community.
Great article, but for the Iran smearing. Come on zdnet, you know better. Out of the thousands of IP addresses related to this attack, including a command and control center, and while knowing that an attack from a VPN in Iran in no way proves that the attacker is actually from Iran or even Iranian, you still managed to insert a whole paragraph titled "Attacks carried out from Iranian server", even though the researcher says the IP only "appears" to be from Iran, and he describes one (1!) attack from that IP. This wasn't even the command-and-control server.
And that 14-year old is living in Europe, not Iran.
Please leave the propaganda out of your tech news, zdnet.
Like I said, the 14-year-old lives in and originates from Europe. Cashdollar said so. Yet you fail to mention that and, despite the whole world knowing how explosive the political situation between the US (where YOU originate) and Iran is at the moment, you selectively mention only Iran and no other country.
So yeah, factually correct maybe, but very biased by selective editing. That's textbook propaganda. If you don't see that, you're part of it.
Regardless of the importance of the Iranian IP to the operation, it is of very little significance to the story. Mentioning it only really serves to act as clickbait for people who don’t understand that it’s possible to stand up a server anywhere in a few minutes, and who are predisposed to think that an Iranian IP makes everything extra ominous.
The Iranian IP is important because it means it's unlikely that we'll ever get any data from the server. If it were in the US or EU, it's much more likely we could subpoena it and figure out more about what's going on.
I read the article a few times and I don't think there is any anti-Iran sentiment in it at all.
Interesting that you interpreted it this way. Maybe that speaks more of your own biases than it does of those in the HN community? :)
Anyway, my immediate thought (before the article got to the teen) was that this attack might have been spillover from an attack by the US on Iranian systems. Or at least a hint at how exposed Iran's digital infrastructure might be to future attacks from the US in light of recent events. Or maybe how this will get Iran to tighten up its digital infrastructure before such an attack could happen.
I also immediately didn't think that the origin of Cashdollar's attack had anything to do with the origin of the creator or perpetrator.
I did a few stupid things when I was around this age, but the Internet was nearly nonexistent, so it was mostly within LANs.
I agree that IoT is a real problem, and I fear people will realize it too late, with an accident or something, but this kid should have known better; depending on how it unfolds he could be in trouble.
I’d love to read this article, but it’s so covered with ads and shit I can’t. Reader mode is usually my saviour in this situation, but it’s not working in this site.
"It's using known default credentials for IoT devices to log in and kill the system"
Basically it's like knowing a computer's root password, with remote access from apparently anywhere. It looks pretty simple but effective, and many IoT devices are known for their weak or absent security measures.
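The "known default credentials" logic is literally just a lookup against a shipped list of factory user/password pairs. A hypothetical defensive sketch of that core step - the pairs below are examples that have circulated in public default-credential lists (e.g. the leaked Mirai source), illustrative rather than exhaustive:

```python
# A few factory defaults that have appeared in public credential
# lists (e.g. the leaked Mirai source); illustrative, not exhaustive.
KNOWN_DEFAULTS = {
    ("root", "root"),
    ("root", "admin"),
    ("admin", "admin"),
    ("admin", "1234"),
    ("root", "xc3511"),
}

def uses_default_credentials(username, password):
    """True if this user/password pair is a well-known factory default."""
    return (username, password) in KNOWN_DEFAULTS
```

A real audit (or a real worm) would attempt these logins over Telnet/SSH against live devices; that part is deliberately omitted here - the point is only how trivial the matching itself is.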
I do wonder... manufacturers aren't liable if you allow your device to be bricked by third parties. But if such attacks were extremely commonplace and carried out in such a way that they informed the user of why they are possible... could the manufacturers become liable to address it?
Would be way better if the worm reported back which devices were vulnerable (for naming and shaming) as it found them, instead of ruining people's stuff just to make a point (or worse, for plain lulz).
So what would that look like? You found some vulnerable webcam, great now you can add its IP address to your public shaming list. That'll teach that random farmer somewhere in Iowa. Listing the device name and count wouldn't do anything; the vulnerability is already known otherwise the worm wouldn't exist... And either the manufacturer doesn't care, or there's an update and nobody cares.
If, OTOH, you brick the device in a way that requires flashing via JTAG, and suddenly hundreds of people all over the country return their broken webcams to Walmart, you make a little more impact. The thing is, it would have to keep happening for stores to start noticing a pattern and start caring. If it was a one-time thing, it might be cheaper for them to just throw the returns in the trash and hand out new ones or refunds.
>It's expected that some owners will most likely throw devices away, thinking they've had a hardware failure without knowing that they've been hit by malware.
I always asked myself this question: does this kind of attack try to authenticate to every single IP address (a for loop, excluding ranges from Google, Microsoft, Apple...), or does it penetrate the victim's network first by executing some random file from the internet that the victim downloaded?
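For what it's worth, public analyses of the leaked Mirai source suggest it's closer to the former: the worm draws pseudo-random public IPv4 addresses, skipping a hard-coded exclusion list of private, reserved and certain corporate/government ranges, and then tries default credentials against whatever answers. A minimal sketch of that target-selection step - the exclusion list here is abbreviated and illustrative, not Mirai's actual list:

```python
import ipaddress
import random

# Abbreviated, illustrative exclusion list (private, loopback,
# link-local, multicast and similar ranges a scanner would skip).
SKIP = [ipaddress.ip_network(n) for n in (
    "0.0.0.0/8", "10.0.0.0/8", "127.0.0.0/8", "169.254.0.0/16",
    "172.16.0.0/12", "192.168.0.0/16", "224.0.0.0/3",
)]

def random_public_ipv4(rng=random):
    """Draw random 32-bit addresses until one falls outside SKIP."""
    while True:
        addr = ipaddress.IPv4Address(rng.getrandbits(32))
        if not any(addr in net for net in SKIP):
            return addr
```

Since the excluded ranges are a small fraction of the 32-bit space, rejection sampling like this terminates almost immediately - which is why a worm can churn through candidate targets so fast.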
Which IoT devices does this affect? Is it just devices that run Linux and have an exposed SSH? Wondering if things like the Particle.io / Obniz / ESP8266 are affected.
One of the selling points of things like smart thermostats is that you can remotely control them; you can set them to your preferred temperature as you are traveling home so it will be nice when you arrive.
A quote from not the current worm's author, but the author of the worm that may be the inspiration for the current one:
>The BrickerBot author argued that it would be better if the devices were destroyed, rather than sit around as cannon fodder for DDoS botnets, and haunting the internet for years.
... yea, broadly I'd agree. IoT vendors are causing a tragedy of the commons, inflicting quite a lot of damage without feeling any of the pain because it hits others.
It's the sort of thing that should be addressed by legislation of some kind, but absent that (which includes nearly all international cases)... what else can you do to stop the worst offenders?
In Europe we have laws that protect consumer rights.
The thing missing is somebody to establish that a severe security vulnerability is considered a defect in the product. Once this is done, consumers can demand that the seller fix the problem, replace the product, or give a refund. This should pretty quickly create a financial incentive to sell products that get updates.
This would be a huge improvement for made-and-sold-in-Europe, and would at least make a sizable dent... but the rest of the internet still exists, and doesn't care where the botnet devices are made or installed, only that they exist.
Still, yea, this is exactly the kind of thing that needs to become commonplace. Hopefully it will be, though this is far from the first time that something like this has happened and here we still are.
Powerful countries have the ability to apply their law extraterritorially, especially law concerning companies... e.g. the UK stipulates that bribery is illegal everywhere in the world, so if a UK company employs people in China and those Chinese employees bribe Chinese officials, that's illegal (in the UK), and the UK parent company could be held responsible...
Easy to do the same thing here, as long as the company has (and wants to keep having) presence in the EU that's enough to enforce standards worldwide.
A lot of companies want to sell worldwide. It is a lot easier to have and support one product for the world than multiple products for each country. I have to regularly attend training which amounts to "Australian customers really want this product, but we are still working on getting certification, so be careful not to ship one accidentally." (It is a radio transmitter.)
You have to start somewhere. See also: GDPR - it's already made enough waves that other nations and even some US states are considering similar legislation.
What are your experiences of acquiring information in Poland under GDPR, especially if you were either denied or came across any hurdles in exercising your rights; whether or not the subsequent disclosure was to your satisfaction?
I'm not temporal, but I haven't had many bad experiences that I chased up. I did request some data from Revolut that they didn't provide, but I didn't email their compliance officer to request it. I think they would have provided it if I had.
Still, even just the fact that companies are now thinking about this and providing ways to download/delete your data makes me love the GDPR.
I haven't pursued any GDPR requests yet (though I considered it in some cases). For now, I'm just happily reaping benefits, such as having an option to download my data, or discovering that some off-line desktop games (like Kerbal Space Program, of all things!) did track me - past May 25, 2018, they suddenly started showing popups informing about this, and (this is the best part) I could deny it and keep playing.
On second thought... it might be that the law already covers this.
It would just be a question of somebody testing it. Take a relatively new (*) device with an unpatched security hole, try to get it replaced/refunded, and if necessary proceed to court to get a decision on the matter.
Although I wonder if any vendor actually wants to go to court arguing that a product with severe security issue is actually working as intended.
(*) A safe bet would be a device purchased less than 2 years ago, since that seems to be the period the vendor is at least responsible for defects.
Speaking EU/Germany-centric, the problem with suing a company is that the company can always make the case go away by replacing the faulty product with either a repaired product, a very similar product of equal or greater worth or refunding a customer for the product and any losses incurred. A court will readily accept this even if the plaintiff customer might not want to accept it. And it's better to just silence a single suit this way (or a few) than to get a sentence against you on the books that will encourage everybody else to demand replacements or even sue too.
Exactly. I've returned products due to firmware problems in the past, although they were not security related. In one case I was asked to give a written description of the problem before I got a refund. The other store returned the money without questions. It's not really any different from a product that broke under warranty in my experience.
Speaking as a European (for the time being at least) that is fine by me. If a company makes unsafe rubbish, they can feel free to not try to sell it to me. I'm sure someone else will happily take their place.
What would be really nice is if a worm like this patched the crappy security of those IoT devices, but I guess that's too hard. With the question being either bricking them or allowing them to become part of a botnet, I guess the world might indeed be better off with vulnerable IoT devices getting bricked. But boy, it's really a marginally lesser evil.
Let's hope this forces manufacturers to improve their security. That's the main good that needs to come out of this.
They have generally had mixed success, from the ones I've read about. Pretty often there are side effects that aren't good (e.g. bricking some devices, excess traffic due to being too successful, that kind of thing)
But yea, in this case I think it's clearly too complex. There are thousands of different things hooked up to the internet, and you can't fix all of them at once.
These things would be most positive if they were more common, so a vulnerable device would get bricked while it's still in the store return period, and people would just take it back.
If they wait too long, then the result is more sales, which increases profit for the whole ecosystem that produces these devices. But if it's more common and results in returns and exchanges, the result is less profit, which might finally put some pressure where it needs to be.
Patching is just a subsidy for the manufacturer. This malware just soft-bricks (the device can be restored by reinstalling the firmware), so it inflicts a cost without generating extra garbage. Sounds great to me.
The attack that crippled Akamai DNS a few years ago actually did close up security holes, though only to cement the attacker more firmly in place and keep other attackers out.
>IoT vendors are causing a tragedy of the commons, inflicting quite a lot of damage
It's an interesting problem in that the value of hackable/wormable IoT devices stems from their sheer number. In a way, the culpability of any single vendor for any single sale is low in and of itself; it's only large in the aggregate of all the vendors' products ever sold and still deployed.
This is a reverse of, but comparable to, the low value of personal data being gathered and processed (or stolen) - for every single individual, the value/loss is exceedingly low. The value lies in the aggregate of the data; the whole is much more than the sum of its parts. Thus the prosecution of things like the Equifax hack, or insecure IoT devices, is pretty spotty at best.
Perhaps we need a new, specialized legislation & judiciary for cases where network effects dominate.
There aren't many "network effects" (the business term) in IoT, despite vendors' best efforts. The value of the network to the user does not increase with the number of nodes in the network.
That said, I agree there's a sort-of similar pattern here, which I feel underlies many of the biggest problems of our era - including climate change. The pattern is that there's a lot of entities - individual, small and large companies - engaging in transactions, and each transaction has a small negative - some personal data stolen here, some toaster joining the botnet there, some trivial amount of carbon emitted elsewhere. The negatives however add up, and ultimately manifest as huge and international problems.
We definitely need to develop legislative methods that would combat this pattern at the structural level, regardless of the business domain it shows itself in.
That's an interesting perspective. It brings to light some personal responsibility we might be glossing over. Is it the responsibility of the bottled water seller to not sell plastic bottles because they fill landfills, or the consumer to make sure to dispose of the plastic bottle responsibly?
That doesn't entirely match the current situation, but there probably is some responsibility that lies with consumers for buying crappy devices from companies with no track record of keeping them updated. It's not always simple to reason about, but it is something to consider next time there's a decision to buy some cheaper device from a no-name company or from a well-known company (that hopefully has some sort of support lifetime at least).
Which is part of what causes tragedies of the commons, yes. I totally get your point, and it's valid, but it's being demonstrated right now that the current appropriate strategies have failed rather spectacularly.
When appropriate measures have massively failed, what are the alternatives? Vigilante actions are one option, as is "watch the whole thing burn down", and probably an infinite variety of stuff in the middle and along different axes. What's your preferred option?
edit: also, it's not quite "may [do] something worse". Insecure IoT things are used as botnets. Frequently. It's not a hypothetical threat at all, it's just a question of scale / frequency.
In a society where people refuse to have food inspectors and some are being harmed by food poisoning, I think one could argue that there isn't anything wrong with that level of action.
I think it's more like firearms. These devices can be and are used as weapons to hurt 3rd parties on the Internet. It's more like firearm owners leaving their weapons in front of their barely-sturdy windows. Then, folks needing weapons keep breaking the windows before firing them at others. The homeowners and homebuilders keep letting it happen.
So, vigilantes concerned about damage to innocent people keep breaking into the windows, stealing the guns, unloading them, and tossing them into landfills. I'd be like: "Stop putting your guns in front of the windows. Be a responsible gun owner." Enough broken windows and stolen guns might incentivize them to do that.
Soft-bricking is hardly comparable to burning something down. More like adding an extra lock and throwing away the key. Inconvenient, but not destructive.
Food safety inspectors have an official remit to do this that is enshrined in statutes and local laws. You don't get to just decide for yourself you will shut down some restaurants on a freelance basis.
You'll find it in tons of large-scale (e.g. governmental) decisions at the very least. "is it worth more to prevent X than to deal with the fallout" is a decision that has to be made at some point, and human life / injury are part of that equation.
So, in a way, yes. I do. So do lots of people when they go to urgent care rather than the ER, knowing that the ER could bankrupt them, and they'd rather risk the delay. I don't have numbers off the top of my head, but I don't think it's as uncommon as you seem to think it is.
You're comparing urgent care to ER to somehow argue people view both DDoS and food poisoning in dollars? Just because some policymakers put rough numbers on the value of human life in the aggregate in very specific situations with very particular interpretations, you think people would support your cause of measuring everything in dollars and going around destroying their property because they're "just as dangerous as food poisoning"?
Like, seriously? If you went around and asked people whether their restaurants should be shut down for unsanitary practices, do you think you would get a response on the same planet as what you would get if you asked them about their IoT devices getting shut down? Come on.
> you think people would support your cause of measuring everything in dollars and going around destroying their property because they're "just as dangerous as food poisoning"?
"People", in general, are dumb. Individuals, when explained, might actually support it, but I guess you'd have to start teaching the following in school:
If you want to compare things, you need to express them with the same unit. It so happens that there is a universal unit already in use to compare everything to everything else - money. Which, internationally, means dollars. Moral caveats apply, but this is how you correctly compare arbitrary things when you lack a more suitable common unit.
> Like, seriously? If you went around and asked people whether their restaurants should be shut down for unsanitary practices, do you think you would get a response on the same planet as what you would get if you asked them about their IoT devices getting shut down? Come on.
Sure, why not.
Unsanitary practices: you weigh some people likely ending up in the hospital with food poisoning against shutting down a business employing a dozen or more people for an unspecified amount of time, which may or may not have pretty bad secondary effects (e.g. waiters aren't exactly the kind of people who can afford a sudden job loss).
IoT botnet: you weigh bricking a bunch of (at this point in time) non-essential trinkets, thus inconveniencing innocent consumers and making it worse for garbage companies who shouldn't be in business in the first place, vs. enabling moderate-probability inconveniencing events for thousands to millions of people, such as DDoSing random websites, and occasional low-probability high-impact event, such as Maersk getting pwnd and disrupting worldwide shipping.
Ultimately I'd expect the IoT scenario to be less important than the food poisoning scenario when you tally up both sides of both choices - and accordingly, food poisoning is something governments worldwide are already dealing with. However, and circling back to tty2300's original comment, the pattern of thinking is pretty much the same here in both scenarios.
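To make this kind of weighing concrete, here's a toy expected-cost calculation in the spirit of the comparison above. Every probability and dollar figure below is a made-up placeholder, chosen only to illustrate the pattern of reasoning, not real data.

```python
# Toy expected-cost comparison for the two trade-offs discussed above.
# All probabilities and dollar figures are hypothetical placeholders.

def expected_cost(outcomes):
    """Sum of probability * cost over a list of (probability, cost) pairs."""
    return sum(p * c for p, c in outcomes)

# Scenario A: leave an unsanitary restaurant open.
food_poisoning = expected_cost([
    (0.30, 50_000),     # some patrons hospitalized
    (0.01, 2_000_000),  # rare severe outbreak
])

# Scenario B: leave insecure IoT devices online.
iot_botnet = expected_cost([
    (0.50, 100_000),     # routine DDoS damage to third parties
    (0.001, 50_000_000), # rare high-impact event (e.g. shipping disruption)
])

print(f"food poisoning: ${food_poisoning:,.0f}")
print(f"iot botnet:     ${iot_botnet:,.0f}")
```

The point isn't the specific totals, which are invented; it's the shape of the exercise. Once both scenarios are expressed in the same unit, they can at least be weighed against each other.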
>You're comparing urgent care to ER to somehow argue people view both DDoS and food poisoning in dollars?
No. I'm responding to:
>You measure your health in dollars?
And apparently reading it with a different intent than it was written with.
---
>If you went around and asked people whether their restaurants should be shut down for unsanitary practices, do you think you would get a response on the same planet as what you would get if you asked them about their IoT devices getting shut down?
They should, yes. I don't really think they do though. Botnets cause them harm (they're often used to hide the source of hacks, which often target financial info, which does affect millions of people), but it's often further removed than food poisoning, so it's harder for them to identify the cause. They're also usually more attached to purchases of things that work for longer than a meal lasts.
People don't have visceral reactions to things in line with what they should rationally fear. I don't think anyone would debate that. I'm arguing DDoS (and everything else botnets are used for) are in that "does not get an appropriately strong reaction" category.
Most people have wildly inconsistent treatment of health risks and are largely reactionary. Not sure they are a good basis for determining what is ethical/moral actions.
That's a misleading representation of the defense. "If I didn't do it, they would eventually have become a public nuisance in the form of nodes in a botnet" captures the supposed intent more fairly, and with the intent to target every known vector used by Mirai and Qbot it seems quite realistic that these systems would eventually become botnet nodes.
It's IMO somewhat comparable to culling high-risk farm animals to prevent widespread disease outbreaks.
There is a dead reply to this which I think asks a fair question.
> Where do you have legal authority to do a vigilante culling of your neighbor’s herd, because you decided something about them merited it?
I am not making the argument that this is legally defensible. You can by definition not have legal authority to engage in vigilantism. It's from an ethical and practical perspective that I draw the analogy.
On the matter of legality, the difference you point out applies, but also goes both ways. There is no legal framework around "culling" insecure devices, and very little regulation.
I was wondering where the 'worm' designation for this came from, given that the source article doesn't mention the word at all, and the malware doesn't appear to hop from target to target as the term 'worm' would imply.
Looks like boingboing embellished a little.
I think it's more that the original source is preferred, out of respect to the producer of that content, rather than a "reblog" that doesn't add much substantive new info. BoingBoing is great, and plays an important role, but that role seems mostly to surface stories from elsewhere, highlight the key points and add its own editorial take, rather than generating original content.
This problem should start disappearing as soon as legislation is introduced by governments to make the parties producing software or hardware responsible for the stuff they dump on the open market. Until then, steps have to be taken to stop DDoS attacks as they are getting worse and worse.
Do you know of a country where this is true? I know there are some broadly worded laws in England against the use of "hacking tools" but there are so many legitimate uses of port scanning that it'd be hard to explain why a port scanner is any more of a hacker tool than traceroute is.
I'm not sure if there's even been a case to test it, though, and as the general population becomes more tech savvy it seems unlikely such a conviction would be made for port scanning on its own.
[0] https://nmap.org/book/legal-issues.html
AKA the plausible end of open source. Once writing any program and sharing it can get you sued, people will stop doing that.
Such legislation should not be there to allow (class action) lawsuits but should be upheld by a government body, responding to complaints from the general public.
Problems with open source can also be solved by requiring companies who do not wish to take responsibility to give users to either sign a waiver (explicit, no TOS bullshit) or return the product immediately in exchange for money back. With open source software, there is no money given, so no problem. With closes source software, this highlights the vendor's behaviour regarding security support and might make consumers think twice before going with certain vendors.
Another way to do this would be to require vendors to put a clearly visible, standardised sticker/tag/image on their products detailing the support life cycle (warranty / software updates / security updates), similar to the nutrition information found on many food products. That way, consumers can shop around or hold a company responsible if their smart thermostat suddenly stops working because the company behind it got bought out by Google.
There are tons of variations of bases for legislation, but I don't see why physical and digital goods are that different.
If my CCTV system short circuits and causes a fire, the company behind it can be held responsible for not recalling the devices if the flaw was well known. If my CCTV camera has a known flaw that lets hackers in without authentication to record my alarm code so that they can break in, suddenly we're in the wild west of software support where you're on your own. Why is there such a difference?
In the US and UK (and I expect most countries) you don't need to charge for a product to be sued when it fails.
Why can't products have a "declaration of security", like they have for EMI compatibility, safety standards and other such things? Declare that the manufacturer has taken reasonable steps to make the device secure and is liable for damage if that turns out to be untrue.
When I first heard about it I was pretty dubious given government's track record on regulating technology, but it's actually a really solid document, covering 13 guidelines which are specific enough to be useful, while not going deep into technical detail which will go out of date:
1. No default passwords
2. Implement a vulnerability disclosure policy
3. Keep software updated
4. Securely store credentials and security-sensitive data
5. Communicate securely
6. Minimise exposed attack surfaces
7. Ensure software integrity (this is probably my least favourite guideline, as it basically says you should check signatures on all firmware, by extension shutting down people's ability to control their own hardware with custom firmware)
8. Ensure that personal data is protected
9. Make systems resilient to outages
10. Monitor system telemetry data
11. Make it easy for consumers to delete personal data
12. Make installation and maintenance of devices easy
13. Validate input data
[1] (PDF) https://assets.publishing.service.gov.uk/government/uploads/...
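Guideline 1 is the most mechanical item on the list, and its core idea is easy to sketch. The toy example below is entirely hypothetical (the `Device` class and its methods are invented for illustration): a device ships with no credential at all, and its remote services simply refuse to start until the owner sets a unique password.

```python
# Hypothetical sketch of guideline 1 ("no default passwords"): a device
# that ships credential-less and will not expose network services until
# a per-device password has been set by the owner.

import hashlib
import os

class Device:
    def __init__(self):
        self.password_hash = None  # factory state: no credential at all
        self.salt = None

    def password_is_set(self):
        return self.password_hash is not None

    def set_password(self, password: str):
        if len(password) < 8:
            raise ValueError("password too short")
        self.salt = os.urandom(16)
        self.password_hash = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), self.salt, 100_000)

    def enable_remote_access(self):
        # The key point: remote services cannot start while the device
        # is still in its credential-less factory state.
        if not self.password_is_set():
            raise RuntimeError("set a unique password before enabling remote access")
        return "remote access enabled"

d = Device()
try:
    d.enable_remote_access()
except RuntimeError as e:
    print(e)  # refused in factory state
d.set_password("correct horse battery")
print(d.enable_remote_access())
```

Contrast this with the fleet-wide shared password anti-pattern mentioned elsewhere in the thread: there, the credential exists before the user ever touches the device, so nothing forces it to be changed.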
WRT point 7, forcing secure boot is overkill and pretty anti-consumer, IMO. There really needs to be a provision allowing for user-initiated software changes. If you're from the UK, please let them know via e-mail to: securebydesign@culture.gov.uk.
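For context on what point 7 asks for, here's a deliberately simplified sketch of firmware integrity checking, reduced to a hash comparison against a trusted manifest. Real secure boot verifies an asymmetric signature with a public key anchored in ROM; a user-respecting variant would let the device owner enroll their own signing key instead of being locked to the vendor's.

```python
# Simplified firmware integrity check (guideline 7), reduced to a
# SHA-256 comparison. Production secure boot uses asymmetric signatures
# verified against a key in ROM; this sketch only shows the concept.

import hashlib

def firmware_ok(firmware: bytes, trusted_sha256_hex: str) -> bool:
    return hashlib.sha256(firmware).hexdigest() == trusted_sha256_hex

# Hypothetical firmware blob and its hash, as shipped in a (signed) manifest.
image = b"\x7fELF...totally hypothetical firmware blob"
manifest_hash = hashlib.sha256(image).hexdigest()

print(firmware_ok(image, manifest_hash))              # True
print(firmware_ok(image + b"tamper", manifest_hash))  # False
```

Note that a scheme like this only answers "is this the bytes the manifest describes"; who controls the manifest (vendor only, or the owner too) is exactly the policy question being objected to above.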
I was worried about 10 (I don't really like the vendor collecting any telemetry on my IoT devices), but the actual document is more reasonable than the headline makes it sound - it's "if you're collecting telemetry - and keep in mind point 8 - then monitor it for security anomalies".
More cynicism says that it will require IoT devices to be closed source in order to get a signature, and require government access and audit on running devices (to confirm integrity.) That may secretly be the backdoor clause. I'm probably wrong, but the UK government is fully committed to total surveillance, and the opposition either has no position or tacitly supports it.
If manufacturers would like to add some signature checking chip they can already do that.
I wouldn't be too cynical, unless they make it illegal to modify the firmware.
I sent them the following e-mail:
Hello,
I recently discovered your Code of Practice for Consumer IoT Security. After reading through the entire PDF, I'd like to commend you. It's a great document, and a great initiative - nicely straddling the line of being specific enough to make a difference, while not constraining manufacturers and service providers too much in technology and business model choices. It's great that such a good document is taking the lead on this issue.
That said, I'd like to strongly object to the point 7, "Ensure software integrity", in its current form. I strongly believe this would have a negative impact on both consumers and IoT security.
As a tinkerer (or "maker") and a leader in a community of tinkerers, I value the right and ability to flash alternative software on devices I own; software both made by myself and sourced from the world of Free/Open Source developers. It's what enables people like me to derive more value from our purchases, to innovate by experimenting with them, and most importantly - to help our families, friends and random strangers with less interest in technological minutiae to derive more value from their own devices, including extending their usable lifetime way past the end of manufacturer's support.
Secure boot would prevent all of that, by removing the ability of end-users to modify software on devices they own. This goes against the interests of end-users, IoT security, and society at large for many reasons, including the following:
- Software provided on IoT devices is typically closed-source. Homegrown/community software is almost universally open-source, which means many more skilled professionals took a look at the code to ensure it is secure against attacks and does not secretly siphon off data, personal or otherwise.
- The ability to flash your own software means the IoT device lifetime is no longer determined by the lifetime of its manufacturers. When the original vendor decides to EOL the device, the community of users can still continue to provide timely security updates and feature improvements.
- Extending the lifetime of devices through the ability to install custom software also means the devices take longer before they end up on a landfill, thus reducing their environmental impact.
I kindly ask you to please reconsider point 7 of your document. While its intentions are noble, its particular form is, in my opinion, counterproductive to the overall goals of the document. Please help create a future in which companies minding their users' interests can thrive, in symbiosis with a healthy community of tinkerers.
Regards, Jacek Złydach
Because it's a moving target. A light switch that isn't going to burn my house down when I buy it will still be safe in 10 or 20 years. A "secure" piece of software of even minimal complexity almost certainly has many severe bugs yet to be discovered.
I worry that the effect of legislation like this would mean you could no longer buy a $25 router to hack around with or put OpenWRT on - the legal liability would make such products non-viable, leaving only expensive enterprise grade stuff for purchase. Maybe that's for the greater good in the long run, but it would still be something of a loss.
The hope would be that the former leads to the latter. "Making companies liable for doing a shoddy job" is rarely a thing that happens on its own, without pressure from below that's usually a reaction to incidents that hurt people.
No airplanes required. Just make flying cars.
No, it isn't
>"This device has no security holes" is often not provable even if they try to secure it.
That's not how security compliance is determined
Just as is done with HIPAA compliance, NIST compliance, etc. there would be a framework of security controls that need to be adhered to
Also, it's not about making wi-fi light bulbs cost thousands of dollars, but if yours were on the cheap end, they are most likely garbage products with highly intentional planned obsolescence, subsidized by the data collection app you have to install, which is the whole point of making them in the first place. If such a business model were to become unprofitable, I believe it would be a great win for society (and the environment).
--
[0] - https://www.theguardian.com/science/2018/nov/26/uk-firm-sold...
It's a bit like guerilla pot hole repair crews: https://www.citylab.com/equity/2017/03/portland-anarchists-w...
Worked in a bug bounty program for a spell, there are some young folks out there with borderline scary levels of talent and tenacity. Making this about the age of the person doesn't really add anything. (This is coming from a relative dinosaur, so maybe I'm just age-sensitive haha)
[0] https://twitter.com/shelajev/status/796685986365325312
Not that I condone destroying people's property to accomplish that, but at least there is a potential upside.
I recently read the Shockwave Rider [1] in which the word 'worm' was first coined, thanks to the discussion on HN [2] on Stand on Zanzibar. Can recommend both books for those into SciFi and would like to thank the community.
[1] https://en.wikipedia.org/wiki/The_Shockwave_Rider [2] https://news.ycombinator.com/item?id=19879830
And that 14-year old is living in Europe, not Iran.
Please leave the propaganda out of your tech news, zdnet.
I'm the ZDNet reporter who wrote the story.
What in God's green earth are you talking about?
The article says the hacker's server is rented from an Iranian company. And yes, despite your ignorant claims, the IP address is the C2 server.
What imagined propaganda are you talking about?
So yeah, factually correct maybe, but very biased by selective editing. That's textbook propaganda. If you don't see that you're part of it.
I read the article a few times and I don't think there is any anti-Iran sentiment in it at all.
Anyway, my immediate thought (before the article got to the teen) was that this attack might have been spillover from an attack by the US on Iranian systems. Or at least a hint at how exposed Iran's digital infrastructure might be to future attack from the US in light of recent events. Or maybe how this will get Iran to tighten up its digital infrastructure before such an attack from the US could happen.
I also immediately didn't think that the origin of Cashdollar's attack had anything to do with the origin of the creator or perpetrator.
But maybe like me they turn on a VPN and are now operating out of "China" or "Turkey" or, gasp, "San Francisco".
I think SF would be a good look to potential investors.
Probably just a typo but in case not: interpreted.
I agree that IoT is a real problem, and I fear people will realize too late, with an accident or something, but this kid should have known better; depending on how it unfolds he could be in trouble.
Basically it's like knowing a computer's root password, with remote access apparently from anywhere. It looks pretty simple but effective, and many IoT devices are known for their weak or absent security measures.
If, on the other hand, you brick the device in a way that requires flashing via JTAG and suddenly have hundreds of people all over the country return their broken webcam to Walmart, you make a little more impact. The thing is, it would have to keep happening for stores to start noticing a pattern and start caring. If it was a one-time thing it might be cheaper for them to just throw them in the trash and hand out new ones or refund.
And... more trash. Ugh
Maybe people who set the same password on the whole fleet and don't make the user override it?
Come on, you can't take the responsibility from well-paid incompetent professionals and put it on a teenager from a third-world country.
One of the selling points of things like smart thermostats is that you can remotely control them; you can set them to your preferred temperature as you are traveling home so it will be nice when you arrive.
My IoT devices are on my network but you would need to get yourself inside my network to talk to them. I'm not exposing ports for my lights...
>The BrickerBot author argued that it would be better if the devices were destroyed, rather than sit around as cannon fodder for DDoS botnets, and haunting the internet for years.
... yea, broadly I'd agree. IoT vendors are causing a tragedy of the commons, inflicting quite a lot of damage without feeling any of the pain because it hits others.
It's the sort of thing that should be addressed by legislation of some kind, but absent that (which includes nearly all international cases)... what else can you do to stop the worst offenders?
The thing missing is somebody to define that a severe security vulnerability is considered a defect in the product. Once this is done, consumers can either demand the seller fix the problem, replace the product, or give a refund. This should pretty quickly create a financial incentive to sell products which get updates.
Still, yea, this is exactly the kind of thing that needs to become commonplace. Hopefully it will be, though this is far from the first time that something like this has happened and here we still are.
Easy to do the same thing here, as long as the company has (and wants to keep having) presence in the EU that's enough to enforce standards worldwide.
Still, even just the fact that companies are now thinking about this and providing ways to download/delete your data makes me love the GDPR.
It would be just a question of somebody testing it. Take a relatively new(*) device with an unpatched security hole, try to get it replaced/refunded, and if necessary proceed to court to get a decision on the matter.
Although I wonder if any vendor actually wants to go to court arguing that a product with a severe security issue is actually working as intended.
(*) A safe bet should be a device purchased less than 2 years ago, since that seems to be the time the vendor is at least responsible for defects.
Or an incentive not to sell in Europe.
Of course that doesn't fix the global problem.
Let's hope this forces manufacturers to improve their security. That's the main good that needs to come out of this.
They have generally had mixed success, from the ones I've read about. Pretty often there are side effects that aren't good (e.g. bricking some devices, excess traffic due to being too successful, that kind of thing)
But yea, in this case I think it's clearly too complex. There are thousands of different things hooked up to the internet, and you can't fix all of them at once.
If they wait too long, then the result is more sales, which increases profit for the whole ecosystem that produces these devices. But if it's more common and results in returns and exchanges, the result is less profit, which might finally put some pressure where it needs to be.
You'd need a JTAG cable and a fresh firmware image to fix that. Hardly something within the reach of a typical home user.
It's an interesting problem in that the value of hackable/wormable IoT devices stems from their sheer number. In a way the culpability of any singular vendor for any singular sale is low in and of itself; it's only large in the aggregate of all the vendors' products ever sold & still deployed.
You are priced per body part based on your income. Your health is most certainly defined in dollars.
I daresay that this situation even applies to "most people", or at least a very sizeable proportion of the global workforce.
Ah, the good ol' "guessed the password".
The longer I live, the less impressed I am by "hackers". 99% social engineering...