For people who believe the marketing: why would a legitimate sysadmin tool include a cryptocurrency miner and a feature to turn off the webcam indicator light?
This was used to exploit people, plain and simple. While I don't think blanket arresting everyone who downloaded it is reasonable (someone may have fallen for the marketing and used it for legitimate purposes), shutting down the C&C servers so it can no longer aid exploitation and blackmail is a net good.
From a purely consequentialist perspective, it's not clearly a net good if it has a chilling effect on legitimate research, increases economic costs due to regulatory uncertainty, or has other negative second-order effects concomitant with arbitrary and overly subjective legal systems.
From a deontological perspective, it's not clear that this is good if one of your values is clearly and consistently enforced law.
In addition to the extra cost, if any part of the LED indicator circuitry were to fail you'd also lose your mic. Some may prefer that, but I can see people being unhappy if the failure of an unrelated part took out their mic.
For the people who think this is legitimate software: it is not. It was sold and promoted as malware, used to blackmail girls (barely women) - that's what "cam capture" is for - with a keylogger, general malware features, a backconnect proxy, auto-start, and persistence.
I don't think anyone seriously doubts the intended, arguably unethical, use of IM-RAT. The question is whether authoring and/or selling these tools is, or should be, illegal. I'd argue that criminalising it would set a dangerous precedent, allowing governments to go after any security researcher whenever they find it convenient to do so.
I mean, there's probably more than enough intent here for prosecution. There's a fairly significant difference between selling a RAT that does remote administration, and selling a RAT that includes bullet-point features to evade antivirus detection, persist through removal attempts, and otherwise act explicitly against the wishes of the device owner, while the author advertises exclusively on script-kiddie forums and provides advice and encouragement for malicious infections.
I don't think I've ever seen anyone go after people for the former, even when it's been abused by miscreants.
At least in the US, people already "go after" security researchers all the time (at least the companies not smart enough to realise just how much a well-meaning email can save them).
For what it's worth: I have tracked, managed, and recovered hundreds of devices using tools that were hardened against detection and removal at the express request of the device owners, in enterprise, educational, and private settings. Mobile asset tracking is a thing. So yes, there is a legitimate use for that too.
A key requirement under UK law is unauthorised access.
If someone explicitly asks you to hack their systems (and they have permission themselves), or if you want to do pentesting and hardening of your own systems, you should be fine.
The grey area here is how likely it is that someone would buy this tool for legitimate security analysis.
Most people wouldn't, which puts it on a slippery slope. A good defence lawyer should be able to make a solid case for genuine legitimate use, but of course that still leaves some risk, not to mention a lot of stress and inconvenience before a case even gets to trial.
It is worth noting that there exist legitimate RATs offering the bullet-point features you've highlighted. The marketing is another problem entirely; these legitimate projects are usually open source on GitHub or posted on public blogs.
> Robbins v. Lower Merion School District is a federal class action lawsuit, brought in February 2010 on behalf of students of two high schools in Lower Merion Township, a suburb of Philadelphia. In October 2010, the school district agreed to pay $610,000 to settle the Robbins and parallel Hasan lawsuits against it.
> The suit alleged that, in what was dubbed the "WebcamGate" scandal, the schools secretly spied on the students while they were in the privacy of their homes. School authorities surreptitiously and remotely activated webcams embedded in school-issued laptops the students were using at home. After the suit was brought, the school district, of which the two high schools are part, revealed that it had secretly taken more than 66,000 images. The suit charged that in doing so the district infringed on its students' privacy rights. A federal judge issued a preliminary injunction, ordering the school district to stop its secret webcam monitoring, and ordered the district to pay the plaintiffs' attorney fees.
> The lawsuit was filed after 15-year-old high school sophomore (second year student) Blake Robbins was disciplined at school for his behavior in his home.
Acknowledging that the school was found to have been in the wrong and that the courts came down on the side of privacy, is the company which sold the school district the software they used to violate privacy guilty of anything? Should it be?
... said its software was intended to be used for theft recovery. Easier to recover stolen goods if the laptop can surreptitiously take pictures of its surroundings and send them home, see? Is that software inherently bad, like the software you're talking about is? It could certainly be used for the same thing.
I remember that case, and I am surprised criminal charges were not considered against the school officials.
Back to the matter at hand, I think the installation process can serve as a litmus test. If the software requires effective ownership of the device for initial installation, it would be of limited use as malware. Typical DRM software has characteristics of malware.
> By seizing control of the website, police will have been able to "take a good look at what the site has been up to, including who has bought the illegal items", said Prof Alan Woodward, a cyber-security expert from the University of Surrey.
When did the law pass making owning software illegal, as opposed to using it for nefarious means? (Last time I looked CMA required use.)
I'm not aware of any EU regulations relating to computer "misuse".
Also AFAIK Europol is a co-operation facility, not a front line policing organisation. It is a means by which the member states co-operate and share intelligence, but not (yet) a source of rules and regulations.
Also, the UK's relationship with Europol is a bit freestanding: we opted out of the Justice and Home Affairs provisions, then asked to opt back in to make use of Europol.
No charges seem to have been brought yet – but it's an offence under Section 3A of the Computer Misuse Act 1990 (CMA) to obtain articles for use in an offence under Sections 1, 3, or 3ZA CMA, even if you don't actually use them.
Obviously, the prosecution would need to prove intent but it's possible the mere presence of the software could suffice for mens rea — people don't typically buy this software accidentally.
Presumably most people who bought the software will have used it, making a prosecution under S1 CMA more likely.
It's probable that the CPS will only charge people against whom they are likely to get an S1 conviction, and drop any S3A charges, as those will be difficult to prove.
As in any area of law, (glibly) it only matters if you get caught.
The Security Services of the UK Government have statutory exemptions allowing them to use these tools. These statutory exemptions are contained within the Intelligence Services Act 1994 Section 5 and the Investigatory Powers Act 2016 Section 99.
This was my thought as well. It has to be some sort of UK law? But I can't find it. The laws apply to surreptitiously installing the software on someone else's phone or computer. But then owning (and/or buying) the software shouldn't be illegal and the site shouldn't have been able to come down.
Hacking tools aren't illegal by default, that I know of anyway.
I imagine it depends on how the software is marketed. If you advertise it as a way to spy on others and steal passwords and banking logins, then it's pretty obvious it was built and sold with that intent. I imagine it becomes greyer if there are people who use it nefariously but you market it as a security analyst's tool.
Also it seems like taking down the website stopped the software working. If it was centralised then there is a link between the theft of bank logins and the associated fraud directly to the website. Of course it might just be dialing in and checking the license as opposed to the website facilitating functionality.
Edit: just seen an archived page for the tool; it looks like a legitimate network access and monitoring tool. If that's the case then arresting the dev seems excessive. I did note that the page offered support, so I wonder if there was some entrapment along the lines of "how do I monitor for bank logins?" - perhaps with enough info to make it clear the tool was being used for illegal activity, and that support is what fucked the dev?
A large portion of common law revolves around intent - I think the technical term is "mens rea" (mentioned by another poster).
If a site sold knives as "neighbor killers", with the comment "use this and you can definitely kill your neighbor, $19.95", then all the same considerations would come into play. And knives aren't illegal, at least to cook with.
This is covered by the Computer Misuse Act 1990 — specifically Section 3A which covers obtaining articles for use with related offences covered in Sections 1, 3, and 3ZA of the Act.
It's a crime to own the software intending to use it even if you don't actually use it. Arguably, the purchaser intended to use it at the point they made the purchase; people don't typically purchase software like this accidentally (of course there are obvious exceptions like perhaps security researchers wanting to decompile it to understand how to block it in the future, etc.)
I think it's 3ZA(1)(c) that's changed - by the Serious Crime Act 2015 - in that it allows that an act can simply "create a significant risk of, serious damage of a material kind".
AFAIR that's different to how the Act stood prior to the SCA 2015. Indeed, this section's inclusion of "material kind" strongly suggests that the original intent was for the Act to punish material damage, rather than a trumped-up suggestion by the CPS (on whomever's behalf) that an act might be reckless as to whether it creates an increased risk of serious damage.
This legislation seems to work like "well you went on a road near some property, which is exactly what a criminal who was going to destroy that property would do, so you're clearly guilty". It seems somewhat over-reaching to me.
The whole of 3ZA is new — it didn't exist before the Serious Crime Act 2015.
However, they do have to actually take action and material damage is defined by s3ZA(2) with "damage to human welfare" (s3ZA(2)(a)) constrained by s3ZA(3).
It is unlikely that the threshold for a charge under S3ZA would be met. The more likely charge is S1 (unauthorised access) or S3A(3), which makes it an offence to obtain any article intending to use it to commit, or to assist in the commission of, an offence under Section 1, 3, or 3ZA. You don't even have to actually use the software to be criminalised; merely possessing it is enough, provided the prosecution can prove your intent beyond reasonable doubt.
The Europol press release says the arrested developer and employee were in Australia and Belgium, so it's probably the law of those countries that's most relevant. The website had Australian phone numbers.
No, just a mutual aid request from an agency recognised under a law enforcement treaty. There are many agreements for mutual recognition and execution of legal process across national boundaries (normally added as part of trade treaty negotiations, but sometimes in things like extradition treaties). Lots of FBI raids outside the US are conducted this way - local police do the raid and have the FBI 'observe' them in action. In this case, an EU police force and a 5-Eyes nation like Australia will be zero-friction recipients of assistance.
From what I gleaned from that article, IM-RAT was publicly marketed as a remote management tool.
The article further states: "With the amount of reports of this tool being used for malware and the discussion on illegal forums, it would be very hard for the developer to argue that he did not know how the software was being used."
This seems pretty thin. Would the authors of, say, nmap be liable because people can and do use it for illegal purposes?
Assuming judges signed off on the raids and domain seizure, I sure hope there was evidence of actual criminal activity beyond what is mentioned in the media.
I suspect the marketing materials on other websites give it a different purpose. It's pretty common with this type of software to have a clean website for paypal, but to then be telling people on forums or support chats exactly how to infect unsuspecting people with it.
Lol, yes, I could do any of those things if I chose to. Should ssh be illegal? Or shall we stick with the principle that "committing crimes" is illegal, not "producing tools that can be used to commit a crime"?
For what it's worth: "methods to disable the camera LED" was not a feature, that functionality was provided by an external plugin.
What if pretty much ALL your customers are criminals? Is that near enough the line for you yet? The moment you add that requested feature to disable the recording light on the camera of the targeted laptop - is that over the line yet?
How about if your customers request a function that encrypts the root of C:\ and puts up a message with a linked bitcoin address? Are we still on plausible deniability?
Please stop the "what if's". They are speculation and have no relevance to the available information about this case.
For me personally, the line of what constitutes ethical behaviour was crossed long ago by the authors of this tool. That is, however, completely irrelevant; the question is whether they acted illegally, and I have a hard time seeing that based on the information available in the media.
There's no currently active principle that "committing crimes is illegal, not producing tools that can be used to commit a crime", so we can't stick to it. It has been well established in many countries for many years now (the particular UK act is from 1990, so 29 years ago) that, under certain conditions, producing tools that can be used to commit a crime is a crime in itself.
Of course, that doesn't apply to all circumstances and all tools that might be possibly used to commit crime, and it's a valid discussion topic on where exactly the line should be drawn; but that line between crime and not-a-crime definitely has some "I'm just producing tools" people on the crime side of it, not only morally but also legally, and not in some indeterminate future, but for as long as some of these "tool producers" have been alive.
For the UK, it's the Computer Misuse Act 1990 at http://www.legislation.gov.uk/ukpga/1990/18/section/3A with, among other things, "A person is guilty of an offence if he supplies or offers to supply any article believing that it is likely to be used to commit, or to assist in the commission of, an offence under [section 1, 3 or 3ZA]" and "(4) In this section “article” includes any program or data held in electronic form" - so it is quite explicitly about tool-making, with the quoted criteria about belief/intent separating whether it's a crime or not.
Though it seems I made a mistake about the age of that law: this particular section of the "1990" Act was actually inserted in 2006, by the amendments in the Police and Justice Act 2006 ( http://www.legislation.gov.uk/ukpga/2006/48/section/37 ), so it's old, but not that old.
Handguns aren't entirely illegal in the UK. If I'm not mistaken then, roughly speaking, what the law prohibits is firearms below a certain barrel and overall length. In the UK it is possible, for example, to legally own long-barrelled pistols and old flintlock pistols. Another caveat is that the ban doesn't apply in Northern Ireland.
I don't think this really detracts from your point at all.