On the very same day this information came out, 'Viceroy Research Group' managed to release a 33-page 'analysis' of these results. With illustrations.
>We believe AMD is worth $0.00 and will have no choice but to file for Chapter 11 (Bankruptcy) in order to effectively deal with the repercussions of recent discoveries.
Viceroy Research lists no employees or contact address, but it appears they are not a crack team of hardworking & incisive business analysts, but two Australian teenagers and a former UK child social worker, struck off in 2014 for misconduct.
They have previous form in producing or plugging short-selling stories (quite effectively), and were latterly investigated by South African media for similar shady business.
(Replying to myself because I can't edit my post anymore)
Edit: And it gets better! If you check the HTTP headers when requesting the whitepaper from their servers, they show that the file was placed there (Last-Modified) at 13:22 GMT, just 1 hour before Viceroy Research Group published their analysis - and probably ages before the actual news broke.
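For anyone who wants to repeat that check, here's a rough sketch; the URL is a placeholder, not the actual whitepaper location, and the hypothetical `last_modified` helper just wraps a HEAD request:

```python
from email.utils import parsedate_to_datetime
import urllib.request

def last_modified(url):
    """HEAD the URL and return its Last-Modified header as a datetime (or None)."""
    req = urllib.request.Request(url, method="HEAD")  # placeholder URL goes here
    with urllib.request.urlopen(req) as resp:
        value = resp.headers.get("Last-Modified")
    return parsedate_to_datetime(value) if value else None

# The header is an RFC 7231 date, so the parsing can be checked offline:
print(parsedate_to_datetime("Tue, 13 Mar 2018 13:22:00 GMT").isoformat())
# 2018-03-13T13:22:00+00:00
```

Worth remembering that Last-Modified is set by the server and trivially spoofed, so it's a hint about when the file landed there, not proof.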
If they did this just to short AMD and make money, that's indeed quite shady, and they go through all the trouble of hiding their real intentions because they also know it's super-shady.
That said, unless the whole "research" is fake, I wonder if we could be seeing more such tactics in the future against tech companies, and whether or not that would give them an immense incentive to care about security - or risk getting ruined in the stock market.
Honestly, such a huge incentive may actually be needed to get most companies to care about security. The money equation needs to make sense to them. Right now most think that investing the absolute minimum amount in security for compliance reasons is already too much money wasted. If this were to become common, I think maximizing security would actually start looking quite profitable to them.
I mean, this research is already saying there are some backdoors in AMD's chips. I imagine in the future, companies would be way more careful about allowing backdoors in their products, whether intentionally or by mistake, if they knew they risked getting their stock crushed.
So yeah I just like to play with this idea a little bit. So far this revelation doesn't seem to have had the "desired" effect by the backers of the research, though, but we'll see. I just want to know whether or not the research is real, so I'll wait for AMD's confirmation. I assume AMD wouldn't try to lie to us about it, because there are now probably at least a dozen security teams trying to pick AMD's chips apart, so the flaws would be found soon enough, if real.
Well, this could be interesting. AMD is a US-listed security. If true, these two lads could very well look forward to a visit from the US SEC. Seeing as market manipulation is not a capital crime, I don't see Australia objecting to an extradition, should charges be warranted.
I work in finance, and one of the many hats I wear at the small ATS (Alternative Trading System) I work at is regulatory analyst. Action probably won't start with the SEC, but possibly FINRA or any exchange they're trading through. This definitely reeks of manipulation. If I knew any trades on this had come through my ATS, it would be my legal and ethical duty to report them. I could still be asked to provide all trade activity for AMD.
Trading on research, no. But attempting to artificially manipulate the market while doing so is effectively "pump-and-dump" but short instead of long. A lot comes down to timing and exactly what the communication says.
Not a sure-thing conviction, but certainly a dangerous business plan.
> If you think a company is bad, or fraudulent, you can sell its stock short and try to profit when everyone discovers its problems and the stock drops. If you want to hurry that process along, you can always noisily publish research reports explaining why the company is bad or fraudulent. If your research reports convince other investors of your thesis, then the stock will drop, and you will make money. There are more longs than shorts, and more dicey public companies than noisy short hedge funds, and so people who use this strategy tend not to be especially popular. In particular people often go around accusing them of fraud, or market manipulation. "Wait," people ask, "how is it not manipulation to short a stock and then publicly announce that the stock is bad?" I am always confused by this complaint. Just flip it around: It's not manipulation, surely, to own a stock and then publicly announce that the stock is good.
(Followed by further justification of this position).
>All of the exploits require elevated administrator access, with MasterKey going as far as a BIOS reflash on top of that. CTS-Labs goes on the offensive however, stating that it ‘raises concerning questions regarding security practices, auditing, and quality controls at AMD’, as well as saying that the ‘vulnerabilities amount to complete disregard of fundamental security principles’. This is very strong wording indeed, and one might have expected that they might have waited for an official response.
Extremely fishy. 1-day notice? Such aggressive wording without even the chance for AMD to address the concerns?
Yeah it's suspicious. The website has many fancy infographics, marketable names and fear mongering but you have to dig into the whitepaper to find any details about the actual vulnerabilities. And even then it starts only on page 8 of 20 and you discover that it's vulnerabilities targeting the secure boot infrastructure and you need local admin to exploit them. It's not good but it's not a new Spectre or Meltdown.
If I were the tinfoil hat type I'd guess that Intel is trying to spread FUD, but maybe it's just security researchers trying to generate a bit of buzz for their company at the expense of AMD.
It's possibly even more nefarious than that: 1) buy a series of puts on AMD, 2) release the exploit, 3) profit. If you buy an option with a far time horizon and give the company enough time to mitigate their vulns, then I think this is not an irresponsible thing to do (as it incentivises the company to actually do something), but with 24 hours' notice...
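To make the mechanics concrete, here's a toy sketch of the put side of that trade; every figure below is invented for illustration, none of it comes from this story:

```python
# A put option lets you sell at the strike price, so it gains value
# when the underlying stock falls after (e.g.) a negative disclosure.
strike = 12.00           # hypothetical put strike price, per share
premium = 0.50           # hypothetical cost to buy the put, per share
price_after_news = 9.00  # hypothetical post-disclosure share price

# Value at expiry is max(strike - spot, 0); profit subtracts the premium.
payoff = max(strike - price_after_news, 0.0)
profit = payoff - premium
print(profit)  # 2.50 per share
```

The asymmetry is the point: if the stock doesn't drop, the buyer only loses the premium, which is why a long-dated put plus genuine research can be a legitimate position, while the same trade with a 24-hour disclosure window looks very different.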
Seeing as CTS-Lab's CFO also founded a hedge fund you're probably on the right track.
>Yaron co-founded CTS-Labs in 2017, and previously served as an intelligence analyst in the Israeli Intelligence Corps Unit 8200. He is also the founder and Managing Director of NineWells Capital, a hedge fund that invests in public equities internationally. He holds a B.A. and M.A. from Yale University.
> Could something like this be considered inside information?
No, illegal insider trading refers to trading on inside information when you have a confidentiality agreement or a fiduciary duty. Information asymmetry is insufficient (or else it would be virtually impossible to profitably trade at all).
> Or is it legal to actively manipulate stock prices to ones benefit in this way?
The way you're presenting this is a false dichotomy. It's not "manipulating" stock prices except insofar as people broadcast news all the time which alters stock prices. Strictly speaking, it's not market manipulation if it's true. If it's false, it can be, which is why you really try not to do it unless it's true.
Might the latter also depend on how you present it?
As far as I can see this is only an exploit of secure boot if you already have ring 0 level access. Making a whole webpage with lots of graphics and whatnot, sending press releases all over, and in general presenting it like a security flaw on the level of Meltdown seems... false?
Probably court-level material. In any case it seems to have backfired, as the stock is up.
I think the difference in facts you're talking about is a difference of degree, not category. In other words, it's not plainly false, there certainly is a vulnerability, it's just perhaps exaggerated. I could see a case being brought against the researchers on those grounds, but I'd be really surprised if anything came of it.
It could be interesting. Some of the flaws presented require you to flash your BIOS, if I understand correctly. They are included with what are likely real flaws, but maybe it's enough for a case of misleading the public in part. To me it seems sort of like saying Ford engines have a tendency to blow up, but only after you've overwritten some engine firmware. By itself, not much to talk about, but when attached to something that has indications of being used to make money through stock changes, maybe it's more likely to be looked at unfavorably by the SEC?
What if it's not false but misleading? That sounds closer to what they are doing here. Sure, they included a ridiculous disclosure agreement that you apparently agree to have read if you continue reading their webpage, and it says something along the lines of "this is our opinion".
"Slimy" and "gross" are not nearly sufficient for either insider trading or market manipulation. I don't particularly like the way these researchers are acting either, but that's actually because I don't like vulnerability impact being exaggerated and over-hyped. The other stuff doesn't bother me too much.
If the news pushing a stock price is so misleading that it's categorically different from the truth, then I could see a case for market manipulation being brought against them. But I doubt that will happen, because unfortunately people have broad latitude to portray vulnerabilities however they'd like as long as they're convincingly authentic.
I was just relaying my feelings in that last bit. But the misleading bit in my comment specifically refers to the fact it's an exaggeration, as you put it. So you can flash a BIOS and make it do nefarious things: that is a true statement, but it's hardly a "security flaw" and not even particularly surprising. They are, however, taking a flaw and spinning it up to be a massive security flaw, which it isn't.
I guess that's my question. If you take a true statement and put it out with connotations that it's actually a terrible thing, worse than it is, just being "true" doesn't matter. Stupid analogy: say you go to a bakery and buy bread, and it goes stale within two hours, by the time you get home. Then you go and write a negative Yelp review saying this bakery is terrible at baking and sells subpar bread. You just don't know that the bakery doesn't bake in the preservatives that bread from the supermarket has. And it certainly isn't like the bakery is selling spoiled/poisoned bread or selling rocks painted to look like bread loaves.
To add to the other comments here, a recent high profile case of something similar was Bill Ackman shorting Herbalife. Basically, he shorted the stock and then went to the media with his research showing that he believed Herbalife to be a pyramid scheme. Ultimately, I believe he lost money on the whole fiasco, but it's not an uncommon strategy. The whole thing made the news after a particularly amusing exchange between Bill Ackman and Carl Icahn (who had the opposing view) on CNBC: https://www.youtube.com/watch?v=hCZRk1lL90Q. I worked on Wall Street at the time and I remember the entire trading floor at my bank was almost frozen as they were watching this live.
So it could be a viable business to do research like this, short stock and then release the information. It just has to be a bit more damning than this as the stock price is actually up today!
Seems a bit shady in any case, much like when companies pay researchers to publicly make claims that benefit the company, like the process leading up to the banning of lead in petrol.
I tend to imagine that if Intel were doing this they’d do a better job of it. Even if CTS-Labs are completely legit, the way it’s been done has led to immediate suspicion of the claims and people involved, in a way that feels much more like a small group straining for attention or to make a quick buck and making a bit of a mess of it. If Intel were involved, I’d expect it to be done more professionally and simply better, so that people don’t suspect foul play and go looking for problems. It is possible for a company to deliberately obfuscate the trail by doing it this way, at the probable cost of some effectiveness (though if the claims are overblown it might perhaps be more effective this way), but it seems less likely.
A really fun interpretation of it is AMD doing it themselves, deliberately badly, so that they can come off as the wounded party that actually have really good hardware. Risky, but probably not impossible to carry off.
While fun, it would also mean they would be publicly disclosing vulnerabilities in their own systems and then deliberately withholding the patch, just to put on this spiel in order to appear as the underdog?
Confirmation that the vulnerabilities are legitimate pretty much writes this one off. At the time I wrote it this wasn’t entirely clear, though it seemed probable (though even then they could have been overstated).
People here seem to be mentioning short sellers being connected to this research as if there's some sinister collusion going on.
This is the entire point of short selling, and the SEC encourages this type of activism. It allows people with expert knowledge to profit off a trade if they can reveal damaging and legitimate information about a company.
For example, a short seller last year revealed (through extensive research) that Valeant Pharmaceuticals was stuffing its channels and faking its finances. He placed a huge short sale and went public with the damaging info, tanking the stock from $270 to $12, and made a ton of profit off of it: https://www.nytimes.com/2017/06/08/magazine/the-bounty-hunte....
Without this incentive, why would anyone bother to reveal damaging info? You're placing yourself as a target with no reward. The payment is the natural balance of the market.
So yes, this research firm is connected with a hedge fund, and they have a very vested interest. But that doesn't make their claim untrue.
CTS-Labs is very forthright with its statement, having seemingly pre-briefed some press at the same time it was notifying AMD, and directs questions to its PR firm. The full whitepaper can be seen here, at safefirmware.com, a website registered on 6/9 with no home page and seemingly no link to CTS-Labs. Something doesn't quite add up here.
Anandtech is reporting on the situation more than the flaws. That does require covering what the flaws are though. Not covering it at all isn't exactly performing good journalism either.
Independent researchers don't owe AMD a chance to address anything. They bought the chips on the open market where AMD makes them available, and then used their own time and materials to conduct their own research. Their work product is their own, and AMD has no claim to it.
There are, as I see it, two rational, coherent ways to be outraged about this story:
1. The vulnerabilities are fabricated and the report is fraudulent, in which case, by all means, slag the researchers.
2. The vulnerabilities are real, in which case AMD is an 11 billion dollar company that got outmaneuvered by what appears to be 4 dudes in a basement.
People use AMD chips. It's about more than AMD's stock price.
I do not need to be a security researcher to understand that they, as with everyone else, have an obligation to the body politic to not be a dick (as in all things!). There are actors who may be aware of this attack already--but, as I mentioned elsethread, wider knowledge of attacks like this has a much higher chance of splashing back on end users who literally don't know any better than it does on AMD. I mean, I couldn't give less of a shit about how AMD feels--they'll be fine regardless--but there are people downrange of this, not just some company.
This is shoot-the-hostages stuff, and I believe that you are better than being OK with that.
You can consult the search bar at the bottom of the page to learn that I am 100% OK with immediate, uncoordinated disclosure. It's not what I personally do, but that's easy for me to say because I don't find these kinds of vulnerabilities.
This isn't "shoot the hostages". The researchers didn't manufacture the vulnerabilities; AMD did. If 4 dudes in a basement can find exploitable driver vulnerabilities, so can 10 researchers none of us will ever have heard of, working in a nondescript office somewhere in Bulgaria. The only moral difference is that these 4 dudes told us about what they found --- something else they had no actual obligation to do.
Again: it seems really likely that these vulnerabilities have been hyped way out of proportion to their real impact. I think it's reasonable to be irritated by that (again, though: this isn't a first). But other than that, I don't understand how people arrive at the conclusion that independent security researchers owe strangers the results of their work.
I understand what you are OK with. I am saying that I believe, from a fairly long scope of interaction, you are a better person than that.
They've disseminated widely an attack strategy to people who didn't have it. Nobody except AMD can fix the problem, regardless of the good intentions of other actors--on the other hand, many bad actors can use that information. That's as shoot-the-hostages as it gets.
Security researchers owe "strangers" (which is a really weird term for "society at large" that I don't think you, specifically, would be using with such connotations outside of a security context where you'd already made a decision) the same courtesy they owe everyone else: to not endanger people unnecessarily. I agree with you that this is a relatively minor vulnerability, I'm not hyping it or anything--but it's still a vulnerability, it is still more widely known now, and there is a bigger pool of bad actors than there was last week able to use it against people, irrespective of AMD's stock price.
There's certainly a gray area, if a vendor hasn't acted to fix something you know they know about. I'm not talking about that. But 24 hours and briefing the media before letting AMD know, as it very much seems like they did, is well outside of what I could consider any reasonable gray area.
If you care about end users, and you should because they are your fellow people, you don't publicize how bad actors can hurt them. You just don't. It's just...minimal decency, to care about other people. I can't see it any other way.
I strongly disagree with the reasoning you're using here.
The premise of your argument is that without vendor cooperation, end-users are helpless to mitigate the impact of security flaws.
No, they aren't. Not only are they not helpless, but many of them are in fact ethically obligated to mitigate exposures with or without the assistance of their vendors. Almost every end user has at least one last-resort mitigation for any vulnerability: the power switch.
Most of the time, most users have better non-patching mitigations than that. These vulnerabilities are all post-compromise privilege escalation flaws. Their exploitation is situational and most users can do things to eliminate the situation that enables their exploit.
You might not like the fact that end-users have to make hard, expensive choices about how to mitigate flaws. But if you think about it for just a second, you'll see that the idea that patches were saving them from this choice was fallacious. There is no reason to believe these 4 dudes were the only ones in the world capable of finding these flaws (the reality is that if they're the only ones who know about them, it's because the kinds of flaws they found simply aren't important enough to demand focused attention from others). All restricted disclosure does is prevent end users from making the choice for themselves.
I believe that as a general rule, we're better off when we have the most information available to us about vulnerabilities. Personally, I'd probably stop short of publishing exploit code. But other researchers that most of us respect a great deal in the abstract do not have that particular scruple, and some --- like the original Metasploit project --- made it a point to publish exploit code immediately, patch or no patch, to arm operators with information about their exposure.
This isn't an idle opinion. If there was working Usenet search in 2018, you could find me making approximately the same argument back in the 1990s, when I worked as a researcher at SNI, the world's first commercial vulnerability research lab.
> These vulnerabilities are all post-compromise privilege escalation flaws
I would say they are all invasive evil-maid threat vectors. Each one requires either physical access to the hardware or (as you stated) already established root privileges. We all know that if you have physical access to hardware, it's essentially game over.
However, one of the vulnerabilities supposedly allows subverting UEFI Secure Boot. If that's true and it allows booting arbitrary media, then the others are equally feasible, because an attacker can boot into a root shell of their choosing.
The timing in this disclosure reeks of malice, though. Giving a 24h advance warning basically allows the outfits to claim that they disclosed vulnerabilities to manufacturer before going public. Technically true. Just highly misleading and dishonest.
I personally have no beef with full disclosure, and have advocated it as a viable mechanism since the mid-1990s. I also happen to think that responsible disclosure is a good approach, but it definitely needs the threat of FD as a stick, because otherwise vendors would not have any real incentive to work on addressing security bugs. Name-and-shame does work.
Let's get back to the AMD flaws. Giving a really short window? Basically just enough to have an initial PR response ready? Have the decency to go full disclosure, or give a full month. AMD won't be fixing the bugs before the news breaks in either case. Just don't claim this is anything but a maliciously crafted exercise with ulterior motives.
While I'm fine with criticizing them for partial disclosure, I again have a problem mapping any of this back to ethics, because, again, independent researchers do not have an obligation to vendors or to any amorphous public. As long as they aren't literally exploiting (or arranging to have exploited) vulnerabilities to break into people's computers, or lying about what they found, I don't think ethics have much to say about what they should do.
No obligation to vendors, no obligation to the public, so what are your ethical standards exactly? It sounds like committing crimes is it, but that’s a legal standard and not an ethical one. At what point are you less of a researcher and more of a sociopath with a keyboard? What makes researching software vulnerabilities such a uniquely non-ethical undertaking compared to all other forms of research?
You seem like a living argument for ethical standards being imposed on your industry, by law if needed.
You are not harmed by someone discovering a vulnerability and telling you about it. Obviously that benefits you rather than harming you.
You are harmed by them discovering a vulnerability and telling the world about it.
And if they discover a vulnerability and tell both you and the rest of the world, the harm may easily outweigh the benefit.
Suppose I go wandering around the city where you live, checking for unlocked house doors. I find that you've left your front door unlocked and gone on holiday. I then wander the streets shouting "Thomas's house is unlocked and no one's at home!". I also phone you up to let you know your house is unlocked.
It was your fault, not mine, that the house was unlocked and no one at home to deter burglars. In principle, anyone else could have come along and burgled your house, if they'd found it before I did. None the less, I think that in this scenario I have done you wrong.
The argument against your position that people are trying to get across to you is not that. It is that publication of a vulnerability without giving the vendor a heads-up and time to prepare a solution greatly increases the risk that users will be harmed by attackers exploiting the public knowledge. Often a substantial number of users will not mitigate or resolve the problem without their vendor issuing an official solution.
From this and other similar responses of yours here I think that you do not have a convincing way to resolve the obvious problem with the absolutist 'i can do whatever i want with my research' stance that people here pointed out to you. So you do whataboutism directed at vendors, misrepresent people's arguments or try to pivot the discussion. Perhaps it is time to write less and let the discussion sink in a little. You may find a better way to argue your point, or even find you no longer want to do that.
I don't think anyone's arguing that a researcher has a responsibility to tell anyone. If they find a vulnerability and then decide to completely shelve it, that's fine (if maybe a little pointless?). But if they do decide to do some kind of disclosure, I (and others) would argue that researchers have an ethical responsibility to do so in a way that they believe will do the least harm.
It's certainly reasonable to argue which kind of disclosure is the best way to achieve minimal harm, but my opinion is that it's unethical to disclose without considering what method of disclosure will do the least harm, or, worse, just not caring and going for the "biggest splash", as is what it seems these researchers did.
”The premise of your argument is that without vendor cooperation, end-users are helpless to mitigate the impact of security flaws.”
I know everyone in my family is ignorant of this “disclosed” security flaw and is powerless to mitigate the vulnerabilities disclosed on their own. Even if they did know to “turn off their computer” as someone said, are they supposed to wait until someone calls them to tell them a patch is ready?
Disclosing a vulnerability for profit at the expense of everyone else is a shitty thing to do. Would giving AMD a few days to fix it have hurt as many people as giving them one day?
How many vulnerabilities are you capable of finding in software that everyone in your family uses, and can't find for themselves? I'm sure the number is not zero. Is it unethical for you not to go look for them?
This is probably where we diverge. From where I stand, "end users" are incapable of making a meaningful decision about security at this level. It would be awesome if they weren't, and god knows I have spent a decent amount of time in my life trying to bootstrap people into such a position, but it doesn't...like...work. There is a computing priesthood, as much as we have tried to democratize this stuff, and it's all goddamn nonsense to those outside of it. The set of people I know who do not actively work in tech and can make meaningful decisions about the technology they work with is...my girlfriend, probably. Can't really think of anyone else who isn't reliant on the "do this" advice of others, whether it's correct or not.
Continued education to help end users get to the point where they can make meaningful and educated decisions is great, and should be pursued, and I do it where I can (though most of the time there's just a shrug and a "whatever"). But, barring that, somebody's gotta make choices on their behalf, and there's a Jerry Garcia quote for this one, you know? With great power comes great responsibility, and we gave ourselves that power. And, outside of a security context, this is why I unflinchingly come down on people who work for shit companies that hurt people, why I'd never hire someone who worked for, say, a toolbar vendor in the 90's/00's and why I have fired clients before when I discovered they were doing shitty things with data gleaned from people who trust them: because we have ethical responsibilities to the people downstream of us who are ill-equipped to make meaningful, educated decisions. I can't compel anyone to do as I do--but I can say that one should, because it's decent.
I can't agree that the power switch is a reasonable mitigation in 2018. In the nineties, sure, but too much of life revolves around this garbage we invented and keep mostly creaking along. (Should it? Probably not. Does it? Yeah.) We are on a ratchet, we can't go back, and kicking the decision down to people who literally-literally lack the tools to make a wise decision while painting a target on them for bad actors who can take advantage of them is profoundly disturbing to me.
This particular vulnerability is a post-compromise privilege escalation flaw, yes. But it strikes me that the conversation must be bigger than that, because the same arguments are used for both. This? Low stakes. Heartbleed? Incalculably high stakes. But the same argument could/would (if it were found by shitheads rather than people with a certain amount of decency to them) be used for the latter instead of the former, and that's what makes me itch.
(And to be clear, irrespective of this conversation, you know I am a big fan.)
So the 11 billion dollar vendor who shipped vulnerabilities in the first place gets to treat these problems as an externality, but 4 dudes in a basement who did a basic research project have to be restrained from speaking?
I don't see how you get from that to me thinking the vendor gets to treat these problems as an externality. I am all in favor of slagging vendors who release buggy shit. For hardware (and some software) manufacturers I'd be in favor of significant legal remedies available to people who purchase hardware later found to contain security vulnerabilities.
But I think that should be done after mitigations are in place to protect end users, or if the vendor is not taking good-faith steps to mitigate the problem.
And I am not saying one should be "restrained from speaking" at all. I am saying that choosing to do so makes one an asshole, and that decent people should strive to not be assholes.
I don't understand the chronology you're working from. The timeline here shouldn't start from "when the independent researchers find something in their basement". It should, rather, start from "when the first MRD for the product is sent from the PM to the development team". That's when the clock starts ticking on mitigation. AMD had years.
I don't think you understand the dynamics here. I don't think anyone knowingly shipped vulnerabilities. That's an impossibly low bar: all you have to do to "not know" is to not spend any money on security verification. The complaint here is that AMD was outdone on verification by 4 dudes in a basement.
I think saying that they were outdone by 4 dudes in a basement is being intellectually dishonest. There are a lot of dudes in a lot of basements looking for vulnerabilities all the time. Those four happened to find it, but there were hundreds of others looking. There’s no amount of money that amd can spend that would make them not outgunned eventually by all the hackers and intelligence services and security researches looking to break it.
Why do you assume that there were hundreds of other people looking for these vulnerabilities? Chances are, when we learn the technical details, we're going to find out that they're bog-standard memory corruption flaws in driver code, and that the thing that prevented anyone from discovering them was that nobody looked for them.
Have you ever worked with a codebase before? Even when you scrutinize for bugs, they can still go unspotted. Sometimes hundreds of people can look at the same code and not see anything wrong with it. Software has the benefit of higher levels of abstraction; I haven't designed any hardware, but as far as I'm aware it's not easy to abstract, which makes it much harder to find things. While 4 guys in a basement may have found this vulnerability, it doesn't mean they will find every vulnerability, or that anyone else would have found this one the way they did. Throwing money at verification will not make it foolproof.
> If there was working Usenet search in 2018, you could find me making approximately the same argument back in the 1990s, when I worked as a researcher at SNI, the world's first commercial vulnerability research lab.
This being a controversial topic right at the intersection of technology, the way it has changed and affected society, the public good, and our dependence on technology, I really don't think that "I haven't changed my mind about this in 28 years" supports your argument ...
And honestly I would say that whether I agree or not.
I wasn't working in security, but I definitely moved my opinion on the matter. In the (late) 90s I was mostly for full public disclosure, arguing the same "we're better off when we have the most information available to us". But today I'm leaning way more towards "responsible disclosure is good" (as you can tell, I'm also not 100% black-and-white on the matter, like you said you are).
Maybe it's because I was younger then and had more of a reckless mentality and an innocent belief that people will make the right choices given enough information.
Maybe it's because in the past 28 years technology has changed our society to such an extent that the impact of security vulnerabilities is rather incomparable to the impact they had back then.
Maybe it's because I definitely don't believe that you can defend this opinion with the very same arguments that were used back then, without even addressing the spread of information technology and the drastic way it has altered society in the past 28 years.
Maybe it's because I now realise that I myself am not always better off with more information if I can't act on it, and therefore it's not reasonable to assume that as a general rule. Which is very much something I had yet to learn 28 years ago; I had to swallow some pride. I wish everybody was as clever as I was back then ...
>Almost every end user has at least one last-resort mitigation for any vulnerability: the power switch.
So if a hospital runs life support on a vulnerable chip, they should just hit the power switch until it's fixed?
Or what about a computer controlling a nuclear power plant? An airplane? Spacecraft or Satellite?
Vulnerabilities don't restrict themselves to non-essential equipment; they also hit systems people need to survive, or that would cost millions to replace as a consequence of a hack or shutdown (please try to revive a satellite after a full shutdown; I'll be awaiting your report on how you realign the antenna).
I don't think this is an appropriate way to argue. Sounds like if he disagrees with you, he is somehow below standard.
> It's just...minimal decency, to care about other people.
Alerting folks to the danger that they face is one way to do so. Responsible Disclosure is caring about the vendor, whereas full disclosure gives other people the chance to take action on their own to remove themselves from harm.
As another note, why not argue for Responsible Development? This is where the outcry should be. Flaws in products come about because they are shipped before they are finished.
> Responsible Disclosure is caring about the vendor, whereas full disclosure gives other people the chance to take action on their own to remove themselves from harm.
That is true, but you missed the other side of the argument. Coordinated disclosure is also preferable for a portion of users/customers: a significant part of them lack the understanding or incentive to mitigate on their own. So the question the discoverer of a bug faces is: "how much of a head start should I give the vendor, and the users that depend on the vendor, before I make this public?" This has no universal answer; it may depend on how long the bug has been out there and what kind of users may be harmed. But it is easy to see that a head start of a few weeks is more reasonable than a head start of zero, especially for bugs that have been out there for years.
> why not argue for Responsible Development? This is where the outcry should be. Flaws in products come about because they are shipped before they are finished.
Flaws are not always due to cutting corners. Some bugs in computers are very unintuitive, and it can be years before they manifest. More responsible development seems like a good idea, but again, this ignores the other part of the problem: a major group of users do not understand the intricacies of development, and are not willing to buy a more "responsible" product if it is 5 years behind the newest trend and costs 5x as much.
So let me get this straight: are you arguing that because some portion of bugs each year is due to vendor negligence, it is OK for us security researchers to make the vulnerabilities public and expose users dependent on the vendor any time we want?
How does that matter? The only thing that matters is the harm that certain types of disclosures will do to average users. It doesn't matter whether a bug could have easily been found before release or not; the bug is there, in the wild, in a position to harm users.
By all means, vendors should be taken to task, and beaten up even more when a bug was easily avoidable. But a bug's stupidity is completely unrelated to how a user might be harmed by an "irresponsible" disclosure. Giving the vendor their just deserts is secondary to that.
> The only thing that matters is the harm that certain types of disclosures will do to average users
I disagree. This does not account for the fact that malicious actors are likely to exploit these flaws before the vendor fixes them on the schedule the vendor would prefer to dictate. And not all users are incapable of making alternative judgments about the use of vulnerable technology. Users include my Mom, hackers at small companies, and giant corporations capable of turning off SMB v1 overnight.
The harm to users comes from vulnerable software that the vendors put there in the first place.
> I do not need to be a security researcher to understand that they, as with everyone else, have an obligation to the body politic to not be a dick
So are you talking about AMD being dicks by releasing buggy chips, or the researchers somehow being dicks for finding out?
Related question: if a "food security researcher" discovered a vendor was selling contaminated produce - would it be reasonable for them to give the vendor 90 days notice before telling the public?
While I think it's reasonable and appropriate professional practice for _some people/teams_ to go down the "coordinated disclosure" path (I think the world is a better place for having Tavis Ormandy disclose the way he chooses to), it does without doubt benefit the company whose products are flawed more than the researcher or the public. Anybody who knows they work at a firm that's going to be described as dismissively as AMD did here ("This company was previously unknown to AMD") is quite likely correct to publish-and-be-damned. You can bet there's a non-zero chance that AMD's response to non-public disclosure would include either stonewalling and stringing the problem out as long as possible, or lawyering up and threatening to sue the "previously unknown to AMD" company into oblivion.
If you don't want public disclosure of security flaws about your products, either don't make flawed products or don't ship them to the public. Especially if some of the key selling features of said product include bullet points like "AMD Secure OS".
Close to 100% of software has bugs. Almost all drivers have bugs. Anything that prioritises company profit and release dates over complete correctness will have bugs, even in sectors where bugs == deaths. (And even those sectors are not magically immune.) So yes, I expect they do.
Unless they released it maliciously, I don't hold it against them. And wouldn't call anyone a dick unless they planned to do something evil.
Exceptions: issue was known but got ignored due to release schedule, or security was never mentioned in the project and at no level was there any security consideration. But that's for specific management issues, not engineers or the vendor in general.
What's important aren't really the bugs, bugs can be fixed. What's important is who is allowed to run, inspect, share, and modify the code. If only the copyright holder is allowed to do this, that's proprietary software and that's malicious. If a user's software freedom is respected so users can choose to fix it themselves, wait for another release, hire someone else to fix the code, or live with the bugs that's treating the user properly.
Everyone makes mistakes; it's more about how those mistakes are handled and if a user's control over their computer is respected.
>Independent researchers don't owe AMD a chance to address anything.
The goal of responsible disclosure windows has nothing to do with saving face for the company. The point is that it gives the company time to come out with a fix, so that their customers aren't left with massive holes in the security of their systems.
That presumes the only response people have to vulnerabilities is to patch. But that's never the only response people have: people can trade availability for exposure. But because nobody in the industry wants to face a costly tradeoff like that, we pretend that we're stuck with the lowest common denominator response.
What about responsible disclosure ethics? Yeah, they don't owe AMD anything, but all AMD users lose, since they claimed in their televised security vulnerability disclosure interview that it is virtually impossible for any security product to mitigate those vulnerabilities.
Btw, your HN search result page links to all of the references for what you THINK the term "Responsible disclosure" means. Be it "coordinated disclosure" or whatever else, I don't care. But I don't think it's ethical to disclose security vulnerabilities to the wild without first contacting the vendor, giving them a timeline (which should be MUCH LONGER than 24 hours), and extending them the benefit of the doubt.
Hypothetically speaking, if you are researching vulnerabilities solely for money (because you can sell them to 3rd parties, or your side hedge fund business can profit from disclosures in the stock market), then shame on you, because you are doing society a disservice and gaining from everyone's losses. To me, you are as evil as the hackers who exploit them.
There's a decent counterargument - take it or leave it - that this kind of research is extremely difficult and expensive, and upon success, privately weaponizing it and/or selling it to organized crime or nation state-level actors is extremely attractive, and therefore, the ability to short the stock of a sloppy/insufficiently-careful HW vendor to fully or partially fund the research instead is legitimate in that it ultimately improves overall-societal welfare relative to those other alternatives.
Vendors who wish to discourage that behavior could offer comparably-large bug bounties instead. And, of course, make their products more secure in the first place.
Seriously, if some fly-by-night security outfit has managed to discover this, they're probably not the only ones.
They didn't release full technical details of this exploit after 24 hours; they went public with a summary, one so high-level that many people even doubt the flaws exist. That's not exactly dumping a zero-day on the internet either.
There's a whole lot of shooting-the-messenger going on with this topic. Making plays against the stock is scummy and possibly illegal, but that doesn't make the exploits here any less real (assuming they are). These are actually quite serious breaks: potentially, VMs can jump the sandbox straight into SMM mode and the PSP, so it is much more severe than just "root password lets you do root things".
There is a long and storied history of showcasing the disadvantages of your competitor's products. Edison went on a campaign against Westinghouse's AC electricity, culminating in him electrocuting an elephant to demonstrate how dangerous it was.
Right now we need more spotlights on computer security than ever, and as long as it gets bugs patched (hardware, software, or firmware) I don't really care who's doing it or what their short-run motivations are. If AMD won't secure their code appropriately and Intel wants to call them out, fine. If Intel is leaking timings through sidechannels and AMD wants to call them out on it, fine.
And if we want to throw stones here, it was AMD who blew the embargo on Meltdown a week early because they wanted to force a response from Intel at CES... different in degree, not really in kind.
So you believe in the absolute freedom of security vulnerability disclosure, and that security researchers should just disclose at will?
Do you know that the general public are usually the ultimate victims, the ones impacted by those vulnerabilities the most?
Especially since Intel and AMD are corporations worth tens of billions, near-monopolies in their fields. If their CPUs have unpatched zero-days, with sample code and exploitation techniques out in the wild, what else are you going to use on your desktop computers?
We've seen something similar happen to Microsoft after the Shadow Brokers disclosure. It's going to be worse for hardware products, as it's virtually impossible to retroactively fix silicon.
It turns out it's exactly the "release a general idea to the public to light a fire under the vendor's ass, only release exact technical details to the people who need to know" that you might expect. They didn't dump a zero-day into public.
"Responsible" to whom? Terms like these indicate what side one takes, such as how one expands the term "DRM": digital rights management means taking the 1%/elite side favored by the publisher, the few in power. 'Digital restrictions management' highlights what's happening from the user's side, the 99%, the side of the many. Similarly with the harm to the users and the desire for freedom in the term "jailbreaking".
So, since we recognize the reporters owe AMD nothing, to whom are they "responsible"? Or what are they responsible for?
This phrase strikes me as useless except to try to foist a responsibility on people that they don't actually have and getting the relatively powerless to serve the interests of power -- users who can't inspect, edit, or share edited CPU microcode are somehow not acting responsibly if they don't give proprietors sufficient notice.
Where is the "responsible disclosure" for Intel when they refuse to let users fully control the signing keys used in the software that sees every network packet before the rest of the computer (for inbound network traffic) and before a packet leaves the computer (for outbound traffic)? The one-sidedness of it all sticks out like a sore thumb.
3. The vulnerabilities are minor, barely worse than normal expected behaviour; just enough to call them vulnerabilities. All these "exploits" consist of using ultra-privileged access (signed device drivers, or flashing the BIOS) for bad purposes.
In the white paper, many attacks are hypothetical and many phrases are vague and slippery, suggesting the "researchers" barely achieved execution of something, not real payloads.
I hope AMD invests the little money needed to fund this sort of PR campaign, er, research initiative, against Intel. The net result would be a greater awareness of the perils of "sponsored" science and of the poor state of PC security.
You're recapitulating an argument that Arrigo Triulzi posted on Twitter based on his reading of the CPS-Labs white paper. The white paper doesn't include technical information about the flaws.
Dan Guido and Trail of Bits got to read the actual report, and vouched for them as real vulnerabilities. The fact that there are vulnerabilities in signed drivers is a bad thing: it means that AMD shipped cryptographically signed versions of vulnerabilities. Arrigo's twitter thread implied that the use of signed code somehow mitigated the vulnerabilities, but the opposite thing is true.
Pwn2own, iOS jailbreaking, and Playstation hacking have shown time and time again that chaining up seemingly innocuous exploits to get to the stage where you can run a "minor exploit which requires ultra-privileged access" is definitely within the reach of bored/smart teenagers with no more motivation than a new laptop or gaining the ability to pirate or cheat at games...
Suggesting this is "just hypothetical" because "nobody is going to get physical access or code execution in a signed driver" is pretty shortsighted in my opinion...
When the enemy manages to install a signed driver or flash the BIOS, the difference between being 100% owned by design and being 105% owned because of this sort of vulnerability is the last thing to worry about.
I wonder if the unusually short disclosure process was related to their disclosure of related financial interests.
If it was, it’d seem that this research was in support of a financial play similar to how Muddy Waters shorted St. Jude Medical on the basis of insecure medical devices. That would appear to be a legitimate strategy, but if the market didn’t punish Intel for their processor vulnerabilities it seems likely they’d react similarly here and the research would fail to move the stock price in any significant way.
2. Shouldn’t be news given that a couple of dozen Saudis outmaneuvered a superpower, taking out two skyscrapers and thousands of people. That’s also no excuse for the behavior in question.
I’ll add 3. It’s not all about the researchers and AMD, but the people who use AMD chips and deserve a modicum of protection and consideration. Unless there were exploits in the wild, the security of users seems not to have entered into this.
There are plenty of ways to be outraged by the actions of these "independent researchers." How about: 1. Irresponsible disclosure affecting end users. 2. Shady trading practices of their hedge fund CFO. Just to name two.
Your obvious anti-AMD bias is showing.
I'm not saying it was them, but I wouldn't be surprised if Intel were trying to recover from its reputational damage by hiring people to heavily research breaks in AMD chips, to even the reputational playing field. They're the ones who stand to gain the most from this legal-but-shady tactic, and they have reportedly been scared of losing their long-held market dominance in desktops and servers. IIRC, AMD wasn't vulnerable to Meltdown, which I speculate changed the market calculus in ways detrimental to Intel that both companies would be well aware of.
Interestingly, you'll note that the researchers claim public interest as their reason for non-standard practices, but later it is revealed that you need admin privileges to exploit the flaws. The rhetoric the researchers use is inflammatory and staged in a media-savvy way, like a PR campaign.
This is a totally evidence free assertion and I'm not an infosec person (and am therefore happy to be set straight by experts) but I'll be happy to crack open the popcorn if something interesting is revealed a few years down the line.
No, I'm just noticing again that people who don't do a lot of vulnerability research have a lot of interesting opinions about the professional norms of people who do that work. But you never know --- maybe they do a lot of research, in which case, yes, their opinion on security research norms is a lot more interesting to me.
I am not a security researcher, and I do not speak for the person you are replying to, but I do believe that Intel's documented history of unethical, anticompetitive practices against AMD, for example, deliberate compiler handicapping for non Intel CPUs, is enough evidence to establish at least some suspicion regarding these results, especially considering the short warning given to AMD before public disclosure.
I also wonder, what is the purpose of such white hat operations if vulnerabilities are disclosed publicly without anywhere near adequate time for a fix? Isn't SOP to give more time before going public?
> 1-day notice? Such aggressive wording without even the chance for AMD to address the concerns?
Are the reporting parties under any obligation to give AMD notice?
Behaving according to AMD's wishes is not an obligation. Businesses will be the first to tell you that agreements and laws form obligations, not what someone perceives as a nice thing to do.
If not, then you're reacting to a distraction, a detail that doesn't matter: how the corporate-friendly tech press is trying to shift blame away from the party that either sold CPUs with bugs in them (mistakes happen, and this is unfortunate) or distributed nonfree (proprietary, user-subjugating) software which also happens to contain insecurities (a malicious and unjust way to distribute software).
People here seem to be mentioning short sellers being connected to this research as if there's some sinister collusion going on.
This is the entire point of short selling, and the SEC encourages this type of activism. It allows people with expert knowledge to profit off a trade if they can reveal damaging and legitimate information about a company.
For example, a short seller last year revealed (through extensive research) that Valeant Pharmaceuticals was stuffing its channels and faking its finances. He placed a huge short position and went public with the damaging info, tanking the stock from $270 to $12, and made a ton of profit off of it: https://www.nytimes.com/2017/06/08/magazine/the-bounty-hunte...
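To make the mechanics concrete, here's a toy sketch of short-sale P&L. Only the $270/$12 prices come from the Valeant example above; the position size is entirely made up:

```python
# Toy short-sale P&L. Only the entry/exit prices come from the
# Valeant example; the position size is hypothetical.
entry_price = 270.0      # price at which borrowed shares are sold
exit_price = 12.0        # price at which they are bought back
shares_shorted = 10_000  # made-up position size

# A short seller borrows shares, sells high, and buys back low;
# the difference (ignoring borrow fees) is the profit.
profit = (entry_price - exit_price) * shares_shorted
print(profit)  # 2580000.0
```

The same arithmetic runs in reverse if the stock rises instead, which is why a short based on weak research can lose large amounts.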
Without this incentive, why would anyone bother to reveal damaging info? You're placing yourself as a target with no reward. The payment is the natural balance of the market.
So yes, this research firm is connected with a hedge fund, and they have a very vested interest. But that doesn't make their claim untrue.
The point is to counterbalance the other side: companies have an incentive to overstate their upside and understate their risk.
Short sellers want the opposite. So they both present their best cases and let the public decide, much like how lawyers will defend their own clients to the last breath regardless of the amount of evidence against them
Pity that vast incentive didn't seem to work out when they promoted all these chips as having "Firmware Trusted Platform Module", "Secure Encrypted Virtualization", "AMD Secure Processor", and "AMD Secure OS" as features.
AMD's incentive, like any corporation's, is to maximise shareholder value. Same as any tiny little security research firm's. If a research firm can maximise its profit by discovering vulnerabilities and shorting stock before disclosing them, is that any ethically worse than a chip company rushing out flawed hardware with big flashy marketing bullet points claiming how secure it is?
(I'm not saying short-selling chip vendor stocks on the back of vulnerabilities is a way I'd choose to make a living, but surveillance capitalism doesn't seem an "ethically better" industry to work in either...)
As to ethics, that's mostly irrelevant to this discussion. Both sides could behave ethically; I am simply pointing out which side has the larger incentive to exaggerate. After all, the stock could drop and a short seller could still lose money. They need the stock to drop a lot, even over a minor issue.
More and more lately I'm leaning towards the "responsible disclosure is a bunch of crap" camp. You have to be "in" to get the news. Even if you're "in", security people love to play info-war power games and withhold things because it tickles their jimmies, etc. And don't forget, you're deliberately keeping a vulnerability secret from consumers during a long period where you have no idea who else knows about it. If I'm a "user" or 3rd party and there's a critical vuln in some system I depend on, I want to know that I shouldn't use it, or that I should take extra caution, or whatever, rather than being kept clueless, all in the name of the vendor's image.
This is how the whole industry ran in the mid-1990s. There were secret vendor lists that the cool kids got to be on. If you didn't have the right friends, you were shut out. Vendors took their sweet time getting patches out, because their preferred customers were all read in and had workarounds in place. It was a shitty way to organize an industry, and it fell apart with Bugtraq and full-disclosure security.
It's sad to see people arguing for a return to those norms, especially since the rejection of them correlates with a renaissance in our understanding how to secure software.
It looks like the short notice in this case is not intended to force a timely fix, but to prevent one. They are hoping to cause as much damage to the company as possible, both directly and indirectly through its customers, so they can profiteer from it.
I'd say that the intent makes this qualitatively different to what I'd consider legitimate disclosure.
> It's sad to see people arguing for a return to those norms
Where do you see anyone arguing for that? Or is it just a strawman? What I see is not people arguing against disclosure but people arguing for disclosure with an embargo longer than a day. You're going to have a hard time proving that one day is a norm, or that it correlates with a renaissance in securing software. Your response looks much more like circling the wagons when a member of your tribe is criticized.
If some security researchers are currently choosing immediate highly publicised disclosure and short selling because it's the most profitable path for them - perhaps companies should reconsider their default/expected response to vendor-privileged-disclosure?
It's not like AMD set their chip prices based on "ethics" or "duty to the public". As "the public" I'd prefer a Ryzen 1900X to sell for $150 rather than $500 - It's just a bunch of sand after all (plus some intellectual effort). I don't think AMD get to choose their pricing model but then complain about how security companies price/sell their intellectual work...
> Having a financial incentive to mess up AMD might explain why they only gave 24 hours' warning, though.
A good way for companies to prevent this is to have a generous bug bounty program. Money is still transferred from the shareholders to the researchers, but then the company can impose conditions like delaying public disclosure for a reasonable time to prepare a fix.
If it's actually someone attempting to make money on a short or to benefit from a working relationship with a competitor, then a bug bounty program does nothing. No one can run a bounty program that pays out anywhere near as much as the information is actually worth to an adversary. Bug bounties work to engender a bit of good will among researchers and to provide some incentive to an otherwise neutral party to play ball. They don't mean shit to a hedge fund or a competitor in a multi-billion dollar industry.
> Not unless the bounties are large enough to attract the attention of a hedge fund.
Which they should be, if the alternative is a much larger loss to the company's share value. The shareholders come out ahead paying five million on a bug bounty if the alternative is losing a billion dollars in market cap.
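The claim above is really an expected-value argument. A toy sketch: the $5M bounty and $1B market-cap figures come from the comment, but the probability is an invented assumption:

```python
# Toy expected-value comparison for a bug bounty. The $5M bounty and
# $1B market-cap loss come from the comment above; the probability of
# a hostile public dump is an invented assumption.
bounty_cost = 5_000_000            # one-time payout to researchers
market_cap_loss = 1_000_000_000    # hit if the flaw is dumped publicly
p_hostile_dump = 0.10              # assumed chance absent a bounty

expected_loss_without_bounty = p_hostile_dump * market_cap_loss
print(expected_loss_without_bounty > bounty_cost)  # True: bounty is cheaper
```

Even at a 1% assumed probability, the expected loss ($10M) still exceeds the bounty, which is the commenter's point.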
I'm not a finance expert, but my very lay person understanding of how financial markets work tells me that those would have to be some rather huge bounties. See e.g., the effect on Intel from earlier this year:
That's fine, but it doesn't change the fact that the possibility (likelihood?) of financial gain affects the authors' credibility. Especially since it is already strained by other issues with this disclosure.
It seems to me that disclosing vulnerabilities is in a different category from disclosing fraud. In the latter case, the only entities that suffer materially are the fraudulent organization and its investors; in the former, you have the additional potential to expose all users of the vulnerable software to risk.
At what point does it go from being legal (utilizing information that anyone could have discovered with enough time and effort, whether through short sale or investment) to illegal (stock manipulation through rumor or innuendo)? This qualifies in my eyes, but it's probably hard to prove when one is attached to the other. I agree, it does feel slimy.
A new twist on an old game. I hear people ask why short-selling exists; it's a good check against corruption, but prone to its own abuses. Citron Research (a short-sell shop) is a good example of this: they savaged companies like NQ Mobile, Lumber Liquidators, etc., and made a bundle doing it.
The security angle is a fascinating and concerning new development, however. That said, it may encourage more secure practices (as opposed to theater) throughout the hardware/software lifecycle, in response to serious fundamental design problems.
It will also serve to increase the premium on 0days...
But did you create an entire website about the vulnerability, including graphics and headline-friendly names, as well as sending out briefings to major media outlets ahead of the disclosure? Because that's what this group did
> just based on false accusations that they put out in a "report".
If it's false information, isn't that classic stock manipulation? I thought for it to be legal to make money on the stock it had to be both accurate and publicly available (if potentially hard to put together)?
Citron Research? Total hack, and the premise that they provide value to the market is a stretch at best. Sometimes right, and lots of times incredibly wrong, but they make money on investors panicking immediately.
I agree on the premise of moving the market, but they don't necessarily need to be short-only; they could have hedged both ways and still made money.
They could have exercised puts if it went down (which it did in the morning), or bought stock/calls, both before the site release and on the dip, because they knew the issue wouldn't be a real concern or would be dispelled by AMD.
Unless this is truly a flaw, in which case they can still buy more puts and just wait for AMD's official response.
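The "hedged both ways" idea above amounts to holding both a put and a call at the same strike (a straddle). A toy payoff sketch; all strikes and premiums here are invented for illustration, only the ~$11 price level echoes the thread:

```python
# Toy straddle payoff at expiry: one put plus one call at the same
# strike. All numbers are invented for illustration.
strike = 11.50
put_premium = 0.40
call_premium = 0.40

def straddle_pnl(spot: float) -> float:
    """P&L per share of one put + one call, net of premiums paid."""
    put_payoff = max(strike - spot, 0.0)    # put pays if spot < strike
    call_payoff = max(spot - strike, 0.0)   # call pays if spot > strike
    return put_payoff + call_payoff - (put_premium + call_premium)

print(straddle_pnl(10.00))  # positive: profits if the stock tanks
print(straddle_pnl(13.00))  # positive: profits if claims are dispelled
print(straddle_pnl(11.50))  # negative: no move, both premiums lost
```

The holder profits on a big move in either direction, which is why "they could have hedged both ways and still made money" is plausible.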
AMD's stock was down multiple times today ($11.38 at 10 AM and again at noon on March 13, 2018, on NASDAQ). Shorting the stock would be an obvious play. I have heard of people thinking about trading on security flaws in products, but never seen it done in real life.
I've done it once or twice when I reported a vulnerability directly to a company and I knew they'd have to report it to downstream customers pretty quickly. I've also been in discussions for larger vulnerabilities with security-focused hedge funds such as Muddy Waters. Generally I'm weakly skeptical about profiting from it consistently. In particular, funds like Muddy Waters have a pretty high bar for the sort of vulnerability they're willing to work with. You need not only a severe vulnerability, but the right kind of vulnerability, so you know that it can't be swept under the rug.
That said, it's pretty striking to me how aggressive this disclosure is. It may be an attempt to narrow the window and increase the profitability of a short sell.
It's not uncommon for short sellers to take a position first before releasing a report like this to drive the stock lower. Of course, there are legitimate groups that have unearthed real issues and corporate misconduct in the past, but there are also questionable groups that release reports with little to no substance. This case certainly does look dubious, but I'd like to see an assessment by a reputable security expert.
That’s... sort of ok? It’s not perfect, but it opens up another avenue to finance security audits besides selling exploits to intelligence services, attacking end-users (both worse), and collecting rewards from the companies (better).
I always find the importance of these disclaimers blown way out of proportion to their probable economic impact. AMD shares are -up- 2% right now, for a presumably negative piece of news. The stock market is a big and sometimes inscrutable place. But ethics likes to treat things as morally black and white.
No they aren't. Aside from the inherent and obvious lack of nuance in that terminology, black hats do not report their vulnerabilities. They weaponize them and use them, or they sell them to criminal organizations.
No, it's actually not. It's distinguished precisely by using a vulnerability with the intention to compromise others. You can't just redefine "black hat" to be whatever normative disagreement you have with how people choose to disclose vulnerabilities. That's entirely subjective.
Excellent, great citation! Now, precisely what did the security researchers hack for their own gain, and precisely which computer's security was violated?
If we can call them "hackers" just because they ostensibly compromised their own hardware or software as a proof of concept for the vulnerability research, does that mean that all of Google's Project Zero consists of hackers and black hats because they get paid (personal gain) by Google to find security vulnerabilities?
Project Zero practices responsible disclosure. They do not make money from the exploitation of the companies whose software/hardware they find flaws in. The difference is very stark and you are being deliberately obtuse.
> They do not make money from the exploitation of the companies whose software/hardware they find flaws in.
Right, and neither did these researchers.
In point of fact, no, the difference really isn't all that stark. It's a difference of degree, not category. You apparently have a problem with disclosing vulnerabilities without providing advanced notice to the vendor, and you consider it especially distasteful to do so if you're financially benefitting from that. But all of that still comprises vulnerability disclosure, which is categorically different from actively using a vulnerability to compromise users as part of a criminal enterprise.
We can go back and forth like this all day, because every time someone bends the definition of black hat to fit something they disagree with, I can form a counterpoint which is technically true but which no one is willing to call black hat behavior, like Google Project Zero. On the other hand, if we use the definition of black hats as criminals engaging in online fraud, augmented by security vulnerabilities, then of course Google Project Zero doesn't qualify. You're going to have a very difficult time broadening the scope of this terminology to suit your definition without accidentally including groups you don't want to be in the same bucket.
And that's precisely my point. If you broaden terms too much, like "black hat" to "stuff with computers in bad faith", we can just weasel in whatever satisfies the definition or agrees with our personal viewpoint. Black hat criminals do not engage in debatable behavior, because it's strictly illegal and directly profits at the expense of other people. At best, all you can do is formulate an abstract argument about people being harmed by rapid disclosure, but that actually comes down to a debate of disclosure guidelines, not a debate of activist investing.
There is a reasonably accepted definition for what a "black hat" is. I don't particularly agree with conceptually bucketing people into black hats or white hats, but the paradigm has an existing meaning.
In any case, if we go by what you're saying, then anyone can define "black hat" to mean whatever they want, which means it's a meaningless and unproductive concept to throw around in conversation.
Your assertion is in a catch-22 here. Words have meaning without requiring an independent body to rigorously define them. The established definition of a black hat is someone who compromises other people using security failures for their own gain. If instead we choose to say that the term has no established definition, then the entire point is moot, because calling someone a "black hat" no longer means anything.
> There is a "reasonably accepted" definition of black hat, by your reasoning, and it is: someone who uses computers in bad faith.
Speaking as someone who 1) works in the security industry, 2) has managed corporate disclosure programs as an internal security engineer, 3) has run a security consulting firm working with many companies, and 4) has reported security vulnerabilities in disclosure programs; no, that's not the reasonably accepted definition. I can't think of any colleague I've ever worked with off the top of my head, nor any widely read security-focused periodical (like Krebs), who would use the term "black hat" for such a generalized disagreement of ethics.
I think the "security industry" has a delusional self-image, and I regard most of them as grey hats at best. An insider's opinion on what constitutes black hat is not particularly impressive to me. And this is not a generalized disagreement of ethics. Bad faith has a specific meaning and you are unreasonably stretching it.
> I think the "security industry" has a delusional image of themselves and regard most of them as grey hats at best.
This criticism of the industry might hold more weight if you actually evidenced a willingness to use terminology according to its accepted usage, not as a tool to advance your ethical opinions.
> And this is not a generalized disagreement of ethics.
It actually is, because I strictly disagree that either of 1) trading on bad news, like security vulnerabilities, or 2) disclosing vulnerabilities without notifying the vendor are unethical. You're free to disagree! Your opinion is just as valid as mine; the thing is, we don't define words based on opinions, because then we'd never get anywhere, and we could label people we don't like whatever term we know other people don't like, even if we don't share the same definition of the term. By calling people who do either of #1 or #2 black hats, you're exercising rhetoric that puts them in with actual criminals, doing actual illegal things just because they are doing something you disagree with.
> Bad faith is has a specific meaning and you are unreasonably stretching it.
Okay. I guess I'm free to also call scientists working on whatever thing I disagree with pseudoscientists then, just because I find their work ethically unsettling. Better yet, I could call them criminals.
Words aren't defined by any authority. Their historical and present common uses however are documented by dictionaries et al. The most authoritative source on the term "black hat" is probably esr's jargon file: http://www.catb.org/jargon/html/B/black-hat.html
To save the click: "1. [common among security specialists] A cracker, someone bent on breaking into the system you are protecting."
Your (and hdyr's) looser version is not in common usage and in that sense is wrong.
black hats use them for bad, white hats use them for good.
ideological discussions about disclosure policy aside,
if they are doing this to manipulate stock prices and in doing so create a situation where more actual exploits occur, I'd say that is 'black hat' behavior.. the 'weaponization' is in the 'social engineering' of the market reaction, rather than a direct exploit in this case..
The problem with your first line is that it leaves the definition of black hat open to interpretation, when that is not how the word is actually used in the security industry or in popular reporting. Black hat activity specifically refers to criminal activity, which we can demonstrably perceive and attribute. By your reasoning, I am free to call security researchers black hats if they don't give vendors advance notice. You might disagree with that, but you can't say I'm wrong without making a normative argument about whether or not something is ultimately unethical. There is no categorical difference between me choosing to call people black hats if I disagree with their behavior and you calling these researchers black hats because they're doubling as activist investors.
On the other hand, this entire sideshow is bypassed if we use the well-established definition for "black hat", which refers exclusively to illegal behavior involving security vulnerabilities and online fraud. More to the point, reporting facts is not "market manipulation" (which is also a well established term) even if you want it to be, and "social engineering" is not the same as publicizing information with the intent to move the markets. Using these words in the way you are is the same as flippantly redefining them as you go along, with the result that the conclusion is quite brittle. There could be a strong argument that the behavior is unethical, but using these terms as you are doesn't help that point along, it hampers it.
> Black hat activity specifically refers to criminal activity, which we can demonstrably perceive and attribute
stock manipulation is clearly criminal, if you want to take the 'letter of the law' approach..
beyond this, this gets into the same debate as letter of the law vs spirit of the law, which has both nothing and everything to do with this topic.. black hat is not 'defined exclusively' anywhere, and of course one leaning to a 'letter of the law' argument would then also look for 'exclusive definitions'
as to your point:
> free to call security researchers black hats if they don't give vendors advance notice.
if they are doing this for malicious purposes, yes
if it is for an ideological stance, then, well, it depends on how you view their ideology.
what happens if the law is incorrect?
again, letter of the law vs spirit of the law.
"normative argument about whether or not something is ultimately unethical"
laws are normative arguments about whether or not something is ultimately unethical.. not neutral 'things' that exist in a vacuum. and they can be correct or incorrect, and also incompletely defined..
how does acting completely unethically yet entirely within the law for malicious purposes fit into your framework?
Say for example, actively portscanning (legality nebulous) for already infected computers and then overcharging 2000% for cleanup? Then spamming virii from a jurisdiction where it is not illegal in order to grow this 'business'? All legal.. so it's "white hat?" or is it 'grey hat' because it is in a legal 'gray area'? I don't think that's what grey hat means either..
> laws are normative arguments about whether or not something is ultimately unethical
That wasn't the distinction I was making. A law is a positive statement. An argument of what should be lawful, or an interpretation of a law, is of course normative. But I already said that in this thread.
By the "letter of the law" (section 9(4)(a) of the SEC act and existing case law), stock manipulation involves promulgating outright falsehoods. Case law shows us that exemplary falsehoods have to be categorically untrue; a biased presentation of something that is true does not pass the bar. Being that there is a vulnerability here, the material we have to go on does not paint a favorable outlook on the researchers being indicted. Activist investors routinely present facts to the media with a clear agenda, but the SEC virtually never prosecutes them if there is an inarguable, material kernel of truth to their allegations. There's a vulnerability here. Reasonable people can disagree on the severity of the vulnerability and how it should have been disclosed. But it's not fraud.
> how does acting completely unethically yet entirely within the law for malicious purposes fit into your framework?
Your question has a presupposition; if the security researchers traded on their knowledge of this vulnerability, I find that to be neither unethical nor illegal stock manipulation.
I'm sure they believe that, but to be blunt, that changes the definition of "black hat" from "compromising people with security vulnerabilities" to "doing things I personally find unsavory when publicly disclosing security vulnerabilities."
If people want to bend over backwards to make an argument about the abstract way in which people are harmed by small disclosure windows, activist investing or information asymmetry in the market, they're free to do so. But none of those things qualifies as black hat behavior. Definitions require precision to be useful, and you throw all precision out the window if you decide to lump people with disclosure habits you dislike in with organized criminals stealing identities en masse.
> If the term is flexible, why the hard reaction to my flexing of it?
The terminology is not flexible, it has a well established meaning. If your bar for a black hat includes legitimate security researchers disclosing vulnerabilities in a way you don't like, you've just expanded the group of people we can call "black hats" almost arbitrarily. You're putting security researchers you have a normative disagreement with into the same group of people who commit actual fraud, steal identities and sell your credit card data.
"Although we have a good faith belief in our analysis and believe it to be objective and unbiased, you are advised that we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports." from the disclaimer
It's quite unsettling that Linus thinks so little of security in general, given that he maintains a kernel and is responsible for accepting its security modules, which are next to unusable because of their complexity. Could his general disbelief lead to a (kind of) dismissive attitude in this respect? Keep in mind he's the one who would never properly disclose a security fix - instead of saying which problem is fixed, the general approach is to just publish a new kernel minor release and say "some security bugs are fixed, go figure".
24 hours means they don't deserve to be called security researchers. They're exploit creators. Given the material effect this would have on AMD's stock, one might also reasonably speculate about their financial interests.
Creation and release are two different things. They have created the exploits, or else AMD wouldn't be taking them seriously. They have also contributed more to the re-creation of those exploits by others than they have to security. So you can quibble over whether others use the exact jargon that you would have, but that doesn't change the underlying reality.
If the vulnerabilities were real, I'd have no problem with a company using them to promote itself, trade and talk its book, etc. The issue here is that the vulnerabilities are very overhyped (some are fundamental things like "if you reflash your BIOS with evil, you're screwed"; some just make local root access more persistent; etc.).
The problem with something like TRO LLC is that markets don't move on security info.
Is it wrong that my immediate reaction to that was "Wait, isn't/wasn't the PSP a portable? Did they actually use x86/x64 AMD processors in those? How?? AMD's traditionally been poor on power management!"
Who do you think you speak for? Assuming the vulnerabilities aren't fabricated --- it's happened before with other companies --- attaching your name to that white paper probably guarantees you lifetime employment in security research.
"Unheard of"? People have dropped serious vulnerabilities with _zero_ warning before.
Some researchers coordinate, some researchers don't. For a project originally organized around the principle of getting not just research results but functioning exploit code deployed regardless of vendor preparedness, look no further than Metasploit.
If you're referring to vulnerability research twitter, and not, I don't know, IT security twitter, then no that's not what's happening.
The CTS-Labs people are taking shit from vulnerability research twitter for overhyping the findings (meaning: they released a report on a day ending in "y"). People are noting the connection to the short selling --- but since this will be the 3rd or 4th time someone has very publicly done that, I don't see anybody shocked or outraged by it.
But this public ostracism you referred to --- specifically the notion that dropping vulnerabilities with 24 hours notice would reliably generate it --- is fictitious. I'm not sure how you can be a part of the vulnerability research community and believe that there is public shunning attached to dropping zero-days, since many of the best known people in the community have repeatedly done exactly that.
I somehow sense you have already made up your mind about the ethics of this, have your own - rather fixed - views of what the majority of researchers think of it, and are unwilling to listen to opposing arguments. I'll stop trying.
They had all the marketing material available and ready to go (and I bet that took more than 24 hours to make). The 24-hour notice is just an out against the usual accusation of publishing an exploit without giving notice. They knew full well AMD couldn't even verify it in 24 hours, letting them get the full publicity while coming off as a reputable security firm.
If you put 10 people who find and publish security vulnerabilities professionally in a room, I do not think you would secure agreement that this is a "clear breach of ethics". There are extremely well-known researchers who have made a point of not coordinating with vendors; vendors, historically, have been far more abusive than researchers.
But security researchers don't exist in a vacuum: they're part of larger society. If the security researcher subgroup has a code of ethics that diverges too far from the popular perception of what their code of ethics should be, I could see popular pressure to bring them into alignment (all the way up to using the legal system).
I'm not saying the non-security researcher users on HN have an opinion representative of the public as a whole, but this comment and a previous question asking another user what security research they've published may point to such an ethics disconnect between security researchers and the broader populace -- or simply a disregard for the concerns of the broader populace. I think it would be beneficial for security researchers (or any professional group) to listen to ethics concerns of the broader group they're a part of.
On another note, I would also assert that abusive actions by vendors do not excuse abusive actions by researchers (and vice-versa).
Public security researchers compete with state-sponsored research teams and organized crime syndicates. Both of the latter entities are better funded than even commercial vulnerability teams, and neither of the latter publish any vulnerability information. I have a hard time ever seeing public researchers as the bad guys in these stories.
The title here is misleading. The vulnerabilities were not actually publicly disclosed, the only thing that was disclosed publicly is the fact that the vulnerabilities exist. The actual details of the vulnerabilities were disclosed privately with AMD.
What can AMD do with 24h notice? Could they even verify the veracity of the claim in that time?
As an AMD system owner, I would much prefer that big flaws were disclosed in a coordinated manner with AMD - giving them a fair chance to verify and find a solution - rather than giving bad actors a head start.
Depends on the flaw, might give them long enough to make a fix  (they might have one in the pipeline, or one they kept from release because of effects on speed, you never know) but more than likely gives them long enough to decide _how_ to handle it.
I'm partially recalling a fix, on Facebook I think, that was implemented within a few hours of reporting; it was a testing API that got exposed. Different field, of course.
It also seems like you could make security claims and then perform market manipulation on the stock. Giving yourself 24 hours of lead time and making AMD look bad would allow you to short the stock. It doesn't seem to have impacted the stock at all, though.
Wild guess / conspiracy theory:
Intel, afraid of the damage to their image, made worse by a diminished performance advantage over AMD (due to Meltdown), and fearing long-term market loss, quickly found a way to tackle the issue: instead of pedaling hard to regain trust, damage a competitor's image. It seems like a reasonable long game to support, and perhaps steer, the disclosure of the AMD vulnerabilities that CTS-Labs had been investigating. Or maybe it was Intel investigating themselves, with some cards up their sleeves, but needing some other entity to do the public disclosure.
Other theories discussed here seem less far-fetched than the above, but in any case, it does smell funny.
My guess is that some researcher found something and decided to maximize profits, scraping the bottom of the barrel of quasi-vulnerabilities, creatively exaggerating, and bringing in the lawyers, the financiers and the PR weasels needed to throw a scary web site and a misleading "white paper" at AMD. We'll see what CTS works on next.
A security researcher claims to have access to the full (non-public) technical report as well as PoC exploits for it. He says they're legit, and they are flaws, not just "you can do admin things with an admin password".
Good question. They call the "MASTERKEY attack" that requires a reflashed BIOS "remotely exploitable" because on some systems, the BIOS can be flashed from the OS. They then speculate: "On motherboards where re-flashing is not possible because it has been blocked, or because BIOS updates must be encapsulated and digitally signed by an OEM-specific digital signature, we suspect an attacker could occasionally still succeed in re-flashing the BIOS." (Page 9 in the PDF.)
I'm not a professional security researcher but this is looking pretty darn flimsy. I also don't see any proof of concept code anywhere -- the "whitepaper" seems to just claim these things exist with very little mention of how to exploit them. Compare against Meltdown/Spectre, which was highly technical and had lots of PoC code. This just says "Upload malware to the processor" without further comment.
I'm not saying they didn't find anything, but whatever they found, they've hardly disclosed it.
Apparently all these can only be exploited if you already have administrator privileges. Raymond Chen calls that "being on the other side of the airtight hatchway" and has written about it numerous times.
Since all of this seems to be related to "Secure Boot" and other DRM related crap, can we please just have the option of booting with minimal firmware support, no hidden code, and go for a completely open, community maintained, and audit-able by /anyone/ infrastructure?
No, I don't want HDCP or any similar crap; let me run my servers and desktops in secure mode.
Insider trading claims might be difficult since you can claim the vulnerabilities were public knowledge waiting to be discovered, but...
Can you trade on knowing the security disclosure timeline prior to your publication of the vulnerability? That would seem to be insider knowledge until AMD authorizes publication. E.g. I've got knowledge that AMD likely wouldn't be able to fix the flaws prior to my disclosure. That knowledge would inherently be non-public.
Insider trading usually implies coming into possession of confidential information and acting on it. Trading on non-public information that results from your own research and then announcing it is not illegal.
Imagine someone buying stock and then saying the company is good. Not very controversial is it. Warren Buffet does it. Shorting stock and saying the company is bad is just the flip side of it.
In fact, there are equity research companies that do specifically that (e.g. Muddy Waters). Whether that research holds water or not is for the market to determine (AMD is up on the day).
> Trading on non-public information that results from your own research and then announcing it is not illegal.
Correct. I'm not referring to this. I'm referring to trading on information discerned from communications with e.g. AMD but prior to disclosure of the vulnerability, especially if those communications establishing e.g. timelines are only disclosed after trading.
Hence my point about trading upon understanding AMD's response timeline e.g. from emailing them.
The 24h disclosure should not be too much of a problem, since they state:
> "we are letting the public know of these flaws but we are not putting out technical details and have no intention of putting out technical details, ever"
It's always a risk, because now people know where to look to recreate it themselves, it's not like this is a full-disclosure release where you're SOL as a manufacturer and have to race rampant public exploitation.
The upside of this is that most of these vulns are ineffective after disabling the AMD "Secure" Processor at boot, which is now an option in most firmware. Without breaking the manufacturer's firmware upgrade key, you cannot execute the first one to toggle the settings.
The interesting one is against Promontory. It still requires VM host access to exploit so the impact is limited.
> AMD is in the process of responding to the claims, but was only given 24 hours of notice rather than the typical 90 days for standard vulnerability disclosure. No official reason was given for the shortened time.
90 days is not a standard. Nothing was shortened. People are allowed to publish their research whenever they like. Vendor advance notification is optional.
And the users downstream of these bugs, who are made more widely vulnerable (obscurity is in fact a component of security, as anyone who watched previously rare MitM attacks become commonplace once Firesheep and similar tools were publicized can attest), are...?
Well, fuck 'em, I guess.
Responsible disclosure, contrary to the super-cool leet-kid notions expressed by people who choose to exhibit an underdeveloped social conscience, is not about doing a solid for the companies who have vulnerabilities. It's for the users who consume things. Security researchers are effectively taking upon themselves a role of public service. That comes with responsibilities to the public, not to AMD or whoever.
Meanwhile, this crew looks like they briefed the media before telling the vendor, which is all kinds of fucked.
Here's the strongest version of the claim that I understand:
1. All of the relevant people, i.e. "the users downstream of bugs" are already vulnerable.
2. It's possible, maybe even probable (or likely), that people, other than the researchers that are disclosing the vulnerability, have also discovered the same vulnerability and, furthermore, that those others can exploit the vulnerability.
3. Every delay in disclosing the vulnerability prevents the victims from protecting themselves from any bad actors mentioned in (2) through means more drastic than applying a patch or similar from the relevant vendors (e.g. taking the affected components offline or otherwise making them unavailable).
The argument hinges on the probable number of bad actors mentioned in (2). If you assume that the disclosing researchers are the first people to discover the vulnerability, then it would possibly be best for them to first disclose the vulnerability to the relevant vendor or vendors. But note that even vulnerabilities disclosed to vendors can be leaked to bad actors.
And if you don't assume that the disclosing researchers are the first people to discover the vulnerability, then not disclosing ASAP prevents people from protecting themselves.
I think your perspective is a bit narrow. If you consider each individual person, (2) is indeed nonsense. However, the impact of many hacks comes disproportionately from high-value targets.
Some high-value targets (e.g. key infrastructure, parts of government, major enterprises) have dedicated security teams, and can come up with a pretty decent response if given the appropriate information. Divulging vulnerability information widely, in particular, may or may not be a net benefit to them. (Consider e.g. Linux vendor vulnerability lists.)
Other high-value targets (e.g. journalists, human-rights activists, etc.) are utterly outgunned by their adversaries (who can afford to buy or find new vulnerabilities), and can only hope that something causes vendors to consistently write software that's sufficiently-uneconomic to exploit. In the sufficiently-long run, proponents of full disclosure would argue, anything that increases the cost of shipping vulnerable software should help these users.
(Disclaimer: absolutely not speaking for my employer here.)
Nobody appointed these security researchers to the authority you assign to their actions, though. Burning the immediate user on the off chance that it helps the hypothetical future user is some very weak tea.
I agree that some proponents of immediate disclosure would claim that their actions encourage vendors to ship less vulnerable hardware or software. I do not believe that that, in the general case, is why it is being done. And I am certain that that, in this specific case, is not why it was done.
Well, the very idea that there is some time limit on mitigation before the flaw is disclosed anyway is itself that "very weak tea".
However, overall, I agree with you. Person with exploit needs to compare the probable consequences of disclosing at time N vs. disclosing at time N+1.
If it's being exploited in the wild and users can meaningfully self-protect, disclose now!
If the vendor will probably have a patch in 2 weeks, there is not widespread exploitation of the vulnerability, and disclosing now will cause widespread exploitation, disclose in 2 weeks.
If the vendor seems like they will never issue a patch on their own (because significant time has elapsed), such that at some point in the future there's going to be widespread exploitation and you're only hastening that a bit, go ahead and disclose now.
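The three rules above can be condensed into a toy decision function. The 14-day threshold and the input names here are illustrative assumptions made for the sketch, not an established disclosure policy:

```python
from typing import Optional

# A toy encoding of the "disclose now vs. wait" heuristic. The 14-day
# patch window is an assumed threshold, chosen only for illustration.

def should_disclose_now(exploited_in_wild: bool,
                        users_can_self_protect: bool,
                        vendor_patch_eta_days: Optional[int]) -> bool:
    """Return True if immediate disclosure is the suggested choice."""
    # Rule 1: active exploitation plus meaningful self-protection.
    if exploited_in_wild and users_can_self_protect:
        return True
    # Rule 2: a patch is realistically imminent, so wait for it.
    if vendor_patch_eta_days is not None and vendor_patch_eta_days <= 14:
        return False
    # Rule 3: no credible patch timeline; disclosing only hastens
    # what is coming anyway.
    return True
```

Real decisions of course weigh far more inputs (severity, exploit difficulty, downstream users), but the sketch shows why the same researcher can reasonably land on opposite answers for different bugs.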
It is neither the vendor nor the researcher’s place to make those sorts of decisions on behalf of the end user, while keeping the end user ignorant of the fact that such a decision has been made for them.
A few (far from all!) software vendors might realistically be able to respond and issue a patch in 24 hours. But a hardware vendor cannot. See Intel's recent debacle for what happens when a silicon vendor rushes a security fix out of the door without going through a proper multi-week QA cycle.
And at other times 90 days may be inadequately short. But 90 is just a round number someone at Google thought was a good idea. And now it's become "standard".
I can go with immediate, or I can go with never. But realize that every vuln is different, and their impact (or hardship of writing or applying patches) may not always be fully understood by stakeholders involved before or immediately after the details are released [CVE-2015-0235].
The website's disclaimer says "you are advised that we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports".