Some of my favourite threat actors to model are the ones developers can empathize with: what I call "grad students and conference papers", where someone doing it for professional prestige is pulling your feature or product apart.
A lot of threat modelling is storytelling around your product: who uses it, why, and how could someone else benefit from stealing what your company gets out of it? I formalized that threat-story development into a product a few years ago, and it actually generated security epics and stories, but I ran out of runway and moved back into consulting.

I think the reason people get into security is precisely the threat modelling aspect of it. It's really the fun part of the field, where you get to inhabit a kind of cyberpunk noir world of spies, opposition researchers, organized crime, hackers, and activists. I moved on from it because unfortunately 95% of the material demand for security is driven by compliance, which is a set of methodologies designed specifically to avoid acknowledging your real threat model, because writing it down creates discoverable liability for potential negligence in some enterprise cultures. Taking risks of all kinds is the game in business, and narrating the risk game while winning it can be hard to reconcile.

However, a company that can look at its real risks and threat model and get everyone down to the developers aligned on what they're doing and why probably has a 10x culture. It might be worth looking at whether threat modelling could serve in product orgs as the founding narrative that gets that kind of alignment and supercharges their teams.
It’s probably safe to say that some major hacks have been made possible by security researchers publishing POCs for vulnerabilities that will never be completely patched by everyone. It’s not as if a POC is just a tool that’s being abused; it literally has only one purpose.
Blaming the security researcher is a bad idea. Without the security researcher, the tool would still be created and used by somebody. The security researcher highlights the problem and makes people understand the danger in it. Without the POC, it is highly likely that people who know nothing about cyber security just wouldn't care about the vulnerability.
I’m not blaming them outright. I’m just saying they’re not entirely without fault. Releasing vulnerabilities and POCs for major applications should be handled with the utmost care and responsibility, unless they don’t care about the consequences of their actions. There are tons of script kiddies out there using POCs who would never be smart enough to create them themselves. A POC abstracts the actual hacking process down to grabbing a git repository and executing it against a victim.
I'm currently advising a Fortune 500 on their autonomous vehicle work. During parts of the design process I've tried to get them to do threat modelling. Every engineer is interested in it. Middle management, enterprise IT, and enterprise security have tried their hardest to shut it down.
The reason is probably control. Middle management wants to control the narrative and be able to push blame away. Enterprise IT thinks engineers are too stupid to understand security and wants to nanny them. Enterprise security is a black box that no one speaks to directly, but it considers itself above the peasants.
It most reminds me of Eddie Izzard's "Encore on Computers". They will almost certainly come back with concerns but not give any suggestions for workarounds: "You have an issue in your design." "What is the issue? Can you tell me, please?" "I'm sorry, I can't tell you, but let me know when you've fixed it."
You can't speak directly to the security team. Nobody knows what they do; nobody knows who reviews what. They also rarely have a background in security and don't know how to do threat modelling either. On top of that, it's becoming increasingly clear that their centralized approach to security has created the biggest security vulnerabilities possible, e.g. Exchange and SolarWinds integrated into the normal LAN environment while everything else receives extra scrutiny.
I'm in security, doing testing for internal project teams. Another consultant is typically advising them on general controls, policy and such, and telling them to come to my group to get testing done.
I think a large part of what we do comes after the report: explaining what it all means, and how these various high/medium/low issues might be combined to create havoc with much more impact. Very low-key threat modeling is part of this; as someone else mentioned, storytelling. Really, it can be something of a mental pen-test, in that you've helped guide a narrative from where someone might discover issue A, leverage it to find B, and so on.
The biggest obstacle I struggle with in getting good threat modeling done is keeping the model up to date as design decisions and constraints change.
Building a threat model into the ‘design phase’ only makes sense if you have a design phase, but an iterative dev cycle is just continuous, overlapping design and dev phases. It can be really hard to recognize that a particular iteration of a design change is the one that broke an assumption built into your previous threat model.
I’m really interested in ways to turn threat models into verifiable constraints and executable tests so iterative development can proceed safely in the framework of agreed security guardrails... but most threat modeling literature seems to end at ‘having a threat model’, not ‘verifying your threat model actually applies to your system’...
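One way the 'verifiable constraints' idea could look, as a minimal sketch: describe the architecture as data and turn each assumption from the threat model into an assertion that runs in CI, so the iteration that breaks an assumption fails the build. The schema, service names, and the two constraints here are all invented for illustration.

```python
# Hypothetical sketch of a threat model expressed as executable constraints.
# The architecture description format is made up; in practice it might be
# generated from IaC or service manifests.

ARCHITECTURE = {
    "services": [
        {"name": "public-api", "exposed": True, "auth": "oauth2"},
        {"name": "billing-db", "exposed": False, "auth": None,
         "holds_pii": True, "encrypted_at_rest": True},
    ],
}

def violations(arch):
    """Return the threat-model assumptions the current design breaks."""
    problems = []
    for svc in arch["services"]:
        # Assumption 1: anything internet-exposed requires authentication.
        if svc.get("exposed") and not svc.get("auth"):
            problems.append(f"{svc['name']}: exposed without auth")
        # Assumption 2: stores holding PII must be encrypted at rest.
        if svc.get("holds_pii") and not svc.get("encrypted_at_rest"):
            problems.append(f"{svc['name']}: PII not encrypted at rest")
    return problems

# Run as a CI gate: an empty list means the model's assumptions still hold.
assert violations(ARCHITECTURE) == []
```

The point is less the specific checks than the mechanism: the threat model stops being a document and becomes a test suite that a design change can fail.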
Covid lockdowns and WFH had a surprisingly large influence on self-service threat modeling facilitation I did for a client.
Pre-covid we would facilitate workshops interactively around a whiteboard. It turns out that starting with an empty one was essential for successfully thinking about risk.
During lockdowns whiteboarding was difficult, so we often discussed the application stack in a virtual meeting using a PowerPoint or Visio diagram provided by the team. Without fail this diagram was huge and included lots of detail, which didn’t help group discussion because basically only the author understood it well enough. After a number of these unsatisfying attempts we started holding separate virtual meetings to create high-level diagrams and abstract away the initial detail overload. This often resulted in teams discovering their CI/CD pipeline and having a healthy discussion about scope.
This article points out the obvious: security engineers are few, it'd be cool if devs could run and maintain their own threat models, you can help people with training, and STRIDE can be useful for beginners.
What it doesn't address are what I think are the important points of a threat model:
* What tool or format do you use for threat modeling? There are unfortunately not many solutions. Sure, you can do whatever, but uniformity helps (as opposed to every team using a different format), and spreadsheets really suck. I think there's a real need for a tool/SaaS in this space.
* How do you integrate threat modeling into your processes? Doing a threat model once is nice, but you need to revisit it often and make sure it's up to date.
* How do you analyze the outcome? Threat modeling is the perfect tool to identify gaps and prioritize security work, unless people discard it as soon as they're done with it. How do you milk what you've done, and how do you make sure others prioritize what came out of the threat modeling exercise?
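On the last point, one sketch of how the outcome could be kept alive rather than shelved: turn each identified threat into a scored item that sorts straight into a backlog. The `Threat` class, the 1-5 likelihood/impact scales, and the example threats are all made up for illustration.

```python
# Hypothetical sketch: threat-model findings as scored backlog items.
from dataclasses import dataclass

@dataclass
class Threat:
    title: str
    likelihood: int  # 1 (rare) .. 5 (expected)
    impact: int      # 1 (minor) .. 5 (catastrophic)
    mitigated: bool = False

    @property
    def risk(self):
        # Simple risk score; a real program might weight these differently.
        return self.likelihood * self.impact

def backlog(threats):
    """Open threats, highest risk first, ready to feed into sprint planning."""
    return sorted((t for t in threats if not t.mitigated),
                  key=lambda t: t.risk, reverse=True)

threats = [
    Threat("Spoofed webhook sender", likelihood=4, impact=3),
    Threat("DB credentials in repo", likelihood=2, impact=5, mitigated=True),
    Threat("Tampered CI artifact", likelihood=2, impact=5),
]
for t in backlog(threats):
    print(f"[risk {t.risk:>2}] {t.title}")
```

Even something this crude gives other teams a ranked, reviewable artifact to prioritize against, instead of a diagram that dies after the workshop.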
Most developers are underpaid. If the people above them wear flashy suits, drive to work in sports cars, or share pictures of their mansions, it's easy to exploit that feeling of envy. Befriending a developer and then offering them good money for information is probably the simplest way in.
I had the pleasure of meeting these folks while they were building out this program. It was incredible to see the work they put in to truly scale out threat modeling by enabling dev teams to understand security risk.
Kudos to them on helping push us to a more secure future!