Software Integrity Blog


Threats threatening with threats

By now, everyone has heard of the Mandiant report. Many of you have taken the time to read it. This report, and the discussion it generated, refer to ‘threat’ so frequently that it’s worth discussing how its use of the word differs from what you commonly see here.

The buzz around hundreds of individuals poking at systems, finding vulnerabilities, weaponizing exploits, and crafting insidious malware invigorates security practitioners and contributes to the F.U.D. driving our industry. A range of individuals, including Mandiant’s Richard Bejtlich, urge organizations to think about WHO attacks them: threat agents. The Mandiant report, for reasons rooted in geopolitics more than in application security, references particular threat personas by name: Ugly Gorilla, for instance.

We, like those focused on threat intelligence, also believe that considering who attacks you is important. We refer to these agents of ill intent as the “Threat,” but do so without conflating the threat’s common attack vectors into the term (*1). Sometimes, when delivering vulnerability discovery services (like penetration testing, source code review, or assessment), we focus less on motivation and goals than threat intelligence folk do.

Other firms will reply, “We don’t!” They proudly state, “We’re not just pen-testers; we find business logic flaws.” When pressed, they eventually mutter, “Well, you can sometimes browse to important actions without authenticating.” That’s not a business logic flaw… not unless you base your annual revenue on a URL-security AuthN product. Sometimes I hear, “We got the site to ship a product without payment.” OK, that is a business workflow violation. Implement the state chart pattern and move on: we’re still not anywhere close to deeply understanding our threats’ motivations.
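The state chart remedy mentioned above is simple enough to sketch. This is a minimal, hypothetical illustration (the state names and `Order` class are invented for this example, not taken from any real system): shipping is only reachable through the paid state, so the “ship without payment” workflow violation becomes an illegal transition rather than a reachable action.

```python
from enum import Enum, auto

class OrderState(Enum):
    CART = auto()
    PAID = auto()
    SHIPPED = auto()

# Legal transitions: SHIPPED is only reachable from PAID,
# so payment cannot be skipped.
TRANSITIONS = {
    OrderState.CART: {OrderState.PAID},
    OrderState.PAID: {OrderState.SHIPPED},
    OrderState.SHIPPED: set(),
}

class Order:
    def __init__(self) -> None:
        self.state = OrderState.CART

    def transition(self, new_state: OrderState) -> None:
        # Reject any transition the state chart does not permit.
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(
                f"illegal transition {self.state.name} -> {new_state.name}"
            )
        self.state = new_state

order = Order()
try:
    order.transition(OrderState.SHIPPED)  # attempt to ship without paying
except ValueError as err:
    print(err)

order.transition(OrderState.PAID)
order.transition(OrderState.SHIPPED)  # allowed only after payment
```

Because every action is checked against the chart, the fix lives in one place instead of being scattered across request handlers.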

Organizations give us proxies for motivational understanding: specific assets and privileged functions within their IT systems. Rather than focus on individual personas and their motivations, we enumerate threats based on:

  • Skills and capabilities;
  • Access to attack surfaces.
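This enumeration can be expressed concretely. The sketch below is a hypothetical illustration (the threat names, capabilities, surfaces, and vectors are invented for this example): each threat is described by what it can do and what it can reach, and the attack vectors that apply to a system fall out of the intersection.

```python
# Hypothetical threat enumeration: each threat has capabilities and
# access to certain attack surfaces.
THREATS = {
    "anonymous internet user": {
        "capabilities": {"http_request", "fuzzing"},
        "surfaces": {"public_web"},
    },
    "authenticated customer": {
        "capabilities": {"http_request", "parameter_tampering"},
        "surfaces": {"public_web", "account_api"},
    },
    "insider": {
        "capabilities": {"db_query"},
        "surfaces": {"internal_network"},
    },
}

# Each attack vector requires a capability and an accessible surface.
VECTORS = [
    ("SQL injection via search form", "fuzzing", "public_web"),
    ("IDOR on account endpoints", "parameter_tampering", "account_api"),
    ("direct data exfiltration", "db_query", "internal_network"),
]

def applicable_vectors(threat: str) -> list:
    """Return the attack vectors a given threat can actually exercise."""
    t = THREATS[threat]
    return [
        name
        for name, capability, surface in VECTORS
        if capability in t["capabilities"] and surface in t["surfaces"]
    ]

print(applicable_vectors("authenticated customer"))
```

The payoff is the one described above: for any component, you can tell developers exactly which threats can reach it and what those threats are capable of, instead of handing them a generic vulnerability checklist.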

Understanding threats’ capabilities and the surfaces to which they have access allows us to give developers specific explanations of the attack vectors that apply to their system. This prevents the “that couldn’t happen on my system; it’s different” syndrome. By defining capabilities and attack surface in a software-centric, technology-specific way, we’re able to give concrete guidance as to the structural elements of their systems that need improvement.

In fact, let’s back up and think about the differences in threat modeling participants:

  • Threat Intelligence: identification of threats, with interest in motivation, economics, and priorities
  • Vulnerability Discovery: discovery of exploitable attack vectors within software/systems
  • Risk Management: evaluation, scoring, and prioritization of identified vulnerabilities

It’s pretty clear: you need to incorporate all three perspectives to succeed thoroughly. See the graphic below, which uses the same style as our initial Threat Modeling Glossary and relates the above actors, their activities, and their deliverables. As you can see, threat intelligence folk have a role to play in helping vulnerability discovery practices understand what they’re not yet looking for, closing the gap.


How can these different stakeholders benefit each other?

Security Research – Finding new vulnerabilities to look for

Vulnerability discovery groups sometimes have security researchers. These researchers, in my experience, take a technology-centric view of the problem, asking, “What could be exploited in the toolkits our developers are using?” Don’t conflate this role with threat intelligence’s job: while technology-specific vulnerability research is important, it’s quite different from understanding threat motivations and economics.

By discussing threats with threat intelligence, security researchers may get direction on how attackers will misuse tech stacks and gain insight into how their research should be prioritized. Threat intelligence may also show vulnerability discovery how to conceive of attacks exploiting vulnerabilities, perhaps lower down the tech stack, that have second-order effects (“stay hidden”, “insert key logger”, etc.). Opening this communication bridge may drive vulnerability discovery practitioners to look for different things, or to look at software differently.

Likewise, in reverse, security researchers can help threat intelligence understand where threats could use tech stack vulnerabilities in ways the attackers haven’t yet discovered (or haven’t let on that they’ve discovered).


A few stakeholders impact a mature threat modeling process. Those speaking on threat modeling, ourselves included, sometimes forget that other stakeholders and important threat modeling tasks exist elsewhere in the organization and should absolutely influence their own work.

