Book review: Reading Shostack’s ‘Threat Modeling’

Increasingly, individuals and organizations alike express interest in building their own threat modeling capabilities. Some ask, “What do you think about STRIDE?” Others ask, more generally, “How can I help developers think about our systems’ security properties?”

Synopsys has published a bunch of valuable threat modeling material, but the biggest single body of work continues to come out of Microsoft. Microsoft’s Shostack recently released what amounts to a replacement for Swiderski and Snyder’s Threat Modeling book. It’s a Wiley publication rather than a Microsoft Press title, but Microsoft experience dominates the work. When I read Shostack’s Threat Modeling book I took lots of notes, intending to write a solid book review. Reading what resulted from those notes, I realized I’d created more of a reader’s guide for practitioners. Hopefully, you’ll find it useful.

About The Author

I’ve reflected on Adam Shostack’s work in different domains, though I’ve never worked directly with him. Compared with many in appsec, Adam operates differently. He selects thorny problems. He searches literature others may not have encountered. He looks for an underlying model to describe the problem. He seeks opportunities to apply simple tools and approaches to his models in order to move practitioners forward. There’s much with which to agree and disagree.

Contents

Threat Modeling begins with no expectation of an existing threat model or threat modeling capability. The book describes, from various angles, how to turn that blank page into something useful. Part I covers creating different views in threat modeling and elements of process (what, when, with whom, etc.). Coverage applies to both structured and unstructured approaches. Readers will find these views apply to a range of organization types. Most importantly, Part I shows the reader what they can (perhaps even should) reasonably ignore.

Part II gives the reader help with starting to enumerate vulnerabilities; cue Microsoft’s STRIDE and MITRE’s CAPEC. This part also tackles threat trees (straining Amoroso’s construct) and provides other threat enumeration artifacts with which readers can bootstrap their own work and templates.

Part III shows readers what to do with what’s been created or used in Parts I and II. Part III speaks to software process, but does so in terms of activities, techniques, and additional supporting artifacts. It’s hard to pick a favorite between Part I and Part III; both offer abundant high-quality insight. Readers should not skim these sections, lest they miss something useful. Shostack provides a sober look at various risk management practices within Part III without succumbing to the religious spats that often result from risk management discussion. Readers from organizations without a particular (or simply an informal) risk management regime may “find themselves” in one of this section’s techniques and gain enough information to follow up.

Part IV turns sharply left. It’s written at a higher level, surveying others’ work. I found this section to have overlooked a lot of useful practice. Chapters within this part also provide summary insight into referenced research. I believe Shostack fully absorbed the referenced material and successfully used it to shape Microsoft’s security practice, business units’ use of threat modeling, and executive management’s perception. I do not, however, believe novice or experienced readers will find this material satisfying or useful within their organizations. I found myself, as a reader, referring back to related Part III coverage to remind myself of the solid, actionable practitioner advice.

Yes, I’ve omitted what some will consider important. For instance, there are 10 pages (…and 10 too many, IMO) devoted to tools. I’ve also omitted Shostack’s summaries of authN and crypto. This coverage was too light to help a security practitioner find interesting attack vectors and too abstract to educate a developer as to potential use/misuse.

Reading Microsoft Security Literature

It is important to understand where an author’s approach comes from when one applies suggested techniques within a different environment. This is doubly important when absorbing Microsoft’s offerings (be it the SDL, Threat Modeling, or SAFECode). Why?

Who They Are: Microsoft is a huge software house, an ISV, and one that develops platforms, toolchains, and office suites (excepting recent forays such as music, gaming, and cloud, all of which came well after the security initiative had solidified). It’s unlikely your organization looks like Microsoft.

Mandate: Most importantly, Microsoft is one of the rarefied companies that has sincere, highly visible executive support for its security initiatives. Few remember that Microsoft stopped all presses to train its staff after the Gates memo and conducted a big security push. Security pushes continue as part of the Microsoft SDL.

People: Microsoft plays benefactor to (and benefits from) high-quality security boutiques. They have a security researcher outreach program and their own conference. They have Microsoft Research, Trustworthy Computing, and other groups with strong security players. Microsoft posts up big numbers in the BSIMM study for folk dedicated to security and those tapped as development “security champions.” Microsoft has a developer culture. They hire smart folk with degrees in computing. Microsoft’s tester-to-developer ratio is the envy of my other clients.

Time: This is not Microsoft’s first rodeo. The Gates memo went out in 2002. Microsoft has its own security process model, assessment technique, and measuring stick. That shows dedication.

How do the above factors net out? ’Depends on your organization. Shostack provides guideposts intended to help readers adopt the content in their own environments, as well as sections on communicating the value of threat modeling. I’ve seen many a program fail because it didn’t account for the challenges of adopting Microsoft methodology as-is. Readers won’t find much concrete help with organizational change here.

Good

Coming of Age: Shostack’s Threat Modeling shows clearly discernible maturation in the threat modeling space. Whereas previous works read like impressions of what might work in the authors’ (self-acknowledged) educated opinion, this book reads as advice from someone who’s done this multiple times in differing scenarios.

Treats ‘Blank Page Paralysis’: In my experience, “blank page” paralysis kills more threat modeling initiatives than any other malady. Because the book immediately addresses the challenges that cause paralysis by providing concrete guidance, prospective readers should expect to be able to use this book regardless of experience level.

Multi-disciplinary (Developer-centric) approach: Readers from the ranks of developers, security champions, or newly anointed security practitioners (especially those in “army of one” situations) will find different perspectives from which to start (e.g. vulnerability-based, structural, and data-flow-based). Despite Shostack’s repeated dismissive protestations about their value, he includes simplified artifacts to start those comfortable with threat-agent- and asset-based perspectives. In suggesting practitioners build separate models (views) depending on the modeling target and chosen perspective, Shostack prevents another common source of paralysis: trying to shoehorn every aspect of a security design into a single view.

Job aids: Many aspects of application security remain art. The path to engineering discipline will lead us through templates, checklists, and other tools for simplification. The artifacts provided in Parts I and II, some within the appendices due to size, deliver ‘take home’ value in support of the book’s insights.

Good-enough: “Perfect is the enemy of good”; indeed. Early parts of the book show experience and insight by providing the reader with heuristics for quality and done-ness. This also helps prevent paralysis.

Onion: The book’s layout “peels back layers of the onion.” As Part I gives way to Part II, previous subject matter is enriched. Part III continues this enrichment. Until late in the book, Shostack did a great job of holding this structure together.

Risk Religion Agnostic: I firmly believe that translating Microsoft-speak and technique presents challenges. However, the coverage of risk management topics gives the reader a solid introduction of which even unstructured practices can make use.

Secure Design: The book’s subtitle, “Designing for Security,” foretells the welcome change from “pile up bugs” to “fix the problems you find.” While remaining process- and culture-agnostic, Shostack provides advice on closing the loop with development and tracking outstanding issues. Some mapping is made between the generalized security primitives provided and attack vectors (be it for risk mitigation or full prevention). Architects and developers, however, will likely not consider the guidance sufficiently specific or actionable.

Reconsider these “pros”: together they represent a big step forward for the practice.

Not So Good

Vocab: Notwithstanding the industry’s ambiguous definition of ‘threat’, Shostack’s loose terminology is at odds with his diligence. Using “threat” to refer to a vulnerability, an attack vector, and an attacker within a single two-page spread is jarring. The example threat trees provided don’t jibe with what Amoroso coined. The book’s trees display concepts like vulnerability (weakness) and attack vector (action) side by side, potentially confusing the reader as to the concept/type of each diagram component.

Seemingly in proactive defense of these vocabulary transgressions, Shostack addresses the ambiguous vocabulary and explicitly states that the only way to ‘win’ is not to argue about it. Microsoft has played fast and loose with vocabulary and prior threat modeling art since the beginning. One wonders whether Shostack is just covering that legacy here, or whether the resulting confusion among outside practitioners simply hasn’t penetrated the Microsoft echo chamber. Consistency and quality of practice will improve for both experienced and inexperienced readers who disambiguate the concepts this book mangles. Using terms consistently simplifies practice so that practitioners know exactly what they’re to accomplish when they attempt a certain technique or fill out a certain artifact template.

Attacker- and Asset-based Modeling: Shostack hits out at attacker- and asset-centric modeling throughout the book. He acknowledges that these perspectives are compelling, but argues they may waste time, confuse and frustrate, or prove unhelpful. Consider that this stance may well be borne largely of the Microsoft perspective (described above). Much of Microsoft’s perspective, as an ISV, stems from developing software platforms and products. They simply can’t be expected to imagine all the ways a customer might use these products, and for what business purposes. It’s hardly surprising, then, that they focus on a software-centric approach. But it’s also not hard to understand why a bank, with a clear understanding of its assets, might focus on those assets. Nor is it unreasonable, when only a select few adversaries are capable of DoSing an organization, for that organization to seek a deep understanding of those adversaries’ capabilities and motivations as part of its threat modeling. Some of these organizations get more specific about software- and vulnerability-centric perspectives as well. They do this because their portfolio of apps may closely orbit a single design archetype (think services like Twitter).

Organizational threat modeling programs I’ve observed (both home-grown and those I’ve built) make very productive use of attacker-based and asset-based approaches in concert with a software-based perspective. Josh Corman, amongst others, has provided valuable material on techniques for an attacker-centered threat modeling perspective. Organizations like Akamai (for which Corman worked), eBay, and others have shared the value of these perspectives first-hand.

In medium and large organizations, various (sometimes non-IT) stakeholders collaborate to flesh out the non-software-based perspectives and uncover impactful security concerns to be addressed by secure design. Readers, in my opinion, will do well not to confine themselves to the software-centric perspective alone, especially if they look little like Microsoft.

Build or Buy: Coverage includes a wide breadth of topics and perspectives, but techniques unique to (or specially tuned for) threat modeling open source software and software acquired from other vendors are largely absent. Readers may find it difficult to apply the book’s techniques because they can’t get the necessary developer engagement or developer-generated artifacts (source code, DFDs, diagrams, and so on). Because most organizations deploy systems for which the vast majority of the code is adopted or purchased rather than developed in-house, this might represent a large blind spot and a real stumbling block.

STRIDE: Is STRIDE a ‘con’? I used to believe emphatically “yes,” but less so now. There are limitations to the STRIDE mnemonic and to using it as the conceptual frame for enumerating attack vectors. I believe the attack concepts (e.g. spoofing) remain too high-level to support broad-ranging attack enumeration. Trying to fit even common, long-standing attack patterns into this framework requires a Dremel® tool and a hammer. For instance, Shostack places ‘command injection’ under ‘Elevation of Privilege’. Other Microsoft resources couch these injections and others as [resulting from] “weak input validation.” Both characterizations shear off important functional aspects of the underlying command pattern and thus leave practitioners with too narrow a view to come up with certain natural and effective mitigations. The industry watched this kind of overly narrow characterization misdirect the treatment of XSS by some within OWASP from at least 2003 to 2006. First, so-called experts hammered on the input validation peg in an attempt to plug the hole. Then they switched to an output-encoding peg. Eventually we saw Caja and CSP, two strategies (not without their own limitations) born of what I believe to be their inventors’ more natural (and effective) perspective.
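To make the point concrete, here is a minimal sketch (my own illustration, not the book’s; the archive_log functions and the filename parameter are hypothetical) of why framing command injection purely as “weak input validation” can hide a more natural mitigation, namely keeping untrusted data out of the shell entirely:

```python
import subprocess

def archive_log_vulnerable(filename: str) -> None:
    # Builds a shell command by string concatenation; a value such as
    # "app.log; rm -rf /" smuggles a second command into the shell.
    subprocess.run("gzip -k " + filename, shell=True, check=True)

def archive_log_safer(filename: str) -> None:
    # Avoids the shell entirely: the filename travels as one argument,
    # so shell metacharacters are never interpreted as commands.
    subprocess.run(["gzip", "-k", filename], check=True)
```

Input validation still helps as defense in depth, but the second version addresses the functional root of the pattern (an interpreter mixing data and commands) rather than chasing dangerous characters.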

That having been said, I’ve observed first-hand gifted, highly experienced security architects teach novice practitioners to use STRIDE productively when the organization has chosen to adopt this approach. So, whereas STRIDE may not scale in breadth and depth the way I’d like, it does, like other resources within the book, help solve the blank slate problem.

Is thinking like an attacker dangerous?: Surprisingly, Shostack comes out against “thinking like an attacker” while writing within a hair’s breadth of concepts like misuse/abuse cases. This is yet another concept from which I’ve seen organizations benefit. Thinking back over my short career, it’s hard to overstate how useful misuse/abuse elicitation has been, with value sometimes measured directly in millions of dollars.

How could Shostack and I come to diametrically opposed positions on this? The answer lies in asking: wasteful for whom? Developers. Developers are some of the most optimistic humans on our planet. They regularly sign up to produce something whose feasibility they have no handle on, let alone a solution approach. They do so on a deadline, to boot! Developers’ success is entirely predicated on seeing how things might work. Hello-world programs, stubbed-out components, and end-to-end prototypes reinforce this predilection. It’s a rare developer that can “think like an attacker.” For that matter, it’s a rare rank-and-file QA engineer. With Shostack I’ll agree on this: IT folk, especially developers, from within your organization are not likely to be good at thinking like an attacker. Indeed, training them to do so is, with rare exception, a lost cause.

However, business folk (real business analysts, not their IT representatives), operators, customers, and experienced security folk are some of those who can quickly think like an attacker. Work for a healthcare company? Find a contract underwriter. Rile that person up and you’ll have more misuse/abuse cases than you can handle in 60 minutes. In finance? Find a trader. Find a matrix-pricing guy. Someone who understands the business–really understands it–can think like an attacker naturally. Don’t miss the opportunity to learn from them or your threat model will suffer.

Think like a Chef?: In support of his position that “thinking like an attacker” is an antipattern, Shostack asks whether you’d ask someone to “think like a chef,” listing what are intended to be surprising concerns an executive chef takes into account in their profession. Yes, readers are just as unlikely to become expert threat modelers as they are to become master chefs; sorry, y’all. But the metaphor actually works really well for threat modeling. Even those who have never cooked can gain from watching masters cook on TV, a meal at a chef’s table, or a hands-on cooking class. Threat modeling novices who “think like a [metaphorical] chef” stretch beyond their comfort zone (sometimes penetration testing or code review, sometimes coding) and see ways to improve existing techniques as well as wholly new techniques they’ll need to practice again and again. Through practice, these experimental self-improvers will avoid the second-biggest threat modeling failure mode (remember, blank page is #1): failing by repeating well-trodden penetration testing and code review analysis.

Oddly, the book later invokes a cookbook as a metaphor. Though the book doesn’t go this far, I’ll assert that cook-booking threat models for similar technology stacks and design archetypes is the essential way to scale: both in terms of decreasing effort and in terms of engaging less experienced resources. As a reader, think about how much of a model one might reuse in going from one JEE app to another. Both may be MVC, 3-tiered architectures. Both may use a Struts-derived controller and similar view frameworks. Both may be built on MySQL. Working with design, assets, attackers, and attacks feels like working with ingredients, kitchen tools, and appliances. Reuse in threat modeling is composable, just like in cooking. Modeling perspectives (threat agents, attack surfaces, attack vectors, and assets) are like cooking techniques (poaching an egg, whipping an emulsion, searing a… you get it). Perspectives and views for a whole system combine these components into a coherent dinner (er… I mean threat model).
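To make the cook-booking idea concrete, here is a minimal sketch (my own illustration, not the book’s; the ThreatModelEntry structure and its fields are hypothetical) of a reusable “recipe” that could be copied from one 3-tier MVC app to the next and adjusted rather than rebuilt from scratch:

```python
from dataclasses import dataclass, field, replace

@dataclass
class ThreatModelEntry:
    """One reusable 'recipe': a component plus the threats and
    mitigations that tend to travel with it across similar designs."""
    component: str
    attack_surface: str
    threats: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

# Cook-booked entry for the data tier of a typical 3-tier MVC app.
mysql_tier = ThreatModelEntry(
    component="MySQL data tier",
    attack_surface="SQL issued by the model/controller layer",
    threats=["SQL injection via concatenated queries",
             "credential theft from configuration files"],
    mitigations=["parameterized queries",
                 "least-privilege database accounts",
                 "secrets management for connection credentials"],
)

# The next, similar app reuses the recipe and adjusts only the details.
reporting_app_tier = replace(
    mysql_tier,
    attack_surface="SQL issued by a batch reporting job",
)
```

The value is the same as in cooking: the technique (the recipe) is reused, while the ingredients (the specific surfaces, threats, and mitigations) get adjusted for each dish.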

Novice threat modelers don’t need to plate seven courses on their first effort. Poaching an egg will provide value to build on. Modelers will add to, as well as refine, their techniques more easily if they practice and absorb subject matter expertise (such as that of attackers and security researchers). One big difference between cooking and threat modeling is that threat modeling, as a discipline, is in desperate need of its Joy of Cooking. When we get there, we’ll know we’ve got it licked. In the meantime, keep trying to think like a chef… but keep practicing the basics as well.

Summary

Coming off the “not so good” section, it’s important to re-emphasize the positive. This book takes threat modeling forward in big steps. It conveys real experience, distilling practice into reusable artifacts. The book arms readers against threat modeling’s worst enemy: the paralysis of a blank page. It helps modelers organize their work even if their process remains unstructured. The book will serve as a helpful companion even after you’ve gotten the hang of modeling.

Readers can sidestep most of the book’s major methodological limitations if they recognize Microsoft’s historic mistakes (hey, give them credit for killing DREAD) and contextualize convictions that might not translate outside of Microsoft. I encourage readers to diligently consider the key elements that go into “threats” (a situation or actor, a surface, a set of actions, and perhaps a vulnerability, as well as the many aspects of risk management). The reason past and present luminaries ascribed specific names to these concepts is so we’d consider each and every one thoroughly and not forget something in the excitement of finding a break. Remember, we model threats to uncover unknowns and find what our other techniques have overlooked.

But in summary, buy the book. Read it carefully. Take notes. And, appreciate the incredible amount of work that Shostack obviously put into it.

Coda

My first conversation with Microsoft about Threat Modeling, which included Window Snyder, was in (what must have been) 2002. Further discussions occurred in 2004 with Mike Howard. At Synopsys, we’d already been doing threat modeling for four years. After 15 years of practice in several organizations and observation of 67 BSIMM participant organizations’ own practices, we have our own strong ideas about what works and what doesn’t work. I’ve been central to the creation, evolution, and sharing of these ideas.

It’s disappointing, and more than a little alarming, that Shostack’s only direct attribution to Synopsys’ practitioners is a podcast interview Gary conducted in 2011 (page 386, under “experimental approaches,” BTW). Others [Melton, Los, OWASP, SecAppDev, …] did better. I’ll leave that there. Other methods, PASTA for instance, are entirely absent as well.

From the very first conversations with Microsoft, it was clear their approach would grow to differ from our own. Late last year something important occurred to me: in order to be successful, threat modeling practitioners must look at our industry’s threat modeling approaches in light of the parable of the elephant. As it pertains to threat modeling, each author’s focus describes only part of the whole. Understanding and mastering this metaphorical elephant, however, is not possible without understanding its many varied parts. Shostack, I hope, sees this truth as well. That may be why his book provides the different views prior work didn’t.