Learn how to align security controls with the functional elements of a development framework to improve software security, using MVC as an example.
Practicing software security builds on knowledge of tools, techniques, and technologies, and I consistently harp on the importance of understanding development frameworks. These frameworks provide a foundation for technology knowledge, and understanding them is key to a number of BSIMM activities. Despite this importance, I’ve encountered many security practitioners who aren’t even passingly familiar with development frameworks, let alone well-versed in what security these frameworks provide and how, as Meera, one of our exceptional technologists, put it in a tweet:
— Meera Subbarao (@MeeraSRao) July 18, 2014
That lack of familiarity makes it hard—for developers and security alike. Building skillfully on anything requires understanding, be it an architecture generally or a specific instance—a framework. Framework developers seem to love architecture and abstractions and (for better or worse) design using them liberally. Whether you share their love, understanding where security fits means diving into this design.1
That sounds involved—is there a shortcut? Yes:
Align the placement of security controls with a framework’s functional design using its existing abstractions.
You don’t have to be a fancy architect to do this. Simply:

1. Read the framework’s documentation.2
2. Write down the functional responsibilities of each of its design elements.
3. Align security responsibilities with the appropriate functional element, working element by element.3
Consider the Model View Controller pattern as an example.4 Step 1 above prescribes reading framework documentation. SpringMVC, true to form, describes its elements in terms of the patterns implemented. Using only its brief introductory material, we can proceed to Step 2, writing down the elements’ functional responsibilities. If this is your first try, read the documentation, try not to get stuck on anything, and write something on the back of a napkin:
Seriously. For a first cut, this will do.
Even if you’ve not used such a development framework or studied the MVC pattern, Wikipedia equips you with enough information to know that “the model stores stuff.” And introductory database classes taught you about the ACID properties. Thankfully, our DB software takes care of ACID for us. (We’re equally thankful when the framework takes care of security for us. 😉 ) Web frameworks guide you through configuring (or coding) use of their controller elements when you dispatch your first pages following their “Hello World” tutorials. These tutorials, along with the framework’s features page, often tout how easy it is to write pages using view elements. So by the time you’ve compiled your first app, you’ve touched on the functional highlights of each of the pattern’s components without any fancy theory or experience.
A more thorough reading (still only of the introductory documentation) might yield the following:
Start with back-of-the-napkin simplicity if you like, googling a framework on your phone while you eat lunch in prep for a dev meeting. Over time, iterate your framework understanding until you reach the level that feels like mastery for you—whether it’s simpler than the above table or much more detailed. Typically, a development effort that hasn’t yet considered security can’t absorb the security detail that would accompany the more detailed table. So it’s OK to start simple, even if you’re skilled, experienced, or participate in a mature software security initiative.
Step 3 states that, working element to element, we should align security responsibilities with the appropriate functionality. Start by making a list of security features (AuthN/Z, encryption, Input Validation, Output Encoding, and so forth) and try to align those with functional elements. Yes, software security is way more than security software, but we’ve got to start somewhere. Again, the napkin:
The model elements provide a straightforward place to start the security mapping. The database package provides ACID functionality; model code provides storage and persistence for application entities. Security brings responsibility for authorization and access control on that data. The database community thought of this long ago, describing access control in part using the CRUD acronym. SCRUD adds “search,” an important factor to consider, as some public sites have been compromised because, once logged in, User A could search for and retrieve User B’s records.5a
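To make the model-side responsibility concrete, here’s a minimal plain-Java sketch (the `RecordStore` and `Record` names are hypothetical, not from any framework) in which every read and every search is scoped to the requesting user, closing the “User A retrieves User B’s records” hole:

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical model-layer store: access control lives with the data.
class RecordStore {
    static class Record {
        final long id; final String ownerId; final String data;
        Record(long id, String ownerId, String data) {
            this.id = id; this.ownerId = ownerId; this.data = data;
        }
    }

    private final Map<Long, Record> records = new HashMap<>();

    void save(Record r) { records.put(r.id, r); }

    // Read enforces ownership. Returning empty rather than throwing
    // also avoids leaking whether the id exists at all.
    Optional<Record> findById(long id, String requestingUser) {
        return Optional.ofNullable(records.get(id))
                       .filter(r -> r.ownerId.equals(requestingUser));
    }

    // Search is access-controlled too: the "S" that SCRUD adds to CRUD.
    List<Record> search(String term, String requestingUser) {
        return records.values().stream()
                .filter(r -> r.ownerId.equals(requestingUser))
                .filter(r -> r.data.contains(term))
                .collect(Collectors.toList());
    }
}
```

The design point is that ownership filtering is not left to callers; because the model element owns persistence, it also owns the SCRUD checks, so no controller can forget them.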
How about the controller? The controller’s responsibilities include navigation and dispatch, so it’s probably reasonable to associate authentication and authorization with controller logic: decide if a user can access something before dispatching the access.
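One way to picture “decide if a user can access something before dispatching the access” is a toy front controller (plain Java; the class, route, and role names are invented for illustration):

```java
import java.util.*;
import java.util.function.*;

// Hypothetical front controller: AuthN and AuthZ are decided here,
// before any handler (and thus any model or view work) runs.
class FrontController {
    static class User {
        final String name; final Set<String> roles;
        User(String name, Set<String> roles) { this.name = name; this.roles = roles; }
    }

    private final Map<String, String> requiredRole = new HashMap<>();          // route -> role
    private final Map<String, Function<User, String>> handlers = new HashMap<>();

    void route(String path, String role, Function<User, String> handler) {
        requiredRole.put(path, role);
        handlers.put(path, handler);
    }

    String dispatch(String path, User user) {
        if (user == null) return "401 Unauthorized";             // authentication first
        String role = requiredRole.get(path);
        if (role == null) return "404 Not Found";
        if (!user.roles.contains(role)) return "403 Forbidden";  // authorization before dispatch
        return handlers.get(path).apply(user);                   // only now dispatch the access
    }
}
```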
And so it goes, element by element. Don’t fuss over enumerating every security responsibility exhaustively, or every possible application of a security concept to each pattern element, any more than you fuss over perfect security or boiling the ocean. Make an incremental start. In fact, confusion can lead to revelations.
Back around 2002, I helped an organization’s developers build a reference security architecture and compliant security toolkit. They were a Java EE shop and had made an early version of Struts 1.x their own. They’d created a tag library and script-based UI for the browser. They eagerly attacked the responsibility of listing out their toolkits’ patterns and attributing each its security responsibilities because (1) Struts is so straightforward, (2) these were experienced architects and dev leads, and (3) they were excited to be tackling security so proactively for their organization’s developers.
They encountered confusion immediately. Much like in the napkin drawing, one architect had indicated “input filtering” was the controller’s responsibility. “You can only trust the server,” she said. The dev lead had made it the view’s job. “The response time sucks if we do all that validation on the server,” he replied.
Their ensuing argument caused their revelation.
The developer was correct: The client-based view does have validation/filtration responsibilities, even in so coarse a napkin-based model. The view is responsible for an intuitive, low-latency user interface and experience. So forms and other inputs can check users’ data for correct format and semantics to (a) prevent confusion, (b) save server time in the case of an honest mistake, and (c) give the user quick, contextually clear feedback.
The architect was also correct: The server must apply all necessary checks in order to protect itself and others. Yes, view components do often live on the client and thus cause a host of “client-side trust” issues. However, now that the view has responsibility for format and semantic checks on well-intentioned users’ input, the server knows that any violation it sees made it past the UI controls and is not a user’s honest mistake: the sender has already subverted a “control.” With this extra bit of knowledge, and because the view handles the “well-behaved” user experience, controller logic is free to respond more robustly, perhaps without the user hand-holding or error handling that would be useful to an attacker.
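The server-side half of that division of labor can be sketched in a few lines (the `handleZip` endpoint and ZIP-code field are invented for illustration): because the view already validates for honest users, the server’s check rejects tersely instead of coaching the sender.

```java
import java.util.regex.Pattern;

// Sketch: the server re-checks format, but since the client-side view
// already enforced it for honest users, a violation here is treated as
// probing -- reject tersely, with no helpful per-field error messages.
class ServerSideCheck {
    private static final Pattern ZIP = Pattern.compile("\\d{5}");

    static String handleZip(String input) {
        if (input == null || !ZIP.matcher(input).matches()) {
            // a real implementation would also audit-log suspected tampering
            return "400 Bad Request";
        }
        return "OK";
    }
}
```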
Both architect and developer were thrilled. The developer built a responsive view implementation that helped users interact with the site effectively, and the architect designed and built a controller that responded to probing and attacks earlier and with confidence. The elements of their MVC framework worked together functionally and towards security’s goals. They had wins for scale and responsiveness too.
With a detailed understanding of a framework’s low-level design, the mapping between functional components and their security responsibilities can get more specific, as in the table that follows:
While the security responsibilities are high-level, they’re tied either to specific technologies or to the specific low-level design constructs (even classes) in a development framework where they can be addressed. Developers are often sharp problem solvers who need to know where they should consider which aspects of security. Knowing that, they often come up with a good answer, even if security doesn’t have one waiting for them.
Being specific about which security control goes where, and how, requires more information about the development framework being used and how developers are using it. Being this specific is immensely satisfying to development and security alike (e.g., “We implement input validation using our own package of Validators, which we maintain for the most commonly used data types. We recommend applying each through DataBinders,5 preferably using controller annotations”). That, however, is a topic of its own. As you can see, it’s practical to expect progress without getting that specific: even a cursory read of framework documentation and a coarse accounting of its design elements can yield a good starting point for making sure each element of a design bears appropriate security responsibilities.
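As an illustration only, a framework-agnostic stand-in (not Spring’s actual Validator/DataBinder API, which a real SpringMVC shop would use instead), such a maintained “package of Validators” for common data types might look like:

```java
import java.util.*;
import java.util.function.Predicate;

// Framework-agnostic sketch: one shared registry of validators keyed by
// common data types, so every team applies the same checks the same way.
class Validators {
    private static final Map<String, Predicate<String>> BY_TYPE = Map.of(
        "email",   s -> s != null && s.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+"),
        "usPhone", s -> s != null && s.matches("\\d{3}-\\d{3}-\\d{4}")
    );

    static boolean isValid(String type, String value) {
        Predicate<String> p = BY_TYPE.get(type);
        if (p == null) throw new IllegalArgumentException("no validator for " + type);
        return p.test(value);
    }
}
```

In a Spring application the same checks would be bound to request handling through the framework’s data-binding machinery rather than called directly, which is exactly the “align controls with the framework’s existing abstractions” advice in practice.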
1. A solid well-documented pattern appears simple, even obvious, but packs a ton of hidden design. This is why patterns are so valuable as communication tools—whether a design using them conforms or not.
For instance, consider the humble Singleton—it couldn’t be easier right? What does a Singleton “mean” in a virtual machine that supports garbage collection? In a cluster of collaborating nodes? How does the Singleton’s implementation—and the software that relies on it—perform in the face of these questions?
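A ten-line lazy Singleton makes those questions concrete: the static field pins the instance for the life of its classloader (so it is effectively never garbage-collected while the class is loaded), and each JVM in a cluster holds its own “single” instance, so any state it guards is not actually global.

```java
// The "humble" Singleton: one instance per class -- but only within one
// classloader in one JVM. In a cluster, every node has its own copy.
class Config {
    private static Config instance;      // static reference: reachable, and
    private Config() {}                  // thus uncollectable, once assigned

    static synchronized Config get() {   // lazy init, synchronized for safety
        if (instance == null) instance = new Config();
        return instance;
    }
}
```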
Take the time to really unpack patterns’ hidden design and meaning, and it will do wonders for your security analysis.
2. Isn’t it a bit audacious to suggest that frameworks will have docs, and that those docs will mention the patterns out of which the frameworks are built? In my experience? No. Framework developers, by the very nature of what they’re doing, already saw a pattern; it’s what drove them to write the framework. My experience, without meaning any disparagement, is that these folks like to (and consistently do) convey the patterns that drove the design of their work. If it’s not in the docs, I’d be surprised. But one can always check the code. Though some consider it bad form to name a component after the pattern it implements, this too seems a compulsion too great for programmers to resist. Perhaps my favorite comes from Synopsys static analysis code: the good ole AbstractPascalParserFactoryFactory class.6
3. Rather than the “security responsibility” listed here, some literature asks security folk to associate “security controls” with systems (NIST 800-53, for instance). Responsibility, however, is distinctly the right level of abstraction for this kind of design. It’s not until a concrete exercise in detailed design or implementation that a security responsibility can be resolved into the right control itself—be that a configuration, piece of custom code, open source, vendor product, or other.
4. Though MVC is decades old and out of favor, its explicit controller, which the developer takes part in implementing, makes it a clearer example. “Modern” frameworks implement MVVM or similar patterns by, in essence, pushing controller responsibilities more fully into the framework itself, exposing them only through configuration. This is fine (and frankly preferable to the overwhelming majority of developers) because one rarely needs to change the way in which functionality is dispatched.
5. Input validation abstractions, as well as those for AuthZ across a Java EE container and its ORM, are both fascinating examples of where security responsibilities arguably held by one or two parties create problems. Whereas the View/Controller ambiguity above ended well, with subresponsibilities unambiguously defined, rendering each component’s job easier, it often doesn’t end so well (like the husband and wife who each thought the other had picked their kid up from soccer practice). It was an AuthZ “husband and wife” play that caused the public data record breach mentioned above (5a). This is, in part, why Synopsys conducts an “ambiguity analysis” as part of each of its architectural analyses.
6. AbstractPascalParserFactoryFactory!? Are you freaking kidding me? You really mean to tell me that (1) you not only believe that you need an abstraction for different ways your software might create a way to create a Pascal parser, but (2) you felt this so strongly, you imbued this insanity into a class name!?7
7. As fate would have it, this class inherited from a parent that implemented the Singleton, albeit more discreetly.
John Steven is a former senior director at Synopsys. His expertise runs the gamut of software security—from threat modeling and architectural risk analysis to static analysis and security testing. He has led the design and development of business-critical production applications for large organizations in a range of industries. After joining Synopsys as a security researcher in 1998, John provided strategic direction and built security groups for many multinational corporations, including Coke, EMC, Qualcomm, Marriott, and FINRA. His keen interest in automation contributed to keeping Synopsys technology at the cutting edge. He has served as co-editor of the Building Security In department of IEEE Security & Privacy magazine and as the leader of the Northern Virginia OWASP chapter. John speaks regularly at conferences and trade shows.