So you understand the difference between bugs and flaws, and that the defect universe is split roughly 50/50 between the two. Awesome! (If you don’t yet understand the difference, here’s a great read about software flaws in application architecture that will explain it.)
You’ve also decided you want to start actively doing some sort of analysis to look for flaws. Again, awesome! But where to start?
Let’s look at a few considerations when deciding how you will start doing architecture analysis.
It’s possible you are going to do a type of analysis you haven’t done before. That can be exciting, scary, or both. In any case, I suspect there is a business-minded person who will want to know what benefit will justify the lost productivity. It’s important to have a plan describing what successful output of this new activity might look like. Here are some suggestions:
But here’s the punch line. No matter how you define success, you have to do this type of analysis because it finds defects that can’t be found with other techniques.
If you’re working at a company that only produces one product or piece of software, your choice is simple and you can jump to the next section (or read on for when this may apply to you in the future). For the rest of you, there may be a portfolio of applications to choose from. So, which software should you select for this analysis?
OK, so you have selected the software to be analyzed. How do you decide how deep and wide that analysis should be? Scoping the amount of analysis to do can actually be pretty tricky. But this is a problem you may have already addressed to some degree. If you are doing penetration testing, what made you decide ‘N’ days was enough? Budget? Time? Skill set of the tester? Tech stack of the application being tested? Deployment model? Whatever the selection process was, you will develop a process that makes sense for you and your organization. Likely selection criteria include: overall application risk, goals/concerns of the business unit, time to do the analysis, budget, skill set of the reviewer, compliance concerns, etc.
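One way to make those selection criteria concrete is to turn them into a simple weighted score that suggests how many reviewer days to budget. The sketch below is purely illustrative: the criterion names, weights, and 0–10 scale are assumptions, not a standard, and any real scheme should be tuned to your organization.

```python
# Hypothetical sketch: a weighted scoring helper for deciding how deep an
# architecture analysis should go. Criterion names and weights are
# illustrative assumptions, not an established methodology.

CRITERIA_WEIGHTS = {
    "application_risk": 0.35,
    "business_concerns": 0.20,
    "compliance": 0.20,
    "tech_stack_complexity": 0.15,
    "deployment_exposure": 0.10,  # e.g., internet-facing vs. internal-only
}

def analysis_depth(scores):
    """Combine per-criterion scores (0-10) into a single 0-10 depth rating.

    A higher rating suggests budgeting more reviewer time; missing
    criteria default to 0.
    """
    total = 0.0
    for criterion, weight in CRITERIA_WEIGHTS.items():
        total += weight * scores.get(criterion, 0)
    return round(total, 2)

# Example: a high-risk, internet-facing app with compliance obligations.
rating = analysis_depth({
    "application_risk": 9,
    "business_concerns": 6,
    "compliance": 8,
    "tech_stack_complexity": 5,
    "deployment_exposure": 9,
})
print(rating)  # → 7.6
```

The point is not the particular numbers but forcing the conversation: writing the weights down makes the trade-offs (risk vs. budget vs. reviewer skill) explicit and repeatable across the portfolio.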
But you still want to identify potential design flaws. So, where should you begin? If you have some internal metrics of defects found in the past, that is a great place to start. If you don’t have your own metrics, look to public metrics—you know, all those ‘Top 10’ lists out there. Common defects in web applications include broken authentication and improperly managed sessions. I would suggest you create your list of common security controls (e.g., authentication, access control checks, use of cryptography, etc.), and create a list of things to verify for each control. For example, for each control in your list you might:
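A control checklist like this can be as simple as a lookup table of controls and verification questions. The sketch below is a hypothetical example: the control names echo the ones mentioned above, but the specific verification questions are illustrative assumptions you would replace with your own.

```python
# Hypothetical sketch: a security-control checklist for architecture review.
# Control names come from the article; the verification questions are
# illustrative assumptions, not a complete or authoritative list.

CONTROLS = {
    "authentication": [
        "Is a vetted framework used rather than a custom implementation?",
        "Are credentials stored and transmitted securely?",
    ],
    "session management": [
        "Are session tokens unpredictable and invalidated on logout?",
    ],
    "access control": [
        "Is every sensitive operation authorized server-side?",
    ],
    "cryptography": [
        "Are standard, current algorithms and libraries used?",
    ],
}

def checklist(control):
    """Return the verification questions for a control (empty if unknown)."""
    return CONTROLS.get(control.lower(), [])

# Walk the whole checklist during a review session.
for control in CONTROLS:
    for question in checklist(control):
        print(f"[{control}] {question}")
```

Keeping the checklist in a structured form like this makes it easy to grow over time as your internal defect metrics reveal which controls your teams get wrong most often.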
Here are some important things to remember.
Finally, as with other analysis techniques, you are never really ‘done.’ When your penetration test is completed, do you think you found all reachable vulnerabilities? Highly unlikely. When you completed your code review (whether that review was done by a human or a machine), did you find every implementation bug? Highly unlikely. Similarly, no matter how much time you spend on this architecture analysis, it is unlikely you will find all the flaws that exist in your application. Make sure to temper expectations with all interested parties.
Good luck hunting for flaws!
Jim DelGrosso is a senior principal consultant at Synopsys. In addition to his overarching knowledge of software security, he specializes in architecture analysis, threat modeling, and secure design. Jim is the Executive Director for IEEE Computer Society Center for Secure Design (CSD). He also predicts that “OpenSSL will have at least one new vulnerability found in the next 12 months. You can pick the start date—it’s the ‘12 months’ that matters.” Jim relaxes and decompresses from work by playing with the dogs, listening to music, or just chilling out with a beer and a movie.