Software Integrity

 

Understanding architecture analysis and secure design review

So you understand the difference between bugs and flaws and that the defect universe is roughly a 50/50 split of bugs and flaws. Awesome! (If you don’t yet understand the difference, here’s a great read about software flaws in application architecture that will explain it.)

You’ve also decided you want to start actively doing some sort of analysis to look for flaws.  Again, awesome!  But where to start?

Let’s look at a few considerations when deciding how you will start doing architecture analysis.

How will you measure “success”?

It’s possible you are going to do a type of analysis you haven’t done before. That can be exciting, scary, or both. In any case, I suspect there is a business-minded person who will want to know what benefit justifies the lost productivity. It’s important to have a plan describing what successful output of this new activity might look like. Here are some suggestions:

  • Although the hope is to find flaws, be careful about promoting this as a success criterion. If you are just starting to do this type of analysis, it may take a while to get good at identifying flaws. And, as with ALL software security analysis techniques, just because you didn’t find any defects doesn’t mean no defects exist!
  • Because this type of analysis complements other activities you might be doing, you can possibly produce another document highlighting the efforts you are taking to make your software more secure.
  • Performing this analysis may give you better insight into how the application is designed. A high return on your investment might be identifying a flaw that, when corrected, eradicates tens or hundreds of bugs.

But here’s the punch line. No matter how you define success, you have to do this type of analysis because it finds defects that can’t be found with other techniques.

Select your software to analyze

If you’re working at a company that only produces one product or piece of software, your choice is simple and you can jump to the next section (or read on for when this may apply to you in the future).  For the rest of you, there may be a portfolio of applications to choose from. So, which software should you select for this analysis?

  • Web applications are a good place to start. Why? Because there is a wealth of information about attacks against this type of application. You may already be doing software security scans against this app (e.g., penetration testing, secure code review). This gives you an opportunity to look at the application from a different perspective and perhaps find flaws that those other techniques missed.
  • High-risk applications are also a good choice. You probably have some sort of application risk classification scoring system in your organization.  Use this risk classification score as one of your selection criteria.  Notifying a business owner that a flaw was found in a high-risk application can solidify the value of doing this type of analysis.
  • Not too big. Not too small. An application that does very little is unlikely to reveal a design flaw. A complex application may very well reveal design flaws; but if you were learning to ride a bike, would you attempt to ride in the Tour de France? Probably not. It’s a similar situation here—start with small steps.
  • If you are already doing other activities like penetration testing and code reviews, are you often finding some particular defect? If so, that might be an interesting application to analyze.  You may just find the root cause of a particular type of defect is a design flaw—possibly a huge win if you can squash several bugs with a design change.
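The selection criteria above could be combined into a simple ranking score. The sketch below is purely illustrative; the weights, field names, and the idea of counting recurring defect types are assumptions to adapt, not a standard formula:

```python
# Illustrative sketch: ranking candidate applications for a first
# architecture analysis. All weights and field names are assumptions.

def selection_score(app):
    score = 0
    if app["is_web_app"]:
        score += 2                      # wealth of known attack information
    score += app["risk_rating"]         # e.g., your 1-5 risk classification
    if 3 <= app["component_count"] <= 20:
        score += 2                      # "not too big, not too small"
    score += min(app["recurring_defects"], 3)  # recurring bug types may hint at a flaw
    return score

portfolio = [
    {"name": "payments-portal", "is_web_app": True, "risk_rating": 5,
     "component_count": 12, "recurring_defects": 4},
    {"name": "log-shipper", "is_web_app": False, "risk_rating": 2,
     "component_count": 2, "recurring_defects": 0},
]

best = max(portfolio, key=selection_score)
print(best["name"])  # payments-portal
```

Even a rough score like this forces you to write down *why* one application was chosen over another, which helps when the business asks how the target was selected.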

Decide on depth and breadth of analysis

OK, so you have selected the software to be analyzed. How do you decide how deep and wide that analysis should be? Scoping the amount of analysis to do can actually be pretty tricky. But this is a problem you may have already addressed to some degree. If you are doing penetration testing, what made you decide ‘N’ days was enough? Budget? Time? Skill set of the tester? Tech stack of the application being tested? Deployment model? Whatever the selection process was, you will develop a process that makes sense for you and your organization. Likely selection criteria include: overall application risk, goals/concerns of the business unit, time available for the analysis, budget, skill set of the reviewer, and compliance concerns.
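One way to make that scoping decision repeatable is a small effort estimator built from the criteria above. This is a sketch under stated assumptions: the baseline of two days, the multipliers, and the `reviewer_experience` categories are all made-up starting points you would tune to your own history:

```python
# Illustrative sketch: estimating review effort from scoping criteria.
# Baseline days, multipliers, and categories are assumptions to adapt.

def estimate_days(risk_rating, reviewer_experience, compliance_driven):
    days = 2.0                            # baseline for a small, low-risk app
    days *= 1 + 0.5 * (risk_rating - 1)   # higher risk -> deeper, wider review
    if reviewer_experience == "novice":
        days *= 1.5                       # first reviews take longer
    if compliance_driven:
        days += 1                         # extra time for evidence gathering
    return round(days, 1)

print(estimate_days(risk_rating=4, reviewer_experience="novice",
                    compliance_driven=True))  # 8.5
```

The point is not the exact numbers but that, as with penetration-test scoping, the inputs to the decision are explicit and can be revisited after a few reviews.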

Conduct the analysis

Since this post is all about getting started with architecture analysis, I’m not going to write about more advanced techniques like Threat Modeling or Architecture Risk Analysis.

But you still want to identify potential design flaws. So, where should you begin? If you have internal metrics on defects found in the past, that is a great place to start. If you don’t have your own metrics, look to public metrics (you know, all those ‘Top 10’ lists out there). Common defects include broken authentication and improperly managed sessions in web applications. I would suggest you create a list of common security controls (e.g., authentication, access control checks, use of cryptography), along with a list of things to verify for each control. For example, for each control in your list you might:

  • Confirm that, where the control is used, it complies with internal standards or industry best practices.
  • Verify the security control is appropriate for what it is trying to accomplish. A classic example of a flaw is using an integrity control where a confidentiality control should be used.
  • Determine if the security control is weak. A weak control does not automatically mean it has to be fixed.  Perhaps there is some compensating control to address whatever weakness exists.  Or, perhaps the failure of the control is an acceptable risk.
  • Understand where the control is located. This becomes interesting as you start to think of ways to bypass the control.  As with a weak control, this may or may not be OK.  But these are exactly the sort of things you want to find out and verify.
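The per-control checks above lend themselves to a simple checklist data structure you can reuse across reviews. A minimal sketch, assuming hypothetical control names and example questions (not an exhaustive or authoritative list):

```python
# Illustrative sketch: a per-control checklist for design review.
# Control names and check questions are examples only.

CONTROL_CHECKS = {
    "authentication": [
        "Does its use comply with internal or industry best practices?",
        "Is it the right control for the goal (e.g., authn vs. authz)?",
        "If it is weak, is there a compensating control or accepted risk?",
        "Where is it enforced, and can any request path bypass it?",
    ],
    "cryptography": [
        "Is the control appropriate (integrity vs. confidentiality)?",
        "Are the algorithms and key sizes current best practice?",
        "Where are keys stored, and who can reach them?",
    ],
}

def review_notes(control, findings):
    """Pair each checklist question with the reviewer's finding."""
    return list(zip(CONTROL_CHECKS[control], findings))

notes = review_notes("cryptography",
                     ["MAC used where encryption was needed",
                      "OK", "HSM-backed, access restricted"])
print(len(notes))  # 3
```

Capturing the answers alongside the questions gives you the beginnings of the checklists and shortcuts mentioned later, and makes the second review of the same application much faster.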

Get started … but start slowly

Here are some important things to remember.

  • Architecture analysis will find defects your other activities are not likely to find. Be prepared to explain this in the event you get questions like: “How come you didn’t find this doing a code review or penetration test?” or “But the application has been deployed for years, how come nobody has noticed this until now?”
  • If you are just starting to do this analysis, you will probably be inefficient at first and make some mistakes. Cut yourself some slack. As with many skills, the more you do it, the better and more efficient you will likely become. Over time you will develop shortcuts, checklists and other techniques to improve your efficiency.
  • Avoid staying in your comfort zone. You may have a particular field of expertise; something that you have a lot of knowledge about and really keep up to date on. Don’t get me wrong, that’s great. However, when doing this analysis, don’t forget to look for flaws in areas outside your field(s) of expertise.
  • Like other analysis techniques, this is an ongoing exercise. Be prepared to do this analysis more than once.

Finally, as with other analysis techniques, you are never really ‘done.’ When your penetration test is completed, do you think you found every reachable vulnerability? Highly unlikely. When you completed your code review (whether that review was done by a human or a machine), did you find every implementation bug? Highly unlikely. Similarly, no matter how much time you spend on this architecture analysis, it is unlikely you will find all the flaws that exist in your application. Make sure to temper expectations with all interested parties.

Good luck hunting for flaws!