“All software projects are guaranteed to have one artifact in common – source code. Because of this guarantee, it makes sense to center a software assurance activity around code itself.”
-Gary McGraw, Software Security: Building Security In
When an author sits down to write today, they have great tools available to automatically check their spelling and grammar. They no longer need somebody to tediously proofread their work for the mundane errors; the computer does that for them. Tools such as these won’t help if the author has nothing to say, but the simple errors that have long plagued writers are quickly found by modern tools. By the time a work is reviewed, the simple problems have been identified and resolved.
Modern developers have similar tools available to them to address common problems in code. Static analysis tools, also known as code scanners, rapidly look at code and find common errors that lead to security bugs. The tools identify the common problem patterns, alert developers to them and provide suggestions on how to fix the problems. These tools will not take care of underlying design flaws, but they often help developers avoid many security bugs in code long before that code is turned over to testers or is put into production.
It is axiomatic that all software has bugs and that some of those bugs will be security problems. Over the years, development-minded security people and security-minded developers learned to recognize these security bugs. The average developer is still learning these lessons, and even once the lessons sink in, mistakes still happen, just as they do with spelling and grammar. Those mistakes must be caught, and caught before a hacker finds them.
Many organizations recognize the value of code reviews for catching developer mistakes. Whether they are informal peer reviews or more formal reviews, having another set of eyes on code is invaluable. There is growing realization that these reviews must also look for security issues in the code. But all too often, the expertise required simply isn’t there. Most developers are very good at making code work, but their mindset is different from that of the hacker trying to break the code. The average developer just isn’t the best resource to look for security bugs.
Even when the security expertise is available to look for security bugs, manual code reviews are very time-intensive, and therefore quite expensive. Manually tracing data flows through the call stack is tedious and slow. Even the best security reviewer can quickly burn out and miss important bugs.
Luckily, some of the expertise to find security bugs in code can be turned into automated tools which can do the grunt work for us. Like spell check and grammar check before them, automated code review tools can tirelessly look for the simple bugs and alert us to their presence.
There are a variety of tools available to locate security bugs in code. They range from simple ‘glorified greps’ that look at the text of the code for bug patterns, to tools that shadow the build process, build their own representation of the code the way a compiler does, and analyze the program for problems at a deeper level. These tools range from free to very expensive. They don’t get tired. They don’t lose focus. They don’t want to go home at the end of the day. They can continue working night and day without any break.
The simpler tools look for patterns in the code and do not require that the code be complete enough to compile. These ‘glorified greps’ are quick and easy to use. They simply look at the text of the code for problem patterns, which range from the use of dangerous and deprecated functions that should no longer be called to more complex constructions such as database queries built through string concatenation. Since they don’t require the code to compile, they can be used as soon as the code writing process begins.
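As an illustration only, not modeled on any particular product, a pattern-based checker can be little more than a handful of regular expressions run over the raw source text. The rules and function names below are invented for this sketch:

```python
import re

# Toy "glorified grep": scan raw source text line by line for known-bad
# patterns. No compiler, no build step -- the code need not even parse.
PATTERNS = [
    # Dangerous/deprecated C functions that invite buffer overflows.
    (re.compile(r"\b(strcpy|strcat|gets|sprintf)\s*\("), "dangerous function"),
    # A SQL keyword on a line where a quoted string is concatenated:
    # the classic injection-prone query-building pattern.
    (re.compile(r"""(?i)\b(select|insert|update|delete)\b.*["']\s*\+"""),
     "SQL built by string concatenation"),
]

def scan(source: str):
    """Return (line_number, description) for every pattern match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, description in PATTERNS:
            if pattern.search(line):
                findings.append((lineno, description))
    return findings
```

Real scanners ship hundreds of such rules and far smarter matching, but the principle is the same, which is also why this class of tool never needs the code to compile.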
Some of these tools that examine the text of the code can run automatically. Code Sight checks code for issues anytime a file is opened or saved. Klocwork provides on-the-fly checking and will find bugs as the developer types. When tools automatically check a developer’s code, they provide a secondary benefit of actually teaching the developer to avoid making these common mistakes. Most other tools require the developer to explicitly launch the scan but they still provide tremendous value by finding common bugs.
The better versions of these simpler tools will provide remediation advice for fixing the bugs. These tools have their downsides; for example, a higher false positive rate (findings that are not really security bugs), which can waste a developer’s time. But the automated tools are a valuable piece of the puzzle, helping keep some of the common security bugs from ever being checked in, let alone reaching production.
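To make that remediation advice concrete, here is a sketch, using Python’s standard sqlite3 module purely for illustration, of the concatenation bug such tools flag and the parameterized-query fix they typically suggest:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "x' OR '1'='1"  # attacker-controlled input

# What a scanner flags: a query built by string concatenation.
# The attacker's quote characters become part of the SQL itself,
# so this query matches (and leaks) every row in the table.
unsafe_sql = "SELECT role FROM users WHERE name = '" + name + "'"
leaked = conn.execute(unsafe_sql).fetchall()

# The usual remediation advice: a parameterized query. The driver
# treats the input strictly as data, never as SQL syntax, so the
# same attack string matches nothing.
safe = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

The fix is mechanical enough that a tool can point straight at it, which is exactly what the better scanners do.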
Advanced static analysis tools move beyond simple pattern checking and take into account the data and control flows of code. They usually require the code to be more mature. Advanced static analysis tools can track data throughout the call stack and recognize issues such as external input being improperly used to build a database query after several function calls, or a buffer overflow that originates seven or eight layers deep in a call stack. Errors like this are difficult, tedious and time consuming for a person to find. The analysis is not quick for a computer either, but computers excel at tedious, detailed work like this and don’t get tired in the process. These tools can take anywhere from several minutes to several days to run, depending on the amount of code being scanned, but the level and complexity of the types of bugs they can find are worth the wait.
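The function names below are invented to sketch the kind of flow these tools trace: untrusted input hops through several innocent-looking helpers before being concatenated into a query. A text-pattern tool examining any one function in isolation sees little that is alarming; only following the data across the calls reveals the injection:

```python
def read_request_param(raw: str) -> str:
    # Layer 1: untrusted input enters the program here.
    return raw.strip()

def normalize(value: str) -> str:
    # Layer 2: harmless on its own, but it passes the taint along.
    return value.lower()

def build_filter(value: str) -> str:
    # Layer 3: the actual bug -- tainted data concatenated into SQL.
    return "name = '" + value + "'"

def build_query(raw: str) -> str:
    # The sink: by now the input has crossed three function boundaries.
    return "SELECT * FROM users WHERE " + build_filter(
        normalize(read_request_param(raw)))

# The attacker's quote characters survive the whole journey into the SQL.
query = build_query("  X' OR '1'='1  ")
```

Modeling exactly this propagation through the call stack is why the advanced tools need more mature code and more time than a simple text scan.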
Both types of tools, the ones that simply look at the text of the code and the ones that look deeper, can be run at a variety of times throughout a project’s life cycle. They can be run on demand at a developer’s work station or by a security-focused code reviewer. They can be run right away or be injected into the routine build process to produce results which can be evaluated after the build. They can be run often or at set points along the project’s life cycle. No matter when they are run, they will help you avoid introducing certain classes of security bugs before your code ever reaches testing or production.
Over time, the trends in the tool’s results become very useful information for program managers. If a certain type of bug keeps appearing more often than others, it becomes an obvious candidate for developer training. If the number of bugs does not go down over time, it may mean that the security bugs are not being fixed. (Although it can also simply mean there is much more code, or that the scanning rules have changed from one run to another.) Areas within code with the greatest number of bugs, or the highest severity bugs, are areas to concentrate on both in fixing the code and in testing it.
While these tools are very helpful, they do come with some issues. No matter the quality of the static analysis tool used, they all suffer from some level of false positives and false negatives. False positives are annoying, take time to deal with, and can erode developers’ confidence in the tool if too much time is wasted on them. False negatives (actual bugs missed by the tool) are potentially more damaging: they leave you believing code is safe when it actually contains security bugs the tool failed to recognize. In general, the benefits outweigh the problems; just be aware they exist.
Another potential downside is that tools have no understanding of the context of the software. Tools don’t know the business goals or the environment in which the code will run. Tools can only see the code in isolation. This means that those using a tool will have to place the tool’s findings into the big picture and see if the issue makes sense. If this is left to developers, they may tend to downplay the tool’s findings. If left to an external assessor, they may not know all the external factors that come into play. A happy medium must be found, and some level of effort must be made to place the tool’s results within the bigger picture.
Because of these types of issues, no static analysis tool can simply be run and its results taken as gospel. Instead, consider these tools as guides for your manual review efforts. When the tool finds something interesting, the resident security expert or a more senior developer should review it. Does the item of interest look like a real security issue? Is it more or less severe than the tool indicates? Are there other things in the code or the environment that the tool did not take into consideration? While this is a manual process, it is far less time consuming and less tedious than a full manual code review; the tools have taken on that part of the work. Tools free humans to spend their time on areas far more interesting than looking for needles in the haystack.
Like grammar and spell check, static analysis tools such as code scanners are good at finding the simple and recurring problems that are introduced into code as it is written. These tools automatically and tirelessly take on what is a tedious and time consuming part of a manual review process and make it relatively fast and accurate. Tools provide both the code assessor and the developer a set of focused findings so that they may concentrate their efforts on finding and fixing security bugs long before the code goes on to testing or into production.
Mike Lyman is a senior security consultant at Synopsys. He works with customers on secure code reviews, vulnerability assessments, and trains developers in secure development. Prior to Synopsys, Mike spent 12 years with SAIC and helped create their software assurance offering for DoD customers at Redstone Arsenal, AL; pioneering most of the processes and procedures used by the practice. He learned IT security in the trenches with Microsoft's network security team throughout the heady days of SQL Slammer, Code Red and Nimda. Prior to that, he was a software developer supporting US Army project offices at Redstone Arsenal and served on active duty as an officer in the US Army. He has been a CSSLP since 2008 and a CISSP since 2002.