When and how to support static analysis tools with manual code review

Mike Lyman

Jan 27, 2016 / 5 min read

Analyzing source code for security bugs gets a lot of attention these days because it is so easy to hand the job to a static analysis tool. The tools are reasonably fast, efficient, and pretty good at what they do, and most can be automated like other testing, which makes them very easy to use. But if you are not careful, you may still miss issues lurking in your code, or waste developers’ time on false positives the tool was not sure about. As good as these tools have become, they still need to be backstopped by manual code review from security experts who can overcome their limitations. Let’s examine how manual code review complements static analysis tools to achieve code review best practices.

Overcome the tool trade-offs

Like all software, static analysis tools are a collection of trade-offs. If they go for speed, the depth of their analysis suffers and you get more false positives. If they try to reduce false positives, they run slower. If tools are inexpensive, chances are there is less expertise and less original research behind them; if they have more of both, you help foot the bill by paying more. One tool may be very good at catching certain classes of bugs, another good at catching others; none is likely to catch them all. These trade-offs inevitably show up in the results.

Why manual code review is key

Manual follow-up to the tools can help overcome these trade-offs. The reviewer knows the tools, and knows which rules provide reliable results and which provide weak ones. They weed out problems before they ever reach a developer, shielding the team from the noise that all static analysis tools create. Manual review after a scan can also identify places where the tools can be tweaked to give more reliable results, whether through something as advanced as custom rules, through existing features that help the tools understand what various parts of the code are doing, or through simple configuration changes that help the tools run better.
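For a concrete sketch (the class and method names here are invented for illustration), consider an in-house sanitizer that a tool does not recognize. Until the reviewer teaches the tool what this method does, every call site downstream of it may keep getting flagged:

// Hypothetical in-house sanitizer. A tool that does not know this
// method strips traversal characters will keep flagging file names
// returned from it as tainted.
public final class PathSafety {

    private PathSafety() {}

    // Reduce a user-supplied name to a conservative whitelist of
    // characters, removing anything usable for directory traversal.
    public static String toSafeFileName(String userInput) {
        return userInput.replaceAll("[^A-Za-z0-9._-]", "");
    }
}

Most commercial tools offer some mechanism, often a custom rule or a cleansing-function designation, for registering a method like this as a sanitizer so the downstream findings go away; the exact configuration varies by tool.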

Context and environment

All tools suffer from a lack of understanding of the environment in which the software they are analyzing runs, and they lack any real understanding of the context of what they are looking at.

The tools can only look at the code. They cannot know that a particular database query hits a read-only schema containing data that has been carefully vetted outside the application and might be considered trusted, or at least more trusted than data coming from other sources. They cannot know about file system permissions that may make attacks like directory traversal extremely difficult. This knowledge of the environment is crucial for understanding the risk posed by any given finding, and that understanding is vital if you do not want to waste developers’ time on issues that may not need to be addressed or could be deferred to a future release.
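As a hedged illustration (the schema, table, and method names are invented), here is the kind of code where a taint-tracking tool lacks the environmental knowledge a reviewer has:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CountryLookup {

    // ref_country is a read-only reference schema, populated and
    // vetted outside the application. A taint-tracking tool cannot
    // know that, so it may treat display_name as untrusted and flag
    // downstream uses (say, writing it into an HTML page).
    public String displayName(Connection conn, int countryId)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT display_name FROM ref_country WHERE id = ?")) {
            ps.setInt(1, countryId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("display_name") : null;
            }
        }
    }
}

A reviewer who knows ref_country is loaded and vetted outside the application can confidently downgrade or dismiss findings rooted in this data source.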

Context is the other gap. Tools often have little understanding of what the code is actually doing, which is why so many flag every instance of a random number generator or every use of a date/time function for inspection by the reviewer. The tool cannot know whether a random number is rolling a die in a game or serving a security function; it may not be able to tell whether a date/time call is routine or is there to trigger a time-based backdoor or logic bomb. It simply flags them and leaves the reviewer to decide based on the context of the function’s use.
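A minimal sketch of the distinction in Java; many tools would flag both of these methods under the same weak-random-number rule, even though only one matters:

import java.security.SecureRandom;
import java.util.Random;

public class RandomContext {

    // Non-security use: simulating a six-sided die in a game.
    // java.util.Random is fine here, but many tools flag it anyway.
    public int rollDie(Random rng) {
        return rng.nextInt(6) + 1;
    }

    // Security use: generating a password-reset token. Here a weak
    // generator would be a real finding; SecureRandom is required.
    public byte[] newResetToken() {
        byte[] token = new byte[32];
        new SecureRandom().nextBytes(token);
        return token;
    }
}

Only someone who knows what each call is for can say that the first is harmless and that the second would be a genuine finding if it used java.util.Random instead of SecureRandom.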

Why manual code review is key

Manual reviewers, on the other hand, know all these things. They understand the operating environment and who the users are. They know the purpose of the software, from the individual function up to the application as a whole. They apply this knowledge to the tool’s findings to give developers actionable results rather than a collection of junk findings that waste their time, and to prioritize the issues that genuinely matter.

More interaction with developers

The tools cannot be very interactive with developers. While they provide remediation advice for what they find, the advice is usually generic and not tailored to the specific code in question. Tools also do not know who wrote the code and treat it all the same, so they cannot help identify developers or teams who need help with specific kinds of issues.

Why manual code review is key

The reviewer who follows up on the tools’ results can do better. They apply their knowledge of the environment and the context of the code to give developers more specific remediation advice. The reviewer helps developers understand the issues and answers questions in ways the tools cannot. Better still, they can join the developers at the whiteboard, work out detailed fixes for complex issues, and evaluate those fixes before the code is committed and ready for another scan.
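For example, where a tool’s generic advice stops at “use parameterized queries,” the reviewer can sketch the concrete rework with the team. A hypothetical before-and-after (the DAO, table, and column names are invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class OrderDao {

    // Flagged original: user input concatenated into SQL. The tool's
    // generic advice ends with "use parameterized queries."
    public List<String> findOrderIdsUnsafe(Connection conn, String customer)
            throws SQLException {
        String sql = "SELECT order_id FROM orders WHERE customer = '"
                + customer + "'";
        List<String> ids = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                ids.add(rs.getString("order_id"));
            }
        }
        return ids;
    }

    // The fix a reviewer might work out with the team at the
    // whiteboard: bind the value so the database never interprets
    // it as SQL.
    public List<String> findOrderIds(Connection conn, String customer)
            throws SQLException {
        List<String> ids = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT order_id FROM orders WHERE customer = ?")) {
            ps.setString(1, customer);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getString("order_id"));
                }
            }
        }
        return ids;
    }
}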

The reviewer also gets to know the code and who owns which parts of it. Over time, they may notice that particular developers or teams introduce the same security bugs again and again: Steve struggles with time-and-state issues, Bob has trouble validating input, and Sue often forgets to encode output for the proper context. This insight can guide training, whether one-on-one or for a broader group, which in turn keeps the same problems from recurring, reduces the time spent fixing the same old bugs, and makes tight schedules easier to meet.
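Sue’s case shows why context matters in the fix as well as the finding: the same value needs a different encoding depending on where it lands. A brief sketch using the OWASP Java Encoder as one example library (the rendering code itself is invented):

import org.owasp.encoder.Encode;

public class Greeting {

    // The same untrusted value needs a different encoding in each
    // output context: HTML body, HTML attribute, JavaScript string.
    public String render(String name) {
        return "<div title=\"" + Encode.forHtmlAttribute(name) + "\">"
                + "Hello, " + Encode.forHtml(name)
                + "</div><script>var user = '"
                + Encode.forJavaScript(name) + "';</script>";
    }
}

If Sue’s recurring mistake is HTML-encoding a value that ends up inside a JavaScript string, a tool’s generic “encode your output” advice will not catch it, but a reviewer’s targeted training can.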

Summing it up

Static analysis tools give us a relatively fast and reliable way to look for common security bugs in code, but by themselves they are not quite good enough.

With manual review of the results, we can improve tool configurations so the tools run faster and report more reliably, overcoming the inevitable trade-offs that would otherwise waste developers’ time on unreliable findings. The reviewer uses knowledge of context and environment to understand the findings and prioritize the fixes properly, and can help developers fix issues in more detail than the tools’ generic guidance allows. Over time, the reviewer can also spot trends in the findings and direct focused training to those who need it, reducing the time spent fixing the same problems over and over.

Tools are doing a good job, but code review best practices call for a security expert to review the results, the findings, and the remediation strategy in order to get the most benefit in both the short and the long term.
