More and more organizations are using static analysis tools to find security bugs and other quality issues in software long before the code is tested and released. This is a good thing, and despite well-known frustrations such as high false positive rates and relatively slow scan speeds, these tools are helping improve the overall security of software. Unfortunately, they also have a more dangerous limitation: they do not know modern frameworks as well as they know the base languages, and that gap creates a blind spot.
Frameworks are doing more and more of the basic work of providing an application's common functionality. This is a fantastic leap forward in productivity and in the ability to release software quickly, freeing up time to focus on the core business functionality of applications.
Sometimes these frameworks are clearly separate things (like Spring, for example) and sometimes they mix basic functionality with advanced features (like the .NET Framework, where the tools understand some features but not others). These frameworks are virtually exploding around us, offering many options to take care of the basic drudge work of application writing.
This explosion is happening fast and seems to be accelerating. New versions, and entirely new frameworks, appear faster than most of us can track. Static analysis tools do a decent job of keeping pace with the base languages. However, there is almost no way they can handle even a few of these frameworks well, let alone all of them. As frameworks take over more and more of the plumbing within applications, this inability to understand what they are doing creates a blind spot in which code gets scanned and nothing gets reported.
Frameworks create data flows that static analysis tools may be blind to. They introduce sources of tainted data that the tools know nothing about, so there is nothing to trace to the sinks in code where problems could occur. They may also introduce new sinks, but since the tools do not know of them, sources in code cannot be traced to them. And they provide functionality behind the scenes that the tools do not see at all.
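To make this concrete, here is a minimal, hypothetical sketch in plain Java (no real framework is used; class and parameter names are illustrative). It simulates the parameter binding a Spring-style framework performs before calling a controller method. Because there is no explicit call like `request.getParameter()`, a tool that only recognizes such calls as taint sources never marks the input as tainted, even though it flows straight into an injection-prone sink:

```java
import java.util.Map;

// Hypothetical sketch: a Spring-style controller. In a real framework, the
// boundParams map would be populated from the HTTP request behind the scenes,
// so a tool that does not model the framework sees no taint source here.
public class SearchController {

    // Simulates framework parameter binding calling into developer code.
    static String handleSearch(Map<String, String> boundParams) {
        String query = boundParams.get("q"); // invisible source: tool sees a plain Map read
        return buildSql(query);              // tainted data flows straight to a sink
    }

    // Classic injection sink: untrusted data concatenated into SQL.
    static String buildSql(String userInput) {
        return "SELECT * FROM items WHERE name = '" + userInput + "'";
    }
}
```

A scan of this file finds a sink with no source feeding it, so nothing is reported, yet the injection is real as soon as the framework wires the request in.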
If the static analysis tools cannot see it, they cannot report it. If they do not report it, organizations are left feeling secure when they are not.
If your tools are not reporting problems, how do you know if problems even exist?
One of the clearest indications of a problem is when a penetration testing team reports problems that static analysis tools are not reporting. Some of that, of course, comes from design problems and configuration issues rather than code implementation problems. If non-code problems can be ruled out, and static analysis gives the code responsible a clean bill of health, the tool most likely has a blind spot to correct.
Another way of finding suspected blind spots is straightforward analysis of the functionality the frameworks provide. If a framework routes web requests to the controllers your developers have written, external input is reaching those controllers. If it renders output that ends up in a web browser, that output should be properly encoded for the HTML context. Look at what comes out of the features the framework provides. If you see problems but your static analysis tools are not reporting them, you have a blind spot that needs correcting.
Verifying the blind spot can take some work. One approach is to inject test cases into the code with known problem sources and known sinks where dangerous things occur. These sources and sinks must be ones you know the static analysis tool can detect when it has a proper data flow. Once these test cases are created, run a static analysis scan to see what it detects. If nothing is reported, you have a false negative problem because the tool does not understand what is happening behind the scenes.
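One way such an injected test case might look, as a hedged sketch: pair a source that most tools model (here, a system property read) with a sink pattern most tools model (string-built SQL). The class and property names are invented for illustration; which sources and sinks your tool actually models depends on the tool, so check its documentation first.

```java
// Hypothetical canary: a commonly modeled source wired directly to a commonly
// modeled sink. If a scan of this file reports nothing, the tool's data flow
// analysis is not working on this code path and a blind spot is confirmed.
public class BlindSpotCanary {

    static String canaryQuery() {
        // Source many tools treat as attacker-influenced input.
        String tainted = System.getProperty("canary.input", "default");
        // Sink pattern many tools flag: tainted data concatenated into SQL.
        return "DELETE FROM audit WHERE id = '" + tainted + "'";
    }
}
```

Because both ends of this flow are ones the tool is known to handle, silence from the scanner isolates the problem to the flow itself rather than to unrecognized sources or sinks.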
The better static analysis tools provide the ability to create custom rules. When blind spots are present, use these custom rules to teach the tool to see them. These rules may have to identify sources that the tool does not already recognize. They may have to identify sinks. At other times, the rules may specify a passthrough of some type, where tainted data is passed along with no validation or sanitization. There may even be areas where the frameworks introduce tainted elements into data already deemed safe.
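The passthrough case is worth illustrating, since it is easy to miss. Custom rule syntax is tool-specific, so the sketch below shows only the kind of method such a rule would describe (a hypothetical framework helper, names invented): it looks like cleanup but performs no sanitization, so a passthrough rule must tell the tool that taint on the input survives to the return value.

```java
// Hypothetical framework helper that passes tainted data through unchanged.
// Without a custom passthrough rule, many tools drop the taint trace at this
// call boundary and the real source-to-sink flow is never reported.
public class FrameworkText {

    // Trims whitespace only: dangerous content in the input survives intact.
    static String normalize(String input) {
        return input == null ? "" : input.trim();
    }
}
```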
Sometimes that is not enough. Some framework functionality is not visible at all because there is no source code available. It may exist within the binaries the build process produces, or in third-party binaries. Some tools cannot handle these binaries at all, but others are designed to examine the binaries as well as the source code in order to better understand the real data and control flows. Even these tools may not understand what the framework is doing within those binaries. You may have to manually dig into the binaries or disassemble them, looking at the byte code or the MSIL to see what is happening. Depending on the tool you're using, custom rules may need to be crafted to describe what's going on at this level.
Once these custom rules are in place, there may still be issues. Some of the code may be private data, methods, or even classes not directly exposed to the outside world, and some tools may assume they do not have to look at these during binary analysis. It may prove necessary to create modified versions of the binaries in which the visibility of private structures has been made public. This task can range from rather simple to difficult depending on the languages used.
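Before going to the trouble of rebuilding binaries, it can help to confirm what a private structure actually holds. In Java, reflection offers a quick runtime check; the sketch below is illustrative (the class standing in for a framework binary and its field name are invented), and note that newer JDK module restrictions can limit `setAccessible` on code you do not control.

```java
import java.lang.reflect.Field;

// Stand-in for a class inside a framework binary with hidden state.
class HiddenConfig {
    private String connectionString = "server=db;user=admin";
}

public class PrivateInspector {

    // Reads a private String field by name via reflection. Useful when
    // confirming what binary-only code holds before deciding whether the
    // analysis tool needs visibility into it.
    static String readPrivateString(Object target, String fieldName) {
        try {
            Field f = target.getClass().getDeclaredField(fieldName);
            f.setAccessible(true); // bypass the private modifier at runtime
            return (String) f.get(target);
        } catch (ReflectiveOperationException e) {
            return null; // field missing or inaccessible
        }
    }
}
```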
And finally, you may even have to resort to getting the source code for these frameworks and integrating it into scans. This may require custom rules to help the static analysis tool understand how things are wired. Getting this code should be easy with open source frameworks, but difficult, if not impossible, for closed source ones. In those cases, you may be left with the binary analysis discussed above.
Static analysis tools are wonderful for helping to secure code and avoiding the introduction of some types of security bugs into software. It is important to realize that tools cannot do everything and often leave us blind to problems that may be lurking, especially with the way frameworks are used today to speed application development. If tools cannot report problems in things they do not understand, they leave us with a false sense of security. Luckily, these blind spots created by frameworks can be found, and workarounds can be created so that static analysis efforts do not tell organizations they are doing a good job when in fact they are not.