Posted by Synopsys Editorial Team on August 9, 2016
With the current increase in tool-based scans throughout the security industry, it becomes all the more challenging to identify the right issues and reduce false positives. For example, static and dynamic code scanning tools and plugins such as Fortify, AppScan, and FindBugs ship with a standard set of default rules for identifying issues. However, security analysts end up spending a disproportionate amount of time triaging and filtering out the false positives.
Over time, the entire set of findings starts to resemble the story of the boy who cried wolf. False positives can overwhelm the individual reviewing the tool's results, and amid all the junk they may miss the real issues. One possible solution for getting more fine-tuned results is to use a customized rule set instead of the default packs.
With an increasing user base, most of the latest tools come equipped with greater customization capabilities. This means the tools are designed so that users can tailor them to their environment and adopt them with greater ease.
Identify framework/IDE defaults. A default rule set is usually written to identify simple issues (low-hanging fruit, so to speak), along with issues that are generic across multiple platforms and frameworks. As a result, it generates a lot of false positives during the scanning process: findings that might look like legitimate issues but in practice are not. For example, there might be a simple rule that looks for the "TODO" keyword in all code files. Modern frameworks and IDEs often annotate new methods with these kinds of comments, which helps the developer easily refer back and provides contextual information. A tool-based review therefore produces many instances of the TODO keyword, most of which are not relevant and do not represent an attack surface.
One way to reduce such noise from the review results is to customize the rules. This ensures that the tool only provides results for relevant rules.
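As a rough illustration, tuning a rule set can be as simple as suppressing the noisy default rules before analysts ever see the results. The sketch below assumes a generic finding format (a list of dictionaries) and made-up rule IDs such as "todo-comment"; it is not the output schema of any particular tool.

```python
# Illustrative post-scan filter: drop findings produced by noisy default
# rules (e.g., a hypothetical "todo-comment" rule) so the review queue
# only contains results from rules the organization actually cares about.
# The finding format and rule IDs here are made up for demonstration.

SUPPRESSED_RULES = {"todo-comment", "commented-out-code"}

def filter_findings(findings):
    """Keep only findings whose rule ID is not in the suppressed set."""
    return [f for f in findings if f["rule_id"] not in SUPPRESSED_RULES]

findings = [
    {"rule_id": "todo-comment", "file": "UserDao.java", "line": 12},
    {"rule_id": "sql-injection", "file": "UserDao.java", "line": 48},
]

# Only the sql-injection finding survives the filter.
print(filter_findings(findings))
```

In practice most scanners let you achieve the same effect inside the tool itself (disabled rule packs, exclusion filters), which is preferable to post-processing because suppressed rules never run at all.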
Enterprise-oriented rules. Most organizations have their own home-grown frameworks or libraries that take care of typical functions such as input validation, database connections, and SQL statement creation. In such cases, the default rules that come with the tool may not be entirely relevant; they may need some customization to identify the right issues and to leave out the false positives.
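One common customization of this kind is teaching the tool that the enterprise's own routines neutralize tainted data. The sketch below assumes a finding carries a recorded data-flow path; the sanitizer names (e.g., "acme.validation.sanitize") are invented stand-ins for an organization's in-house library.

```python
# Illustrative check: treat an injection finding as a likely false
# positive when the tainted value passed through one of the enterprise's
# own trusted sanitizers. All names below are hypothetical placeholders.

TRUSTED_SANITIZERS = {
    "acme.validation.sanitize",
    "acme.db.SafeQueryBuilder.build",
}

def is_likely_false_positive(finding):
    """Return True if any step in the finding's data-flow path is a
    routine the organization has vetted as a safe sanitizer."""
    return any(step in TRUSTED_SANITIZERS for step in finding["dataflow"])

finding = {
    "rule_id": "sql-injection",
    "dataflow": [
        "request.getParameter",
        "acme.validation.sanitize",
        "Statement.execute",
    ],
}
print(is_likely_false_positive(finding))  # → True
```

Commercial tools expose this idea through their own mechanisms (e.g., declaring custom cleanse or validation rules) rather than a script, but the principle is the same: encode knowledge of the in-house frameworks so the scanner stops flagging code they already protect.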
Provide consistent guidance. Any large or medium-sized organization can be expected to have a few hundred developers working on various platforms and languages. Customization need not be restricted to the scanning rules; it can also extend to the guidance that the generated results provide to end users. For example, internal references are beneficial: they help keep the source code consistent across the entire organization and make fixes easily adoptable by other developers.
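The idea can be sketched as a simple mapping from rule IDs to organization-specific remediation text. The rule IDs and the internal wiki references below are illustrative placeholders, not real tool identifiers.

```python
# Illustrative sketch: overlay organization-specific remediation guidance
# on scan findings so every developer sees the same internal references.
# Rule IDs and wiki page names are hypothetical examples.

INTERNAL_GUIDANCE = {
    "sql-injection": "Build queries with the in-house safe-query library; "
                     "see internal secure-coding wiki page DB-SEC-01.",
    "xss": "Encode output with the shared encoding helpers; "
           "see internal wiki page WEB-SEC-03.",
}

def with_guidance(finding):
    """Return a copy of the finding carrying the organization's guidance,
    falling back to the tool's default message when no override exists."""
    guidance = INTERNAL_GUIDANCE.get(
        finding["rule_id"], finding.get("default_message", "")
    )
    return {**finding, "guidance": guidance}

finding = {"rule_id": "sql-injection", "file": "UserDao.java", "line": 48}
print(with_guidance(finding)["guidance"])
```

Keeping this mapping in one place means a policy change (say, a new approved library) updates the advice every developer sees, instead of each team interpreting the tool's generic text on its own.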
Customization is highly technical. It requires skilled technical support that is often difficult to find and costly to retain. Although the tools come equipped with manuals and guidance, writing or updating rules is often a time-consuming process. Even with all the manuals, the customization process depends on the tool: some systems offer visual step-by-step wizards, others accept Excel file uploads, and some demand complicated API access requiring advanced coding effort.
It’s also important to know that customization specialists need to have a very good understanding of the frameworks and libraries that the enterprise is using. It’s not just the knowledge of the tool that’s important. The specialist should also have a deep understanding of common issues, common false positives, and the ability to analyze them in the context of the enterprise’s libraries and APIs.
Reducing false positives is a battle that has been fought for years with many different strategies. However, in the interest of time and investment, one of the most effective and efficient options is to customize the tool's rules to align with organizational policies. The deciding factors pretty much boil down to which tools you are using and how much customization effort your firm is willing to undertake.