Posted by Meera Rao on May 20, 2016
You probably hear time and time again that static application security testing (SAST) should be incorporated into the application development and deployment processes. In fact, the software security touchpoints also emphasize using code review tools. But no SAST tool effectively addresses the threats to a development environment "out of the box." It is a misconception to believe that the cost of tool adoption lies primarily in getting the tool working in a build environment, configuring its runtime parameters, or its execution time. Let's explore how to get the most value out of SAST tools.
The first step toward successfully adopting a tool is to incorporate additional rules that more accurately reflect the business logic embodied by the scanned code. Every tool ships with a fixed set of rules representing available knowledge and best practices for the languages or APIs it can scan. An organization's own secure coding guidelines, however, are often captured in APIs developed in-house, and it is unrealistic to expect any code analysis tool to know such rules or be able to infer them.
Most commercial and open source SAST tools provide advanced facilities for creating custom rules that capture secure coding guidelines. This facility, however, brings additional requirements: you must understand each tool well enough to express the guideline in the exact form the tool can interpret. As with built-in rules, custom rules need to be tested sufficiently and fine-tuned before they are rolled out throughout the enterprise. The more accurately a custom rule reflects the actual requirement, the lower the eventual maintenance overhead.
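To make the idea of a custom rule concrete, here is a minimal sketch in Python, assuming a hypothetical in-house guideline that forbids direct calls to `eval` and `exec` in favor of a vetted helper. Real SAST tools express such rules in their own formats; this toy AST visitor only mimics, in miniature, what a custom rule encodes.

```python
import ast

class ForbiddenCallRule(ast.NodeVisitor):
    """Toy custom rule: flag calls to functions an in-house guideline forbids.

    The forbidden names below are an assumption for illustration only.
    """
    FORBIDDEN = {"eval", "exec"}

    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        func = node.func
        if isinstance(func, ast.Name) and func.id in self.FORBIDDEN:
            # Record (line number, offending function name)
            self.findings.append((node.lineno, func.id))
        self.generic_visit(node)

def scan(source: str):
    """Run the toy rule over a source string and return its findings."""
    rule = ForbiddenCallRule()
    rule.visit(ast.parse(source))
    return rule.findings

sample = "x = eval(user_input)\ny = len(user_input)\n"
print(scan(sample))  # one finding: line 1, 'eval'
```

The point is that the rule is only as good as how precisely it states the requirement: a rule that also flagged `len` here would generate noise that someone has to maintain.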
None of the tools available off-the-shelf are specific enough to meet the requirements completely across all frameworks and APIs. Fine-tuning is required in order to make the most efficient use of such tools.
By fine-tuning, I mean adjusting a tool's capabilities to meet application- or framework-specific objectives. Not all rules embodied by a tool are relevant or meaningful for a particular code base. Likewise, every tool ships with a default set of rules turned on, while others are turned off by default even though they may be relevant to the code being scanned. It is therefore imperative to fine-tune a tool so that its scanning aligns with the nature of the code. The degree of fine-tuning is dictated by the nature of the code base and depends on the results of test runs over an extended initial period.
The output produced by a tool during these test runs provides insight into the kinds of problems that need to be flagged, as well as the problems the tool flags that are irrelevant in the context of the code being scanned. For example, if the code in question is not multi-threaded during execution, any results that point to potential race conditions or time-of-check-to-time-of-use (TOCTOU) problems are irrelevant in the context of that code.
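The single-threaded example above can be sketched as a simple triage filter. This assumes hypothetical category names and a findings format; real tools expose equivalent suppression or rule-pack configuration instead of post-hoc filtering, but the logic is the same.

```python
# Categories assumed irrelevant for a single-threaded code base
# (names are hypothetical, for illustration only).
IRRELEVANT_FOR_SINGLE_THREADED = {"race_condition", "toctou"}

def filter_findings(findings, irrelevant=IRRELEVANT_FOR_SINGLE_THREADED):
    """Drop findings whose category does not apply to this code base."""
    return [f for f in findings if f["category"] not in irrelevant]

raw = [
    {"category": "sql_injection", "file": "dao.py", "line": 42},
    {"category": "toctou", "file": "io.py", "line": 7},
    {"category": "race_condition", "file": "cache.py", "line": 19},
]
print(filter_findings(raw))  # only the sql_injection finding remains
```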
Such fine-tuning requires an investment of time and effort. Initial scans of the code, followed by rescans that incorporate recommendations from those initial scans, are needed to pinpoint the rules worth maintaining and those that are irrelevant or in need of fine-tuning to be beneficial. There is a finite cost involved in maintaining each rule, so it is imperative that a tool be fine-tuned to an optimal extent in order to achieve the maximum return on investment per rule.
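The scan-rescan cycle can be reduced to a comparison of which rules fired in each run. A minimal sketch, with hypothetical rule names: rules still firing after a remediation pass merit continued maintenance, while rules that stopped firing were either fixed or tuned away.

```python
def triage_rules(initial_run, follow_up_run):
    """Compare the rule IDs that fired in two scan runs.

    Returns (persistent, resolved): rules still firing after remediation,
    and rules whose findings were fixed or tuned away.
    """
    persistent = initial_run & follow_up_run
    resolved = initial_run - follow_up_run
    return persistent, resolved

# Hypothetical rule IDs from two successive scans of the same code base.
initial = {"sql_injection", "xss", "toctou", "hardcoded_password"}
follow_up = {"sql_injection", "hardcoded_password"}

persistent, resolved = triage_rules(initial, follow_up)
print(persistent)  # rules to keep investigating
print(resolved)    # rules that no longer fire
```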
Having dedicated time and effort to fine-tuning a tool and customizing its rules to match your guidelines, it is important to integrate the tool into the SDLC. The utility of a SAST tool is not limited to testing immediately prior to deployment; code should be tested as it is being developed. By testing code in the smallest possible units, defects can be detected before the code moves on to the integration stage. The effort required to identify and isolate bugs in a large, integrated code base is far higher than that needed to find and resolve them in individual components at an earlier stage.
Maximum utility of a SAST tool is derived by using the tool as part of a comprehensive code review process during the SDLC. It is important to act on the findings of SAST tools and incorporate them into a software improvement process.
In the absence of a code review process, there is no accountability and, consequently, no incentive to improve code quality. Such a process enables business managers and code owners to better assess the impact of inputs such as training and awareness programs on the quality of output code, and reduce business risks by producing high confidence, high quality code.
Integrating SAST tools into the SDLC prevents security flaws from being discovered late in the development process, when they are riskiest and costliest to fix. Fitting the tools smoothly into your development cycles also increases productivity and effectiveness. Finally, it enables development teams to scan source code and systematically find and eliminate software security vulnerabilities.
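One common way to wire a SAST tool into the development cycle is a build gate: the pipeline fails when the scan reports more high-severity findings than an agreed threshold. A minimal sketch, assuming a hypothetical findings format; real tools typically produce a report (for example JSON) that a script like this would parse.

```python
def gate(findings, max_high=0):
    """Return 0 (pass) or 1 (fail) for a CI build based on scan findings.

    The severity labels and threshold policy are assumptions for
    illustration; adapt them to your tool's report format.
    """
    high = [f for f in findings if f["severity"] == "high"]
    return 0 if len(high) <= max_high else 1

# Hypothetical findings from a scan of the current commit.
findings = [
    {"rule": "sql_injection", "severity": "high"},
    {"rule": "weak_hash", "severity": "medium"},
]
print(gate(findings))  # 1: the build should fail
```

In a pipeline, the script's exit code (via `sys.exit(gate(...))`) is what actually stops the build; the threshold can start lenient and tighten as the backlog of findings is worked down.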