How to integrate SAST into the DevSecOps pipeline in 5 simple steps

Meera Rao

May 07, 2018 / 9 min read

Static application security testing (SAST) is the process of examining source code for security defects. SAST is one of the many checks in an application security assurance program designed to identify and mitigate security vulnerabilities early in the DevSecOps process.

Integrating SAST tools into DevSecOps processes is critical to building a sustainable program. The automation of SAST tools is an important part of adoption, as it drives efficiency, consistency, and early detection.

Time and again, clients have asked me how to integrate SAST tools into the DevSecOps pipeline. They ask key questions like these:

  • How do I manage false positives?
  • How do I triage the results?
  • What happens to new issues identified?
  • My scan takes 4–5 hours to complete. How can I use this tool in my DevSecOps pipeline?
  • What do you mean by “baseline scan”?

If these are the questions you are asking, and you’re concerned about integrating a SAST tool into your DevSecOps pipeline, read on.

Integrating SAST into the DevSecOps pipeline

[Figure: High-level workflow diagram showing the stages at which SAST tools run: the developers' IDE (pre-commit), commit time, build time, and test time]

The high-level workflow diagram above shows the stages at which SAST tools need to run: in your developers' IDE as a pre-commit check, and then at commit time, build time, and test time. Let's examine each phase in more detail.

Looking at the phases alone, however, doesn't tell you what you need to do to successfully integrate SAST tools into your DevSecOps pipeline. The rationale behind the solution presented here is that it blends the necessary degree of manual oversight with the appropriate level of automation to implement a cost-effective, proactive, and secure DevOps process in the existing pipeline, through five distinct activities.

[Figure: The five SAST integration activities]

Now let’s dig into each of the five integration activities:

Activity 1: Application onboarding

Onboarding should take place for each and every application. This is a one-time effort to be performed by a security analyst, along with some input from the development team.

How do you onboard an application into the DevSecOps pipeline?

Scan code and audit/triage results

A scan cycle starts with artifact gathering. Make sure that you have all the source code and libraries. Before starting the scan process, it’s a good practice to clean up the cache, as it may have temporary files from previous scans.

There are no hard-and-fast rules on how long the scan takes. In general, the scan time depends on the number of lines of code and the complexity of the application. The scan time is usually a small multiple of the build time; for instance, you might expect anywhere from 4 to 10 times the build time.

Once the scan is done, you’ll receive a scan report file that has all the results. Then there are two possible scenarios:

  • If this is the first scan of the source code, do a complete audit review of the findings, called triaging.
  • If this is a subsequent scan of the source code, upload the scan report file to the enterprise server. The enterprise server will merge the new scan with the previously audited/triaged scan results. The merge will highlight new issues that haven’t been audited. That way, you eliminate duplication of work.

Review the results using the enterprise server or in the SAST IDE. During the audit/triaging process, decide what bugs you’re going to fix, what bugs aren’t high priorities, what results are false positives, and so on.

Fixing the bugs is the last step. As bugs are fixed and new code is added, reiterate the cycle: Perform a differential or incremental scan of the code that was just changed, and start over from the beginning.

The first time you scan your application, you’re creating a baseline. This means you should look at every finding or finding group and take one of the following actions:

  • Tag the finding (“not an issue,” “suspicious,” etc.).
  • Suppress false-positive findings.
  • Hide those findings.

Once the baseline is established, make sure to upload the scan report file to the enterprise server.
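To make the baseline concrete, here is a minimal sketch of how the triage decisions might be captured, assuming the SAST tool can export findings as JSON. The `sast-cli` command mentioned in the comments, the field names, and the file layout are all placeholders rather than any vendor's actual format.

```python
import json

# Hypothetical JSON export of the first (baseline) scan, e.g. produced by a
# command along the lines of `sast-cli export --format json --out baseline.json`
# (the CLI and the field names below are placeholders, not a vendor format).
with open("baseline.json") as f:
    findings = json.load(f)["findings"]

# Statuses a reviewer can assign; these mirror the article's examples.
TRIAGE_STATUSES = {"not an issue", "suspicious", "exploitable", "suppressed"}

# Build the audited baseline: every finding gets an explicit triage decision.
baseline = {}
for finding in findings:
    # In practice a security analyst sets this per finding or finding group;
    # anything not yet reviewed defaults to "suspicious".
    status = finding.get("analysis", "suspicious")
    if status not in TRIAGE_STATUSES:
        status = "suspicious"
    baseline[finding["finding_id"]] = {"rule": finding["rule"], "status": status}

# Persist the audited baseline so subsequent scans can be merged against it
# (normally this lives on the enterprise server rather than in a local file).
with open("baseline-triage.json", "w") as f:
    json.dump(baseline, f, indent=2)
```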

On subsequent scans, focus on the delta, meaning the difference compared to the previous scan. This is where the merge feature comes in.

Merge subsequent scan results

Let’s now consider what it means to merge the results.

The enterprise server automatically merges your old scan with the newly uploaded one. Retrieve the merged scan file to identify newly introduced bugs. Also, push all subsequent scans to the enterprise server before breaking the build or pushing defects to your bug tracking system.

Let’s assume that you have been through a scan, broken the build, and created a defect in your bug tracking system, and the developer has fixed the bug. You have also marked some findings as false positives. You don’t want to go through the pain of reauditing the scan results all over again.

The solution is the merge feature of your SAST tool. Let’s say that you scan at week n and look at the results, where you find one false positive in bug 1 and a real bug in bug 2. You fix bug 2, you mark bug 1 as “false positive” in the scan file, and then you add more code to your project.

You’re now in week n+1, and you’ve done another scan. The first thing to do is to merge with the scan from week n.

Because you have given background knowledge to the SAST tool, it is going to remember that you suppressed bug 1, and it will also notice that you have fixed bug 2. It will mark them respectively as “suppressed” and “removed.”

Now, you also added code between week n and n+1 and introduced a new bug, bug 3. The tool is going to mark bug 3 as “new.”

You now have less work to do; you just need to review and fix bug 3.

Merge is an effective feature and is done automatically when you upload your scan results to the enterprise server. That will prevent duplication of work that you have already done.
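That merge behavior can be sketched in a few lines. The snippet below is illustrative only: it assumes each finding carries a stable fingerprint across scans (most SAST tools compute one), carries triage decisions forward from the week-n scan, and marks everything else as new or removed.

```python
def merge_scans(previous, current):
    """Carry triage decisions from the previous scan into the current one.

    `previous` and `current` map a stable finding fingerprint to a dict
    with at least a "status" key ("open", "suppressed", ...).
    """
    merged = {}
    for fp, finding in current.items():
        if fp in previous:
            # Seen before: keep the earlier triage decision, so a suppressed
            # false positive stays suppressed and needs no rework.
            merged[fp] = {**finding, "status": previous[fp]["status"]}
        else:
            # Introduced since the last scan: needs review.
            merged[fp] = {**finding, "status": "new"}
    for fp, finding in previous.items():
        if fp not in current:
            # Present last week, gone now: the bug was fixed or the code removed.
            merged[fp] = {**finding, "status": "removed"}
    return merged

# Week n: bug 1 was triaged as a false positive and suppressed, bug 2 was real.
week_n = {"bug1": {"status": "suppressed"}, "bug2": {"status": "open"}}
# Week n+1: bug 2 is fixed (absent), bug 3 was introduced with the new code.
week_n_plus_1 = {"bug1": {"status": "open"}, "bug3": {"status": "open"}}

result = merge_scans(week_n, week_n_plus_1)
# bug1 -> suppressed, bug2 -> removed, bug3 -> new; only bug3 needs attention.
print(result)
```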

Remove false positives

The most knowledgeable people to review the source code are the developers, assisted by a security analyst. A SAST tool can be seen as a virtual security analyst because it brings security knowledge to developers and reveals implementation bugs that they may have overlooked. Nevertheless, a tool is still a tool. Tools make mistakes as well. We call these mistakes false positives.

False positives occur when the tool reports as problems things that aren’t really problems at all.

By contrast, false negatives occur when the tool doesn’t find bugs that it should have. There is a simple reason for a large number of false positives: The tool cannot analyze like a human since it lacks part of the context in which the application lives; therefore, it must err on the side of caution and bring many potential issues to the user’s attention.

Not all SAST engines have the same accuracy. The semantic analyzer tends to report many false positives. The dataflow engine tends to be more accurate.

Before starting to look at the tool’s findings, make sure you know the context of the application. Knowing details about the application’s users, trust boundaries, sensitive information processed, security mechanisms implemented, input validation mechanisms in use, and so on will greatly increase your ability to eliminate false positives and determine the true severity of actual problems.

Customize rulesets

Customizing and fine-tuning the rules to suit a particular application is crucial to getting the most accurate and actionable results possible from the tool. With the knowledge gained from onboarding the application and triaging the results, this is the point at which to customize the rulesets.

Since injection attacks are the No. 1 attack type on the web today, being able to trace where data comes from and which APIs it traverses before being interpreted or consumed is crucial.

The taint may have different origins. For instance, user input, property files, the file system, and databases are all examples of taint sources.

SAST tools allow you to expand their rules and declare your validation routines as taint cleaner rules. Once you have customized rulesets for the validation routine, the tool won’t report the finding again if the taint has been cleaned before reaching the sink.
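To make the idea of a taint cleaner concrete, here is a minimal sketch assuming a hypothetical validation routine; the function names are made up, and the actual declaration of a cleaner rule is vendor specific and not shown here.

```python
import re

def sanitize_account_id(raw: str) -> str:
    """Hypothetical validation routine: only short digit strings pass through.

    Declaring this function as a "taint cleaner" in the SAST tool's custom
    ruleset tells the analyzer that data returned from it is no longer
    tainted, so flows through it are not reported at the sink.
    """
    if not re.fullmatch(r"\d{1,10}", raw):
        raise ValueError("invalid account id")
    return raw

def render_account_page(request_param: str) -> str:
    # Taint source: user-controlled input from the HTTP request.
    account_id = sanitize_account_id(request_param)  # taint cleaned here
    # Sink: data concatenated into HTML output. Without the custom cleaner
    # rule, the tool would likely flag this flow as cross-site scripting,
    # even though the value can only ever be digits.
    return "<h1>Account " + account_id + "</h1>"
```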

Automate the SAST tool in the DevSecOps pipeline

Once scanning, triaging, removing false positives, and customizing are completed, the next step is to automate the tool in the DevSecOps pipeline.

This includes using command line options to scan or using the plugins available for the build servers, customizing thresholds for breaking the build, configuring email notifications to developers who introduce issues, and automating bug tracking.
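Here is a minimal sketch of what such a gate might look like, assuming a generic command-line scanner (`sast-cli` is a placeholder, not a real tool) that writes findings to a JSON file; the threshold values, field names, and notification step are illustrative only.

```python
import json
import subprocess
import sys

# Run the scan from the build job. The command line is a placeholder for
# whatever your SAST vendor provides (a CLI or a build-server plugin).
subprocess.run(
    ["sast-cli", "scan", "--ruleset", "SAST02", "--output", "results.json"],
    check=True,
)

with open("results.json") as f:
    findings = json.load(f)["findings"]

# Build-breaking thresholds: no new critical or high findings allowed.
THRESHOLDS = {"critical": 0, "high": 0}

new_findings = [f for f in findings if f.get("status") == "new"]
for severity, limit in THRESHOLDS.items():
    count = sum(1 for f in new_findings if f.get("severity") == severity)
    if count > limit:
        # In a real pipeline you would also email the committer and open a
        # defect in the bug tracking system at this point.
        print(f"Breaking the build: {count} new {severity} finding(s)")
        sys.exit(1)

print("SAST gate passed")
```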

Activity 2: SAST01: Highly configured rulesets

Once you have onboarded, triaged, and customized rulesets, it is time to roll out the SAST IDE plugin to your developers. The SAST tool automatically detects vulnerabilities and provides just-in-time security guidance as developers type their code.

Developers can eliminate the most common security problems by having their code reviewed for security vulnerabilities and using the tool’s guidance to fix issues as they keep coding.

Since developers are constantly reviewing the findings, it is key to make sure the false-positive rate is as low as possible, or even zero. The triaged results help in rolling out just the rulesets that are true positives and will give developers confidence in the SAST tool.

Here are a few examples of rules that can be configured to run in the developers’ IDE:

Activity 3: SAST02: Client’s top 10 issues

If every developer ran the SAST tool religiously, there would be no need to run SAST again further down the DevSecOps pipeline. But that is never the case. So as soon as developers check their code into the version control repository, assuming the SAST tool is automated, the same scan rules configured in SAST01—plus a couple more, like the client’s top 10 issues—run completely automatically. The SAST tool should take no longer than 4–5 minutes to complete this scan.

So let us revisit the rules for the SAST02 checks:

  • SQL injection—same as SAST01
  • Cross-site scripting (stored)—same as SAST01
  • Cross-site scripting (reflected)—same as SAST01
  • Resource leaks—same as SAST01
  • Hard-coded credentials—same as SAST01
  • Session management
  • Configuration review

Activity 4: SAST03: OWASP Top 10 issues

At this point in the DevSecOps pipeline, you are moving toward the right of your process, and the activities take longer to run. This is when you could run your SAST tool with the OWASP Top 10 issues if your application is a web application. You might also run any customized rules you may have created for applications that use web services, REST services, or custom frameworks for which your SAST tool may not have comprehensive rules. A few issues, such as SQL injection and XSS, have already been scanned for in SAST01 and SAST02.

A few examples of rulesets for the OWASP Top 10 are listed below:

  • Malicious file execution
  • Insecure direct object reference
  • Information leakage and error handling
  • Command injection
  • Weak encryption
  • Denial of service
  • Path manipulation
  • Insecure cryptographic storage

Activity 5: SAST04: Comprehensive rulesets

This is the final phase, where you can perform scans with comprehensive rulesets. You can combine SAST03 and SAST04 and perform the checks together, or break them up further, as I do. The SLA here can range anywhere from 60–90 minutes to a few hours.

Some of the rules you can configure and run here are listed below:

  • XML injection
  • XPath injection
  • XML external entity
  • Open redirect
  • DOM XSS
  • Cookie injection
  • Expression language (EL) injection
  • Header injection
  • LDAP injection

The broader the set of rules you run, the longer it takes for the tool to complete the scan. That’s one of the reasons to try to divide and conquer the rules you run at each phase of the DevSecOps pipeline.
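One way to keep that division explicit is to record, per pipeline stage, which rules run and how long the scan is allowed to take. The sketch below uses the stage names and example rules from this article; the time budgets are illustrative, not prescriptive.

```python
# Divide-and-conquer mapping of pipeline stage to ruleset and time budget.
# Rule names and budgets mirror the examples in this article; adjust them
# to your own application, language, and framework.
SAST_STAGES = {
    "SAST01": {  # developers' IDE, pre-commit
        "rules": ["sql-injection", "xss-stored", "xss-reflected",
                  "resource-leaks", "hard-coded-credentials"],
        "budget_minutes": 1,
    },
    "SAST02": {  # commit time
        "rules": ["sql-injection", "xss-stored", "xss-reflected",
                  "resource-leaks", "hard-coded-credentials",
                  "session-management", "configuration-review"],
        "budget_minutes": 5,
    },
    "SAST03": {  # build time, OWASP Top 10 oriented
        "rules": ["malicious-file-execution", "insecure-direct-object-reference",
                  "information-leakage", "command-injection", "weak-encryption",
                  "denial-of-service", "path-manipulation",
                  "insecure-cryptographic-storage"],
        "budget_minutes": 30,
    },
    "SAST04": {  # test time, comprehensive
        "rules": ["xml-injection", "xpath-injection", "xxe", "open-redirect",
                  "dom-xss", "cookie-injection", "el-injection",
                  "header-injection", "ldap-injection"],
        "budget_minutes": 90,
    },
}
```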

Once this phase is completed, you should have good coverage of all your SAST rules. As I always say, there is no “one size fits all.” Based on the language, architecture, technology, and framework you use, you will have to carefully configure your rules and also be willing to write custom rules.

Key objectives

Let’s recap the activities once more.

For you to use the SAST tool effectively, the very first activity is to onboard, scan, triage, and customize the tool.

Once everything is onboarded, roll out the IDE plugins as pre-commit checks so developers have the tool at hand and can find and fix issues as they are introduced.

Next, the same set of rules and the client’s top 10 are run during the commit-time checks.

During build time, configure the OWASP Top 10.

And finally, configure comprehensive rulesets during test time.

Except for when the SAST tool is running in the IDE, all other checks break the build, send email notifications, and push defects to the bug tracking system.

The five proposed activities satisfy the following key objectives:

  • Allow developers to focus on fixing defects
  • Strategically align source code analysis earlier in development release cycles by using pre-commit checks in the developers’ IDE
  • Spur a preventative mind-set in the development organization
  • Enable security teams to maintain governance and centrally track the residual risk posture on an ongoing basis
  • Allow DevSecOps teams to integrate SAST tools without increasing time to production

Many developers who use SAST tools for the very first time go through a great deal of discovery and revelation. Trust me when I say that once the tools are onboarded and automated in the DevSecOps pipeline, developers will start paying more attention to the security of their code.
