
Application security testing is important, but can you use the results quickly?

Multiple AppSec tools lead to many results. Let Code Dx centralize your AppSec management to help you make sense of your data.


Most organizations have more than one application—some large enterprises have hundreds or thousands of applications in development and production. Each application is constantly updated to fix security issues, improve performance, and meet new customer demands, and an essential part of the update process is testing the application for security issues. Most organizations use several different application security testing tools to analyze their applications prior to release, so let’s consider what’s involved in using all of them.

Application security testing takes time

Not to overstate the obvious, but all application security testing takes time, and it takes place during different stages of the software development life cycle (SDLC).

Starting early in the SDLC, static application security testing (SAST) tools inspect the application’s source code to find errors that can result in vulnerabilities. This saves time in the long run by providing feedback as developers are still building the application. It can also detect common weaknesses in proprietary code during the commit, build, and testing stages. Many people want to know how long it takes to run a typical SAST test on a code base, and the answer is unsatisfyingly vague. It depends on the size and complexity of the application. Some estimates indicate that the scan time will be a small multiple of the build time. Different SAST tools have different features that help merge subsequent scan results and focus testing on the top issues for your organization and application. Similarly, the number of results produced depends greatly on the size and complexity of the application.
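The merge step mentioned above—carrying triage decisions forward between scans so engineers don’t re-review the same findings—can be sketched in a few lines. This is an illustrative Python sketch only; the finding fields and fingerprinting scheme are assumptions, not the format of any particular SAST tool.

```python
import hashlib

def fingerprint(finding):
    """Stable ID for a finding: rule + file + the flagged code snippet.
    Line numbers are deliberately excluded, so unrelated edits that shift
    code up or down don't make an old finding look new."""
    key = f"{finding['rule']}|{finding['file']}|{finding['snippet']}"
    return hashlib.sha256(key.encode()).hexdigest()

def merge_scans(previous, current):
    """Carry triage state (e.g. 'false_positive') from the previous scan
    onto matching findings in the current one; everything else is 'new'."""
    triaged = {fingerprint(f): f.get("status", "new") for f in previous}
    for f in current:
        f["status"] = triaged.get(fingerprint(f), "new")
    return current

# Hypothetical findings from two consecutive scans of the same codebase.
old = [{"rule": "SQLI", "file": "db.py", "snippet": "cursor.execute(q)",
        "status": "false_positive"}]
new = [{"rule": "SQLI", "file": "db.py", "snippet": "cursor.execute(q)"},
       {"rule": "XSS", "file": "views.py", "snippet": "return html"}]

merged = merge_scans(old, new)
print([f["status"] for f in merged])  # ['false_positive', 'new']
```

Real tools use far more robust matching (control-flow context, fuzzy hashing), but the principle is the same: only genuinely new findings should demand a reviewer’s time.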

Software composition analysis (SCA) tools also run early in the SDLC to track and analyze open source in a codebase. Open source components make up most of the code in modern applications, and, like any software, these components can include vulnerabilities. Synopsys’ 2021 “Open Source Security and Risk Analysis” (OSSRA) report showed that 75% of the codebases audited in 2020 contained open source components and 84% of those had components with at least one vulnerability. Because SAST only evaluates proprietary code, it’s essential to run SCA tools as well. SCA integrates across the SDLC, so you can set up a steady cadence of scanning for open source. It’s also essential to maintain visibility into the open source in your application, because new vulnerabilities may be disclosed or introduced at any time. The number of vulnerabilities uncovered is hard to predict, as it depends greatly on the open source components used, the size of the application, and the complexity of the codebase—in addition to the unpredictability of vulnerability disclosure itself.
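At its core, the SCA check described above is an inventory-versus-advisory match: compare each component and version in your application against known vulnerability disclosures. A minimal Python sketch, using hypothetical data shapes (real SCA tools consume lockfiles and query curated advisory feeds):

```python
def scan_dependencies(manifest, advisories):
    """Report each dependency whose pinned version appears in a
    known-vulnerable version list for that component."""
    findings = []
    for name, version in manifest.items():
        for adv in advisories.get(name, []):
            if version in adv["vulnerable_versions"]:
                findings.append({"component": name, "version": version,
                                 "advisory": adv["id"]})
    return findings

# Hypothetical inventory; CVE-2021-44228 (Log4Shell) is a real advisory.
manifest = {"log4j-core": "2.14.1", "commons-text": "1.10.0"}
advisories = {"log4j-core": [{"id": "CVE-2021-44228",
                              "vulnerable_versions": {"2.14.0", "2.14.1"}}]}

findings = scan_dependencies(manifest, advisories)
print(findings)  # flags log4j-core 2.14.1 against CVE-2021-44228
```

Because advisories arrive continuously, this match must be re-run on a schedule even when the application itself hasn’t changed—which is why visibility into your inventory matters as much as the scan itself.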

Run later in your SDLC, during the test/QA stage, interactive application security testing (IAST) solutions help you identify and manage security risks related to vulnerabilities in running web applications. Many IAST tools integrate into continuous integration (CI) and continuous delivery (CD) tools (commonly known as CI/CD tools) and can return results as soon as your developers have changed and recompiled the code and retested the running application. Because it can be integrated into the CI/CD pipeline, an IAST tool is part of the process and doesn’t add time to functional testing.

Dynamic application security testing (DAST) tests the application in a running state. Because it doesn’t look at source code, it’s not language- or platform-specific, and by evaluating the application from the outside in, it finds mistakes that other testing tools miss. DAST occurs late in the SDLC, during the test and production phases. If you continue to scan as your applications change, you can identify and remediate emerging issues. Regardless of how often you run your DAST tools, you’ll gain insight into your application’s vulnerabilities in its running state, helping you address them before a hacker has the chance to exploit them.

The last common component of application security testing is penetration testing, or pen testing, and it is primarily a manual process, not an automated one. Usually occurring just before an application is released, penetration testing relies on the expertise of ethical hackers to test running applications and find any vulnerabilities the rest of your application security testing tools may not have caught. The Open Web Application Security Project® (OWASP) works to improve the security of software and provides an excellent overview in its penetration testing guidelines. For smaller applications, or those not viewed as high risk, testing might take a week; for larger, more complex applications, and those warranting increased scrutiny, it can take significantly longer, and the number of results depends on many factors. Regardless of the time it takes and the number of results it turns up, penetration testing can provide new insight into how an application responds to different types of attacks.

Multiple tools, many results—how do you make sense of application security testing tool results?

There’s a reason you use all these tools: each has different strengths and helps you find different issues. It’s not a question of which one is better, although you may need to consider which ones are best for the applications your organization delivers. Applications critical to your company’s business goals or those subject to regulatory oversight will likely merit more testing than less critical applications with limited attack surfaces.

Regardless of which tools you’re using, you’ve compiled a lot of results. Testing the applications takes time, and now you need to think about how reliable the results are. These tools might generate 10,000 results that indicate a flaw or bug, and now it’s time to consider what percent are false positives and how many are duplicates. In addition, these results are spread across each of these tools—each with their own descriptions and severity scoring systems. How do you know which results pose the greatest risk, so you can focus your remediation efforts where they’ll make the biggest impact?
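The normalization and deduplication problem described above—tools with their own descriptions and severity scoring systems reporting overlapping findings—can be illustrated with a small sketch. The severity vocabularies, record fields, and matching key here are all hypothetical; this shows the idea, not how Code Dx or any specific product implements it.

```python
# Hypothetical severity vocabularies for two tools, mapped to one common scale.
SEVERITY_MAP = {
    "sast": {"error": "high", "warning": "medium", "note": "low"},
    "dast": {"P1": "critical", "P2": "high", "P3": "medium"},
}

def normalize(raw):
    """Translate a tool-specific finding into a common record."""
    return {
        "title": raw["title"].strip().lower(),  # case-fold so titles match
        "location": raw["location"],
        "severity": SEVERITY_MAP[raw["tool"]][raw["severity"]],
    }

def consolidate(raw_findings):
    """Normalize, then collapse duplicates reported by more than one tool,
    keeping the highest severity seen for each (title, location) pair."""
    rank = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    merged = {}
    for raw in raw_findings:
        f = normalize(raw)
        key = (f["title"], f["location"])
        if key not in merged or rank[f["severity"]] > rank[merged[key]["severity"]]:
            merged[key] = f
    # Highest-severity issues first, so remediation effort goes where it counts.
    return sorted(merged.values(), key=lambda f: -rank[f["severity"]])

raw = [
    {"tool": "sast", "title": "SQL Injection", "location": "api/db.py:42", "severity": "error"},
    {"tool": "dast", "title": "sql injection", "location": "api/db.py:42", "severity": "P1"},
    {"tool": "sast", "title": "Weak hash",     "location": "auth.py:7",    "severity": "warning"},
]

for f in consolidate(raw):
    print(f["severity"], f["title"], f["location"])
```

Three raw results collapse to two prioritized issues. In practice the hard part is the matching key—real correlation engines weigh rule IDs, code context, and vulnerability classes rather than trusting titles to line up—but the payoff is the same: one deduplicated, consistently scored list instead of 10,000 raw results.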

The Building Security In Maturity Model report (BSIMM12) indicates that “the median ratio of full-time SSG [software security group] members to developers is 0.74% (1 SSG person for each 135 developers).” This imbalance puts enormous pressure on application security engineers to determine quickly which reported vulnerabilities are truly significant, and then to resolve them. To do that, they need a single place to view and manage the results from all their testing tools, and a simple, consistent way to eliminate false positives and duplicate results, so they can prioritize and resolve their most critical issues.

Many AppSec testing tool vendors try to bring all the AppSec testing results into a single view. Unfortunately, that only works if you select all your tools from that one vendor, locking you into a single provider—regardless of whether they meet all your testing needs. Instead, you need a solution that brings in results from a wide variety of testing tools and sources, integrates seamlessly into the SDLC, normalizes results and eliminates false positives, and helps SSG teams prioritize and resolve critical issues quickly.

Learn how to centralize AppSec management with Code Dx


Posted by

Synopsys Editorial Team
