Application security testing tools help developers understand security concerns, but having too many tools can do more harm than good.
Good tools are essential for building just about anything.
But maybe that needs a bit more clarification: Not just good tools. They also have to be the right tools.
Because the old cliché, “if all you have is a hammer, everything looks like a nail,” is a warning that using the wrong tool can mess everything up. A hammer isn’t just useless for certain jobs; it will do real damage if you try to use it to make a precise cut in a 2×4 or to sweat a joint in a water pipe.
That’s true in the digital world as well. Using the wrong tool—or too many tools—to test the security of the software project you’re building can create more problems than it solves. It can slow down and frustrate developers by overwhelming them with defect notices that in some cases aren’t critical or even relevant to the project at hand. And when developers get overwhelmed with pointless notifications, they tend to start ignoring them—including the important ones. That undermines the whole purpose of application security testing (AST) tools.
Meera Rao ought to know. The senior director of product management for the Synopsys Software Integrity Group has for the past 14 years been helping clients avoid tool overload and make better use of those that are necessary.
The problem starts, Rao said, with too little planning and understanding of what different tools do and the controls they need, which leads to “tool creep.”
“Most customers start with SAST (static application security testing, an automated tool that looks for defects in code while it’s being written and before it’s running) because that is the easiest,” she said.
“Usually they do a pilot, or a bake-off, with two or three vendors based on how [the tool] works and what programming languages they want to use it for. AST tool vendors install and configure it, get it up and running, scan two or three projects, and that’s it.”
Things generally go smoothly for the next year or so, according to Rao. But eventually some developers start complaining about it.
“Maybe they’ve moved to a different technology or a different framework, or they’re doing DevOps and they say the tool is taking a long time and they want something else,” she said.
“And it’s like rinse and repeat. They go through the same process to get some other tool. But one part of the team is still happy with the original tool while the other part of the team starts using the second one. So now instead of one commercial tool for static analysis, you have two,” Rao said.
But predictably, each AST tool produces different results, which leads to a lot of squabbling and negotiating about what defects to fix. And after that gets settled, there’s still likely to be conflict if neither tool supports the “latest and greatest language, like Go, which a lot of companies are using,” she said.
That may prompt developers to bring in an open source SAST tool, which leads to another round of negotiations over who is responsible for maintaining it and configuring the rules.
“So now there are three tools,” Rao said. “And that’s just the story for static analysis. It’s been happening for many years and is happening even now.”
Another thing that prompts organizations to add tools is disaster headlines. The catastrophic breach of credit reporting giant Equifax in 2017 was enabled by the company’s failure to apply a patch to the popular open source web application framework Apache Struts—a software patch that had been available for several months.
So companies suddenly began scrambling to get software composition analysis (SCA) tools to scan their codebases for vulnerabilities in open source components.
“And then you see more and more kinds of breaches and attacks, and companies start saying, ‘SAST can’t find this, SCA can’t find it, maybe I need a DAST (dynamic application security testing) or IAST (interactive application security testing) tool,’ and they go buy that,” Rao said. “And then when all these are up and running, then comes your container and they want a tool for running container scans.”
Then weak passwords enable something like the major breach of LinkedIn, and because no software testing tools can cure that problem, “everybody says we have to start doing threat modeling, and they want to know what tool they need for that,” she said.
“So during the past 14 years I’ve been in this industry, tool overload has become very common,” Rao said.
And it happens mainly because organizations aren’t strategic about what AST tools they buy and how they deploy them—they simply react to headlines, a bit like panic buying.
“Companies are buying tools without even knowing the context,” Rao said. “They need to ask what an application is actually doing, what is the risk posture of the application, what tools do they need to run, and will the tools be able to find what they need to find. Because if you are always pushing more tools on your developers without knowing if they are necessary, at some point they start to ignore them. The tools produce too many results and developers can’t deal with it.”
How does Rao conduct “interventions” with companies suffering from tool overload? There are at least five aspects to examine.
An application or service that doesn’t face or interact with the outside world doesn’t need comprehensive testing.
“If you’re just building back-end messaging APIs, if there’s no front end, no UI, and nothing externally facing, you don’t need to run all the tools. You might need to do a manual code review, but that’s it,” Rao said.
“Does it have a database? A front end? What language are you using to write the code? Are you using Java or the Ruby on Rails framework?” Rao asked. “If yes, then SAST and DAST tools might truly help you. If you are using a lot of open source components, then SCA definitely is mandated. For externally facing applications with a lot of business risk, you need to run the AST tools.”
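Rao’s questions amount to a simple decision rubric: profile the application first, then pick tools to match. A minimal sketch of that idea follows; the profile fields, thresholds, and tool mapping here are illustrative assumptions drawn from her examples, not any vendor’s actual product logic.

```python
# Illustrative sketch of risk-profile-driven AST tool selection.
# Profile keys (externally_facing, has_front_end, etc.) are hypothetical
# names chosen for this example, not a real product's API.

def select_tools(profile: dict) -> set:
    """Return the set of AST activities an application's profile warrants."""
    external = profile.get("externally_facing", False)

    # A back-end messaging API with no UI and nothing externally facing
    # may only need a manual code review, per Rao's example.
    if not external and not profile.get("has_ui", False):
        return {"manual code review"}

    tools = set()
    if profile.get("has_database") or profile.get("has_front_end"):
        tools.update({"SAST", "DAST"})   # code-level and runtime testing
    if profile.get("uses_open_source"):
        tools.add("SCA")                 # scan open source components
    if external and profile.get("high_business_risk"):
        tools.update({"SAST", "DAST", "SCA"})  # full battery for risky apps
    return tools or {"manual code review"}

# A public web app with open source components gets the full set;
# an internal messaging API gets only a manual review.
web_app = {"externally_facing": True, "has_front_end": True,
           "uses_open_source": True, "high_business_risk": True}
backend_api = {"externally_facing": False, "has_ui": False}
```

The point of the sketch is the ordering: the profile questions are answered before any tool is bought or run, which is the opposite of the reactive “panic buying” pattern Rao describes.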
“Just because everyone says you need to run SAST in this stage of your pipeline, SCA as the next stage, and DAST in the final stage, you still need to ask if they are actually finding what they are supposed to find,” Rao said.
“If you configure Intelligent Orchestration (IO) correctly, it will know what tool to run, when to run it, and what rules to apply when it is running,” Rao said.
“With a web application, you probably need to run the OWASP Top 10. If it is a microservice or a back-end messaging API, then it can run by a different set of rules to pick up vulnerabilities specific to the technology and framework. It’s like building a road. You pave the road for everyone, but then you also provide shortcuts for applications that don’t need to take the main road to reach the destination.”
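The “paved road with shortcuts” idea can be sketched the same way: a default rule set for everyone, with lighter, technology-specific rule sets for applications that don’t need the full road. The mapping below is a hypothetical illustration (OWASP Top 10 is a real category list, but the rule-set names and groupings are assumptions for this example).

```python
# Illustrative mapping from application type to scan rule set, in the spirit
# of "pave the road for everyone, but provide shortcuts." The rule-set names
# here are hypothetical, not a real scanner's configuration keys.

RULE_SETS = {
    "web_app":      ["owasp-top-10"],                          # full paved road
    "microservice": ["injection", "auth", "deserialization"],  # tech-specific shortcut
    "backend_api":  ["injection", "deserialization"],          # no UI-related rules
}

def rules_for(app_type: str) -> list:
    # Unknown application types fall back to the full default rule set,
    # so nothing silently goes untested.
    return RULE_SETS.get(app_type, ["owasp-top-10"])
```

The fallback is the design choice worth noting: orchestration can safely offer shortcuts only if the default path is the comprehensive one.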
If you’re suffering from tool overload, the remedy is straightforward: get rid of tools that duplicate one another or that aren’t useful anymore. “Any time you have a tool overload, you have a defect overload,” Rao said.
“Just because there are AST tools out there, it doesn’t make sense to get all of them. When companies say with pride that they have three commercial static analysis tools, I say, ‘Yeah very good; how are you going to manage them?’ Because your development team is using one, your operations team is using another, and your security team is using the other. And then they push them all into some sort of dashboard for vulnerability correlation and deduplication, but then the correlation doesn’t work and there is a lot of back and forth between the development and security teams. It’s a lot of headache.”
Every company wants to get its product to market as quickly as possible. As Rao puts it, “If you delay, that’s where you lose the dollars.”
It’s worth the time and effort to eliminate tool overload. “That’s where the value of IO is,” Rao said. “It helps you configure the tools, realize what tools to run, when to run, how to run, and then also provides feedback to the developers—not all the results the tool finds, but just the ones they care about.”
“The end goal is the same—you want to build secure, high-quality, resilient software faster.”