The ROI of software security is difficult to calculate when the goal is to avoid a breach. Learn where to look for ROI in an AppSec program to maximize your investment.
A common declaration at security conferences is that if organizations invest in software security, it will pay dividends.
Indeed, “investment” implies a dividend. You put money, time, and effort into something—the bank, a stock, an exercise program, an education—with the hope or expectation that you’ll get more back than you put in. It’s a common enough concept to have its own acronym: ROI—return on investment.
But it can be difficult in some cases to measure ROI because it’s not always clear exactly what would have happened if you didn’t make the investment.
That’s especially true of software security. There’s little debate that organizations that “build security in” to their software are much less likely to get breached by hackers.
But how can you put a dollar amount on something that didn’t happen? Maybe you would have been breached 10 times. Maybe a thousand times. Maybe not at all. There’s no way to know for sure.
Which is why Meera Rao, senior director of product management in the Synopsys Software Integrity Group and leader of intelligent orchestration development at Synopsys, regularly hears something like the following from clients: “I invested so much in automation, tools, processes, and consulting. What did I get from all the time and dollars I spent?”
Perhaps the question would be better phrased as, “What did I avoid?” Because while the exact amount of ROI might be impossible to calculate, the risks of not investing in software security are obvious. There are daily headlines chronicling the disasters—costly disasters—that hackers can cause to individuals, organizations, public utilities, and governments by exploiting vulnerabilities in software.
That’s why Rao urges clients to focus on the fact that building more-secure software is no more an unwarranted cost center than the physical security of a building. And if there are cheaper, faster, and more effective ways to build that security in, the investment yields a worthwhile ROI.
The ROI of software security, according to Rao, comes from what she calls “four buckets”—strategy, people, process, and technology—that work in conjunction to help development teams build security into software without slowing them down.
Rao points out that significant ROI comes from automated security testing like static analysis, which tests code early in the software development life cycle (SDLC), before the code is running and when defects are vastly easier and less expensive to fix.
“We would show [clients] that for years they were finding all these defects at the end of the SDLC when somebody did a penetration test or we ourselves came and did a manual code review,” she said. “I used to do it 12 years back and it would take four to six weeks.”
Now, testing is performed much earlier, while the developer is writing or assembling code. “That reduces your remediation costs. It costs six to seven times as much to fix things late as to fix them early,” she said.
Another strategic move is to configure automated testing tools to flag only the defects that are critical or directly relevant to the application being built. “The tools find so many things,” she said. “But do you need to fix them all? Of course not.”
“Once you have these tools automated in your pipeline, you can prioritize all your defects. You can balance defect discovery, and along with it, the remediation, which can save a lot.”
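The kind of prioritization Rao describes can be sketched in a few lines. This is a hypothetical illustration, not the output format of any particular tool: the finding structure and severity names are assumptions.

```python
# Hypothetical sketch of severity-based triage of scanner findings.
# Lower rank = more urgent. The severity labels and finding dicts
# are illustrative assumptions, not a real tool's schema.

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def prioritize(findings, fix_threshold="high"):
    """Return findings at or above the threshold, most severe first."""
    cutoff = SEVERITY_RANK[fix_threshold]
    relevant = [f for f in findings if SEVERITY_RANK[f["severity"]] <= cutoff]
    return sorted(relevant, key=lambda f: SEVERITY_RANK[f["severity"]])

findings = [
    {"id": "A1", "severity": "low"},       # deferred, not discarded
    {"id": "A2", "severity": "critical"},
    {"id": "A3", "severity": "high"},
]
print([f["id"] for f in prioritize(findings)])  # ['A2', 'A3']
```

The low-severity finding isn’t deleted; it simply doesn’t block the team—which is the point of balancing discovery against remediation.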
Another source of ROI in software security is more effective development teams, achieved by training one or two “security champions” within each team. With champions in place, the full security team—the software security group (SSG)—doesn’t have to be as directly involved with developers. And that reduces conflict.
“Those champions help you reduce triaging time,” Rao said. “If the SSG has to triage all the results, we would have to go three or four times a year to a client, stay there for three or four weeks, and triage all the results and create a baseline. But if you have a developer embedded within each group who is a security champion, that person can do it.”
Developers also trust the members of their own team more than those from an outside team. “It reduces communication overhead, which brings ROI to an organization,” she said.
Instituting “policy as code” can automate rules, quality gates, and other policies along with testing.
The first area to automate is security testing tools. “You know the technology, you know the language, you know the framework, so now you’re optimizing each and every pipeline correctly for all of these tools,” Rao said.
Then there are rule policies. “All the manual decisions we used to make, like you have 10 days to fix critical vulnerabilities or you need to do a penetration test every 90 days—all of that is now automated and brought into your pipeline,” she said.
And last are gates. “All the organizations we talk with have quality gates, and now they’re also bringing security gates into the pipeline. They conform to whatever the organization decides—maybe you don’t want to break the build, you just want to notify someone if a serious vulnerability is found.”
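A security gate of the kind Rao describes can be expressed as a small policy function. This is a minimal sketch under stated assumptions—the policy names (“break_build” vs. “notify”) and finding format are invented for illustration:

```python
# Hypothetical sketch of a policy-as-code security gate. The policy
# names and finding format are assumptions for illustration only.

def evaluate_gate(findings, policy):
    """Decide what the pipeline should do, given scan findings."""
    criticals = [f["id"] for f in findings if f["severity"] == "critical"]
    if not criticals:
        return {"action": "pass", "notify": []}
    if policy == "break_build":
        return {"action": "fail", "notify": criticals}
    # The organization chose not to break the build: proceed, but alert.
    return {"action": "pass", "notify": criticals}

result = evaluate_gate([{"id": "V7", "severity": "critical"}], policy="notify")
print(result)  # {'action': 'pass', 'notify': ['V7']}
```

Because the decision lives in code rather than in a meeting, it runs identically on every build—which is what makes the gate auditable.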
As an example of how policy as code is used, Rao cites a financial client that, when a competitor released an update or new feature, would race to produce the same feature so it wouldn’t lose customers.
“Any time we found a critical vulnerability like XSS or SQL injection in that project, they wanted us to let them know about it, but they still wanted to go to production. They said they would put additional controls in place like updating the firewall rules,” she said. “So with an automated policy in the pipeline, any time there was a critical vulnerability, it would send an email notification to the firewall team.”
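The routing rule in that anecdote might look something like the following. Everything here—the team address, the vulnerability categories, the finding fields—is a hypothetical stand-in, not the client’s actual pipeline:

```python
# Hypothetical sketch of routing critical injection-class findings to
# the firewall team so compensating controls can be applied before the
# release ships. Addresses and category names are invented examples.

ROUTES = {
    "xss": "firewall-team@example.com",
    "sql_injection": "firewall-team@example.com",
}

def route_alerts(findings):
    """Build (recipient, finding id) notification pairs for criticals."""
    alerts = []
    for f in findings:
        if f["severity"] == "critical" and f["category"] in ROUTES:
            alerts.append((ROUTES[f["category"]], f["id"]))
    return alerts

print(route_alerts([
    {"id": "V1", "severity": "critical", "category": "sql_injection"},
    {"id": "V2", "severity": "medium", "category": "xss"},
]))  # [('firewall-team@example.com', 'V1')]
```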
Rao notes that anyone who listens to her webinars will know that tracking the results of all testing and policy implementations can help make everything more efficient. “Metrics is key,” she said.
“You need to be able to see the trends—are the developers fixing all of the issues earlier so we have fewer and fewer vulnerabilities, or not?” she said.
If the metrics show trend lines going in the wrong direction, “then you fine-tune. You fail fast, you go back, and you fine-tune. Maybe you had too many rules in static analysis,” she said.
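The trend check Rao describes boils down to asking whether open-vulnerability counts are falling release over release. A minimal sketch, with the per-release counts entirely made up:

```python
# Hypothetical trend metric over per-release open-vulnerability counts.
# The history data and the interpretation threshold are illustrative.

def trend(counts):
    """Average change per release; negative means things are improving."""
    deltas = [b - a for a, b in zip(counts, counts[1:])]
    return sum(deltas) / len(deltas)

history = [42, 35, 30, 24]   # open vulnerabilities in the last 4 releases
slope = trend(history)
print(slope)                                          # -6.0
print("fine-tune rules" if slope > 0 else "on track") # on track
```

A rising slope is the signal to fail fast and fine-tune—fewer static analysis rules, more training, or both.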
It’s important, however, to use metrics selectively. Too many things to fine-tune from too many directions, just like too many notifications from testing tools, can overwhelm developers.
“We used to give a dump of all these metrics to the developers but now, even though we are still giving them all of the issues that we find, it’s not all at once,” she said.
“If one metric is in a PDF file, another in Jira, and a third in SonarQube, then you need a person to come in every month to gather all the metrics to showcase to your C-level executives,” she said.
“Initially it will take some time for you to decide what your dashboard should look like, but then you will be able to measure and fine-tune. Maybe you are finding the same things you were finding in the beginning, which would mean you might need to do a lot of training.”
Rao notes that although setting up that kind of metrics analysis can take a lot of time at the start—as much as 60 hours for one module—a maturity action plan (MAP) can cut that drastically. “It can take two to three weeks to create a MAP and run a pilot,” she said.
“But once you have that pilot completed and we know the languages, the tools, technology, and everything else, then the rollout is only two to three hours per application. You see a 900% efficiency improvement in static analysis. You see 400% efficiency improvement in dynamic analysis,” Rao said.
Overall, the ROI comes through “faster feedback, making sure you are able to measure, making sure they can remediate faster, and then bringing all those manual decisions, making sure that your process is well-defined,” she said.
Of course, you still will likely never know how much money and headaches you saved by preventing cyber attacks.
But seriously, you don’t want to know.
Taylor Armerding is an award-winning journalist who left the declining field of mainstream newspapers in 2011 to write in the explosively expanding field of information security. He has previously written for CSO Online and the Sophos blog Naked Security. When he’s not writing he hikes, bikes, golfs, and plays bluegrass music.