Instituting “policy as code” can automate rules, quality gates, and other policies along with testing.
The first area to automate is security testing tools. “You know the technology, you know the language, you know the framework, so now you’re optimizing each and every pipeline correctly for all of these tools,” Rao said.
Then there are rule policies. “All the manual decisions we used to make, like you have 10 days to fix critical vulnerabilities or you need to do a penetration test every 90 days—all of that is now automated and brought into your pipeline,” she said.
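Rules like these can be expressed directly in code so the pipeline enforces them automatically. A minimal sketch, with the policy values taken from the examples above and all function names purely illustrative:

```python
from datetime import date

# Illustrative policy values drawn from the examples above.
POLICY = {
    "fix_sla_days": {"critical": 10},   # days allowed to fix, by severity
    "pentest_interval_days": 90,        # maximum age of the last pen test
}

def sla_violations(findings, today):
    """Return findings whose age exceeds the fix SLA for their severity."""
    overdue = []
    for f in findings:
        sla = POLICY["fix_sla_days"].get(f["severity"])
        if sla is not None and (today - f["opened"]).days > sla:
            overdue.append(f)
    return overdue

def pentest_due(last_pentest, today):
    """True if the last penetration test is older than the allowed interval."""
    return (today - last_pentest).days > POLICY["pentest_interval_days"]
```

Once checks like these run on every build, the "manual decisions" Rao describes become pipeline facts rather than calendar reminders.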
And last are gates. “All the organizations we talk with have quality gates, and now they’re also bringing security gates into the pipeline. They conform to whatever the organization decides—maybe you don’t want to break the build, you just want to notify someone if a serious vulnerability is found.”
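A security gate of that kind can be sketched as a pipeline step that maps scan results to whatever action the organization has chosen, failing the build or only raising a notification. The function and field names here are assumptions for illustration, not any particular tool's API:

```python
def evaluate_gate(findings, break_build=False):
    """Apply a security gate: fail the pipeline or only notify,
    depending on what the organization has configured."""
    serious = [f for f in findings if f["severity"] in ("critical", "high")]
    if not serious:
        return "pass"
    # The same findings produce a different outcome under different policies.
    return "fail" if break_build else "notify"
```

The policy decision (break the build or not) lives in configuration, so changing it does not require touching the scan tooling itself.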
As an example of how policy as code is used, Rao cites a financial client that, when a competitor released an update or new feature, would race to produce the same feature so it wouldn’t lose customers.
“Any time we found a critical vulnerability like XSS or SQL injection in that project, they wanted us to let them know about it, but they still wanted to go to production. They said they would put additional controls in place like updating the firewall rules,” she said. “So with an automated policy in the pipeline, any time there was a critical vulnerability, it would send an email notification to the firewall team.”
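That notification policy might look something like the sketch below. The team address and field names are hypothetical; the messages are built with the standard library's `email.message` and would be handed to an SMTP client or mail gateway to actually send.

```python
from email.message import EmailMessage

FIREWALL_TEAM = "firewall-team@example.com"  # hypothetical address

def critical_vuln_notifications(findings):
    """For each critical finding (e.g., XSS or SQL injection), build an
    email to the firewall team instead of blocking the release."""
    messages = []
    for f in findings:
        if f["severity"] != "critical":
            continue
        msg = EmailMessage()
        msg["To"] = FIREWALL_TEAM
        msg["Subject"] = f"Critical vulnerability in {f['app']}: {f['type']}"
        msg.set_content(
            "Release is proceeding to production. Please review and update "
            "firewall rules as a compensating control."
        )
        messages.append(msg)
    return messages  # send via smtplib or the organization's mail gateway
```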
Rao notes that, as anyone who has listened to her webinars will know, tracking the results of all testing and policy implementations helps make everything more efficient. “Metrics is key,” she said.
“You need to be able to see the trends—are the developers fixing all of the issues earlier so we have fewer and fewer vulnerabilities, or not?” she said.
If the metrics show trend lines going in the wrong direction, “then you fine-tune. You fail fast, you go back, and you fine-tune. Maybe you had too many rules in static analysis,” she said.
It’s important, however, to use metrics selectively. Too many things to fine-tune, coming from too many directions, can overwhelm developers, just as too many notifications from testing tools can.
“We used to give a dump of all these metrics to the developers but now, even though we are still giving them all of the issues that we find, it’s not all at once,” she said.
A single dashboard also makes it simpler to track metrics from manual activities like pen testing and manual code review.
“If one metric is in a PDF file, another in Jira, and a third in SonarQube, then you need a person to come in every month to gather all the metrics to showcase to your C-level executives,” she said.
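Consolidating those sources usually means pulling each tool's counts into one summary. A rough sketch of the merge step, where each input stands in for whatever the Jira or SonarQube API (or a parsed PDF report) returns; nothing here reflects a real client library:

```python
from collections import Counter

def merge_metrics(*sources):
    """Combine per-tool severity counts into one dashboard-ready summary.
    Each source is a mapping like {"critical": 2, "high": 5}."""
    total = Counter()
    for counts in sources:
        total.update(counts)  # sums counts for matching severities
    return dict(total)
```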
“Initially it will take some time for you to decide what your dashboard should look like, but then you will be able to measure and fine-tune. Maybe you are finding the same things you were finding in the beginning, which would mean you might need to do a lot of training.”
Rao notes that although setting up that kind of metrics analysis can take a lot of time at the start—as much as 60 hours for one module—a maturity action plan (MAP) can cut that drastically. “It can take two to three weeks to create a MAP and run a pilot,” she said.
“But once you have that pilot completed and we know the languages, the tools, technology, and everything else, then the rollout is only two to three hours per application. You see a 900% efficiency improvement in static analysis. You see 400% efficiency improvement in dynamic analysis,” Rao said.