Developer-Driven Security (noun): the practice of building secure applications starting in development; a phenomenon where engineers take the lead in securing their source code
The application security landscape has seen immense changes within the last decade. New devices have popped up left and right, each purporting to make our lives easier. As the world becomes more reliant on these internet-connected devices, the consequences of trusting our most sensitive information to the internet have become increasingly dire. High-profile security breaches are the norm.
Today’s hackers are sophisticated enough to attack not only devices connected to public networks, but also those secured behind private connections: smartphones, home security systems, even toys. It doesn’t seem like much is being done about it, either: an HP study reported that 70% of IoT devices contain serious security bugs. And when a breach does happen, it’s too often the developers who take the blame.
Unfortunately, there aren’t many security tools built for developers to combat this. If anything, they’re handed a big-box solution that was sold to their CISO over drinks on the 19th hole… and those “solutions” too often deliver little more than volumes of false-positive reports from a sprint that ended 10 weeks ago. Retrofitting patches on all of those bugs just isn’t realistic.
Not that long ago, in a land far, far away, developers used to view quality as a hindrance to their production in much the same way they now view most security measures. They didn’t need to worry about the quality of their code—that was why they had QA. But they found that thorough QA testing led to lots of bugs that could have been stamped out earlier, saving both teams time. Now, unit testing is as much a part of development as writing code. We need to treat security the same way.
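The same shift can happen with security: checks can live right alongside ordinary unit tests, so a vulnerability fails the build instead of surfacing in a pen-test report months later. A minimal sketch in Python (the `sanitize_username` helper and its rules are hypothetical, purely for illustration):

```python
import re
import unittest


def sanitize_username(raw: str) -> str:
    """Allow only alphanumerics, dots, and underscores (1-32 chars)."""
    if not re.fullmatch(r"[A-Za-z0-9._]{1,32}", raw):
        raise ValueError("invalid username")
    return raw


class SecurityTests(unittest.TestCase):
    """Security assertions that run with every ordinary test pass."""

    def test_accepts_normal_input(self):
        self.assertEqual(sanitize_username("alice_01"), "alice_01")

    def test_rejects_injection_attempt(self):
        # Quotes, semicolons, and spaces all fall outside the allow-list.
        with self.assertRaises(ValueError):
            sanitize_username("alice'; DROP TABLE users;--")


if __name__ == "__main__":
    unittest.main()
```

The point isn’t this particular validator; it’s that the security expectation is written down where developers already look, and it runs on every commit.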
The later in the development process an issue is found, the longer it takes to identify the exact nature of the problem and the more code that needs to be revised (or worse, rewritten) to fix it. According to the National Institute of Standards and Technology, it’s up to 30x more expensive to fix a vulnerability during post-production than during the design, requirement identification, and architecture stages.
It makes good sense to enable development teams to suss out the risky code from the strong, but few companies are doing much about it. According to the 2016 State of Application Security report, only 30% of organizations surveyed require their development teams to do security testing of any kind.
So what’s stalling the developer-driven movement? David Kennedy of TrustedSec explains: “Frankly, I think it’s that they haven’t yet had an incident serious enough for the penny to drop that developer training is actually a worthwhile investment. The problem we continue to face is that whilst the investment in security is often a very tangible figure, the return and in particular the likelihood of it paying dividends is far more hypothetical. It often takes a breach for the ROI on developer training to be realized.”
It’s time to flip the script. Developers are a uniquely sharp crowd: eager to learn, but most importantly, eager to find smarter ways to do their job. All they need are the right tools and training to do it.
Unless you’re a scientific anomaly, it’s going to take you longer to do something twice than it would take you to do it just once. At its core, developer-driven security is about writing secure code from the very beginning, because you’ve already learned how to do it and you’re continuing to learn and improve as you go.
Security tools like Code Sight work directly in the developer’s IDE, highlighting potentially risky snippets of code before they ever leave the desktop. That means each dev learns secure coding practices while on the job, and gets an incredibly tight feedback loop on the work they’ve already done.
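To make “risky snippet” concrete, here is the classic pattern such tools flag, sketched in Python with sqlite3 (the table and function names are illustrative, not taken from any particular tool):

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Risky: string interpolation lets attacker-supplied input
    # rewrite the query itself (SQL injection).
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()


def find_user_safe(conn: sqlite3.Connection, name: str):
    # Safe: a parameterized query treats the input strictly as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Passing `"' OR '1'='1"` as the name makes the unsafe version return every row in the table, while the safe version correctly returns nothing. An in-IDE tool can flag the first function the moment it’s typed, which is exactly the feedback loop developer-driven security is after.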
Developer-driven security is more than a new set of skills for engineers; it means stronger, safer applications for users everywhere.