When it comes to computer security, software security training can be a controversial subject. We're not sure why; perhaps what we're seeing is an artificial controversy trumped up by pundits. Some of them argue (lamely, in our view) that training is completely useless. We disagree.
Let's make this as clear as we can: we believe that software security training is an essential and integral part of a software security initiative. We believe this based both on our own experience as a firm that offers an extensive software security training curriculum and on the data reported in the BSIMM. That said, we don't believe training is the be-all and end-all of software security. Training is simply part of a much bigger picture, and it's a part you can't leave out.
Seasoned senior executives don't worry about whether to do training at all (they know they should), but rather how much time and effort to pour into it. They think about this in terms of cost/benefit. Developers are expensive people whose time is valuable. When you take them offline for training, they don't cut you any new code or work on their huge pile of existing projects. To put a fine point on it from a business perspective, we can boil things down to this question: how can we justify a training program for developers to the people responsible for making sure those developers actually get stuff done? Are there any useful metrics?
We’ve been giving this some thought over beers, and here is a metric we think makes sense enough to post on a blog. We’re interested to know what you think about it.
One way to think about training's impact is to add up the time "lost" to software security training while developers are being trained. We know one CIO who added up how much it would cost him if every developer in his firm spent 3 hours every year taking a single CBT course. Let's just say it was an impressively high number.
Interestingly, we can use this same kind of reasoning to show why training makes economic sense even if it is only partly effective. We have three clients (a bank, a financial services firm, and a government agency) who have determined how much it costs to fix a security defect found at the end of the dev cycle by the security group (usually during final pen testing). All three independently came up with $10,000/defect as a ballpark figure. This estimate is based on a time cost of around 100 hours on average to fix a security defect, including: coming up with the code fix, coding it up, writing a unit test, re-testing with the fix, running the fixed code back through QA to make sure nothing unexpected broke, and finally re-running the pen test. (Of course, you have to throw in some management overhead throughout the process as well.)
Now we get to the back-of-the-napkin part of our idea. Let's assume that the time cost per defect for just the dev and QA groups is 25 hours (a conservative estimate). Take the number of "critical" security defects your firm has to fix each year and multiply it by 25. For an easy calculation, we'll pick 1,000 defects. That's 25,000 development hours spent just fixing critical security bugs, which works out to around 14 full-time people. We can consider that our "bank," the upper limit under which training must fit to make economic sense.
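For anyone who wants to check the napkin, here is the arithmetic in a few lines of Python. The 1,800 working hours per full-time year is our assumed figure for converting hours to headcount; the rest comes straight from the numbers above.

```python
# Back-of-napkin "upper limit": hours the dev & QA groups burn each year
# fixing critical security defects.
HOURS_PER_DEFECT_DEV_QA = 25      # conservative dev + QA share per defect
CRITICAL_DEFECTS_PER_YEAR = 1000  # example defect count from the post
HOURS_PER_FTE_YEAR = 1800         # assumed working hours in a year

upper_limit_hours = HOURS_PER_DEFECT_DEV_QA * CRITICAL_DEFECTS_PER_YEAR
full_time_people = upper_limit_hours / HOURS_PER_FTE_YEAR

print(upper_limit_hours)        # 25000
print(round(full_time_people))  # 14
```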
Now the kicker. Assume that a CBT-trained developer makes just ONE fewer mistake *per year*. That one avoided problem saves 25 hours for the price of the 3 spent taking the course (a net savings of 22 hours). That's an ROI factor of roughly 8.
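The CBT case, spelled out:

```python
# CBT scenario: 3 hours of training avoids one 25-hour fix per year.
TRAINING_HOURS = 3
HOURS_SAVED_PER_AVOIDED_DEFECT = 25

net_savings = HOURS_SAVED_PER_AVOIDED_DEFECT - TRAINING_HOURS  # hours kept
roi_factor = HOURS_SAVED_PER_AVOIDED_DEFECT / TRAINING_HOURS   # return per hour spent

print(net_savings)        # 22
print(round(roi_factor))  # 8
```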
Add in SecureAssist, which helps train developers in real time through the IDE (call it one hour to figure out how it works). Let's say a developer avoids creating just one bug *per month* because the tool told them to avoid it. That bug never makes it into production at all, and will never be found in a pen test. From the dev perspective alone, that's 25 hours saved per month less the amortized setup hour, or about 24.9 hours per month and 299 hours annually. That's an ROI factor of around 300. (Don't forget: since there is no bug at all, the firm actually gets back the full 100 hours per defect per month, but we're only concentrating on dev here.)
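And the real-time tool case, under the same assumptions (one avoided bug per month, one hour of one-time setup):

```python
# Real-time IDE tool scenario: one avoided 25-hour fix per month,
# against a one-time setup cost of 1 hour.
SETUP_HOURS = 1
HOURS_SAVED_PER_AVOIDED_DEFECT = 25
BUGS_AVOIDED_PER_YEAR = 12  # one per month

annual_savings = BUGS_AVOIDED_PER_YEAR * HOURS_SAVED_PER_AVOIDED_DEFECT - SETUP_HOURS
monthly_savings = annual_savings / 12
roi_factor = annual_savings / SETUP_HOURS

print(annual_savings)            # 299
print(round(monthly_savings, 1)) # 24.9
print(round(roi_factor))         # 299
```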
With our 1,000-defect "upper-limit budget" posited above, we're talking about 299,000 hours saved in dev. That makes rolling out training something of a no-brainer from an economic perspective. Even if you divide by a factor of 3 or 5, there is still room to maneuver here.
Of course, people will grouse: how can you guarantee that just because I train my staff or give them a real-time tool like SecureAssist, they will fix anything? Fair enough. But the numbers here are so high that there is plenty of wiggle room. Throw in other software security mechanisms like source code analysis, ARA, and the like, and training still makes economic sense.
We’re interested in sussing out the reality of the situation here, of course, and intend to have more detailed conversations with our clients in order to dig into this further. We want to build a model describing “defect savings” in more detail and figure out what impact different techniques have on the model. (We would love your help doing so.)