Posted by John Steven on April 9, 2008
Some people start “Security Testing” by buying and using a pen-test tool on a project. Such tools uncover security vulnerabilities (though they seldom help with root cause analysis, or even achieve double-digit code coverage).
These tools are, at best, a degenerate way to facilitate a security testing strategy. Why? Because these tools are “black box” tools. What are black box tools?
The term “black box” stems from old testing literature and means “without internal knowledge”. An external perspective has always excluded code, and sometimes goes as far as excluding (in my opinion appropriately) software design. Obviously, though, you need to know something about the application’s architecture and design to test it, and the slope gets slippery.
In the realm of penetration testing, the term “black box” often carries meaning beyond the tester’s knowledge of the product. The OSSTMM defines “black box” to mean that neither attacker nor defender is given knowledge of the other. This test procedure is meant to accurately represent the opportunity, need for stealth, accessibility, and exploits a real attacker would have, and also to evaluate the defender’s ability to identify and prevent or recover from attack.
Because black box tools to a large extent run canned tests, they will not satisfy my security testing goal: running tests that one traces back to requirements. Requirements, that is, created as a result of risk analysis that determines exactly which behaviors (and their impacts) should be avoided were the software attacked.
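To make the traceability idea concrete, here is a minimal sketch of what such a test might look like. The class, the requirement ID, and the lockout policy are all hypothetical illustrations of mine, not anything from a specific project: the point is only that the test names the risk-analysis requirement it verifies, rather than being a canned, tool-supplied check.

```python
# Sketch: a security test traced to an explicit requirement.
# "Account", "SEC-REQ-17", and the 3-attempt policy are hypothetical.

class Account:
    """Minimal stand-in for the system under test."""
    MAX_ATTEMPTS = 3

    def __init__(self, password):
        self._password = password
        self._failed = 0
        self.locked = False

    def login(self, attempt):
        if self.locked:
            return False
        if attempt != self._password:
            self._failed += 1
            if self._failed >= self.MAX_ATTEMPTS:
                self.locked = True
            return False
        return True


def test_sec_req_17_lockout():
    """Traces to SEC-REQ-17: online password brute-forcing shall be
    prevented; lock the account after 3 consecutive failed logins."""
    acct = Account("s3cret")
    for _ in range(Account.MAX_ATTEMPTS):
        assert not acct.login("wrong")
    # Once locked, even the correct password must be refused.
    assert not acct.login("s3cret")
    assert acct.locked


test_sec_req_17_lockout()
print("SEC-REQ-17 lockout test passed")
```

When a test fails, the requirement ID in its docstring tells you exactly which attack behavior the software no longer avoids, and what its impact was judged to be.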
Arguably, security folk have “cached” this risk analysis and these implicit requirements in the pen testing tool. Fine; this is the small benefit I mentioned pen tests do provide. And they DO find bugs. Again, though, this is at best a degenerate case of security testing, in the same way that running a fuzz testing tool is a degenerate way of conducting functional testing.
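The fuzzing analogy can be sketched in a few lines. This toy loop (my own illustration; the parser and the input strategy are placeholders) does surface failures, but notice that no individual input traces to any functional requirement, and whatever coverage it achieves is accidental:

```python
# Toy fuzz loop: random inputs find exceptions, but which requirement
# does any one of these inputs test? (Illustrative sketch only.)
import random
import string

def parse_quantity(text):
    """Toy function under test."""
    return int(text.strip())

random.seed(42)  # deterministic for the example
crashes = 0
for _ in range(1000):
    blob = "".join(random.choices(string.printable, k=8))
    try:
        parse_quantity(blob)
    except ValueError:
        crashes += 1  # an "interesting" input -- but untraceable to a spec

print(f"{crashes} inputs raised exceptions out of 1000")
```

Useful as a bug-finder, yes; a testing *strategy*, no.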
For QA folk wary of accepting the previous statement, suffice it to say: you wouldn’t defend your job based on achieving less than fifty percent coverage, would you?
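The coverage point is easy to demonstrate with a toy example of my own (the handler and its branches are hypothetical): a canned probe against one well-known URL exercises a single branch out of four, nowhere near the bar any QA team would accept.

```python
# Toy illustration: a "canned" scan exercises one of four branches.
visited = set()

def handle_request(path, authenticated):
    """Hypothetical request handler with four distinct outcomes."""
    if path == "/admin" and not authenticated:
        visited.add("deny_admin")
        return 403
    if path == "/admin":
        visited.add("allow_admin")
        return 200
    if not authenticated:
        visited.add("redirect_login")
        return 302
    visited.add("serve_page")
    return 200

# A canned probe of one well-known URL...
handle_request("/admin", False)

coverage = len(visited) / 4
print(f"branch coverage: {coverage:.0%}")  # 25% -- far below fifty percent
```

Real tools measure this with instrumentation rather than a hand-rolled set, but the arithmetic is the same: canned tests touch only the branches they were canned for.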
Vendors have begun using hybrid approaches (this will only become more common). Do these approaches solve our coverage problem and allay our concerns?
A few years back I demoed Compuware’s DevPartner, which has a poorly advertised (and perhaps now nascent) security scanning capability, and I was pretty impressed with the start they had made in hybrid .NET analysis. I’m not sure where it’s gone since then. Fortify’s PTA also combines static and dynamic analyses, to help prioritize static findings and provide root cause analysis for dynamic ones.
These hybrid tools don’t get our security testers off the hook either, though: they still don’t address project-specific risk analysis, nor do they anchor tests in requirements.