Posted by Synopsys Editorial Team on October 23, 2015
Security testing is important. Conducting specialized penetration tests at the end of the software development life cycle (SDLC) can be a rewarding security activity for your organization. Penetration testing is, after all, the most commonly applied of all software security practices. But this isn’t necessarily a good thing.
This is why penetration testing makes the list as our third myth of software security. Just like a tool can’t solve the software security problem by itself, neither can penetration testing.
Wondering what the previous myths are? Start from the beginning by exploring the first myth.
The seven software security myths we’ll explore over the next few weeks are common misconceptions about software security best practices. These myths concern how software security initiatives should work, not simply how to secure a particular application. Let’s dig into the two main reasons why penetration testing isn’t, by itself, a solution to the software security problem.
One element that’s critical to the effectiveness of penetration testing involves who carries out the testing. Be very wary of “reformed hackers” whose only claim to being reformed is the fact that they told you they were reformed.
Fact is, you can’t validate the results of a test you don’t understand. If a reformed hacker turns out to be malicious, you’re in trouble. Consider an example: your organization hires a group of reformed hackers. You know they’re reformed because they told you they were. You give them a set time period (let’s say one week) to perform a penetration test. At the end of the week, the reformed hackers have discovered five bugs. They tell you about four of them. Of the four you know about, only one is easy to fix, but your team manages to fix two. The other two—or was that three?—must wait. And you never even heard about one of them.
And that, ladies and gentlemen, is an example of how not to approach penetration testing.
Be aware that there is a difference between network penetration tests and application/software-facing penetration tests. Additionally, a majority of software security defects and vulnerabilities don’t directly relate to security functionality. Rather, security issues often involve unexpected (albeit intentional) misuse of an application discovered by an attacker. If we characterize functional testing as “testing for positives” (as in verifying that a feature performs a specific task as it’s intended), then penetration testing is in some sense “testing for negatives.” A security tester must probe into the security risks (driven by abuse cases and architectural risks) in order to determine how the system responds to an attack.
Testing for a negative poses a far greater challenge than verifying a positive. It’s a simple task to test whether a feature works or not. It’s much more difficult to show whether or not a system is secure enough under malicious attack. How many tests do you do before you give up and declare “secure enough”?
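The contrast can be sketched in a few lines of test code. This is a minimal illustration using a hypothetical `authenticate` function (not from any real system); the point is that the positive test verifies intended behavior, while each negative test is driven by a specific abuse case and, even when it passes, rules out only that one attack.

```python
# Hypothetical login check, used only for illustration.
def authenticate(username: str, password: str) -> bool:
    users = {"alice": "s3cret"}
    return users.get(username) == password

# "Testing for a positive": verify the feature does what it's intended to do.
assert authenticate("alice", "s3cret")

# "Testing for negatives": probe intentional misuse, one abuse case per test.
# Passing these proves only that these specific attacks fail -- not that the
# system is secure under every attack you didn't think to write.
assert not authenticate("alice", "wrong")     # wrong password
assert not authenticate("mallory", "s3cret")  # unknown user
assert not authenticate("", "")               # empty credentials
assert not authenticate("alice", None)        # type-confusion probe
```

The asymmetry is visible even here: one test settles the positive question, but no finite list of negative tests settles the security question.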
If negative tests don’t uncover any faults, that proves only that no faults occur under those particular test conditions; it by no means proves that no faults exist. “Passing” a software penetration test provides little assurance that an application is secure enough to withstand an attack. Many organizations misunderstand this point. As a result, passing a penetration test leaves them with a false sense of security. Instead, focus on risk and manage it appropriately.
We’d all love to have a security meter that tells us if our software is secure. But there’s no such thing as a security meter. There’s only a badness-ometer. How’s it work, you ask? First, you take the smarts of a hacker who can perform reasonable black box tests, bundle those tests up, and put them in a can. Next, take the can of black box tests (that, incidentally, don’t know anything about software) and run them against program A. Imagine the canned tests break program A. What did you learn about program A? It’s bad. So bad that a canned test that knows nothing about program A can break it!
You use the same canned black box tests against another program, call it B. If the canned tests don’t break program B, what do you learn about program B? Not much. You ran some tests and they didn’t find anything. Does this mean program B is secure? Nope. It means you ran a single set of tests, which for any number of reasons, or no reason at all, can’t break program B.
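The badness-ometer logic is easy to make concrete. Below is a toy sketch: the canned payloads and the two programs are invented for illustration, but the asymmetry of the conclusions is exactly the one described above.

```python
# A toy "badness-ometer": a can of generic black-box inputs that know
# nothing about the program under test. Payloads are illustrative only.
CANNED_ATTACKS = ["", "A" * 10000, "%s%s%s", "'; DROP TABLE users;--", "\x00\xff"]

def badness_ometer(program) -> str:
    for payload in CANNED_ATTACKS:
        try:
            program(payload)
        except Exception:
            # A generic payload that knows nothing about the program
            # broke it: the program is bad.
            return "bad"
    # Nothing in the can broke it: that tells you almost nothing.
    return "unknown"  # note: NOT "secure"

# Program A crashes on empty input (division by zero on len("")).
def program_a(data: str) -> int:
    return len(data) // len(data)

# Program B survives the whole can -- which still doesn't make it secure.
def program_b(data: str) -> int:
    return len(data)
```

Running the meter yields `"bad"` for program A and `"unknown"` for program B: the instrument can only swing one way.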
Problems are more expensive to fix at the end of the life cycle. Economics dictates finding defects as early as you possibly can. Have a flaw in your idea? Redesign it in your mind (presumably free). Have a bug in your code? Find it while it’s being typed in, not later during the build process. Want to find vulnerabilities in your software? Why wait until it’s shipped?
Penetration testing is about testing a system in its final production environment. As such, it’s best suited to probing configuration problems and other environmental factors that deeply impact software security. The most effective approach, though, also drives tests with other factors, such as knowledge of risk analysis results.
One reason why so many organizations turn to penetration testing first is that it’s an attractive late-life cycle activity that can be carried out in an outside-in manner. Like the canned test, in some instances, you don’t really need to know that much about the software being tested. As a result, basic penetration testing is a common activity that can be carried out on a completed application, under time constraints specified by the operations team, to fill a security testing checkbox at the end of the SDLC. Of course, fixing things at this stage of the game is, more often than not, prohibitively expensive (and in some cases involves configuration Band-Aids rather than construction-based cures).
Bottom line: outside-in testing is great as long as it’s not the only testing you do.
Organizations that fail to integrate security throughout the development process are often unpleasantly surprised to find that their software suffers from systemic faults both at the design level and in the implementation. In many cases, the defects uncovered in penetration testing could have been found more easily through other techniques earlier in the life cycle.
Testers who use architecture analysis results to direct their work often reap great benefit. For instance, if architecture analysis concludes that the security of the system hinges on transactions being atomic, then torn transactions become a primary target in adversarial testing. Adversarial tests like this can be developed according to the risk profile. Hint: high-risk flaws should be resolved first.
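The torn-transaction example above can be turned into a concrete adversarial test. This is a minimal sketch using SQLite and an invented two-account schema: it simulates a crash between the debit and the credit, then asserts that no half-done state survives. If the transfer logic did the two writes outside a transaction, this test would catch the torn state.

```python
import sqlite3

# Toy ledger for the atomicity risk identified in architecture analysis.
# Schema, names, and amounts are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, src, dst, amount, fail_midway=False):
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src))
            if fail_midway:
                raise RuntimeError("simulated crash between the two writes")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
    except RuntimeError:
        pass  # the fault fired; the assertions below check the damage

# Adversarial test: inject the failure right between debit and credit.
transfer(conn, "alice", "bob", 40, fail_midway=True)
balances = dict(conn.execute("SELECT name, balance FROM accounts"))

# Atomicity held: the half-done debit was rolled back, no money vanished.
assert balances == {"alice": 100, "bob": 0}
```

The risk profile decides which of these tests to write first: if the architecture analysis rated torn transactions as high risk, this test belongs at the top of the adversarial test plan.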
There is, of course, real value in penetration testing, and it stems from probing a system in its final operating environment. Uncovering environment and configuration problems (and concerns) is the best result of any penetration test.
So, should you conduct penetration testing? Absolutely. It’s an important and necessary security activity. But any kind of “penetrate and patch” mentality is insufficient as a standalone software security approach. It is much more powerful in tandem with training—partially based on penetration testing results—design review, code review and security testing at the integration level. A well-structured software security initiative does all of those things and uses penetration testing to demonstrate that all those other things generated the expected level of quality.
The Building Security In Maturity Model’s sixth iteration, the BSIMM6, is the latest measurement of how real-world organizations are implementing software security. Download the BSIMM report to see how penetration testing and other elements of software security play into the software security initiatives of participating firms.