Leading a software security group (SSG) is a balancing act. Most decisions come down to how to apply an extremely limited amount of resources to what seems like an insurmountable problem. To give you an example, a question I was asked in past roles, and continue to hear from clients today, is: “Is it better to go looking for new vulnerabilities or to fix the ones we already know about?” In other words, how should priorities line up between remediating known issues and expanding and managing defect discovery?
One of the scariest arguments I’ve heard is that an undiscovered vulnerability somehow generates less security liability than one that has been discovered, left unresolved, and eventually exploited. This is an argument for putting on blinders and running into whatever is waiting ahead. There is certainly a need for a solid risk-based approach when it comes to remediating known issues. This includes a strategy for applying those limited resources and resolving the riskiest security issues first. However, don’t blind yourself to the issues leaving your software vulnerable.
Encountering organization after organization that takes this “ignorance is bliss” approach has led me to the following conclusions:
One way to address this problem is through specialization: assign different resources to defect discovery than to defect remediation. This divide-and-conquer approach tends to work quite well. In most cases I’ve come across, the traits of an engineer who is best at finding issues don’t align with those of the engineer who is best at fixing them.
Remediation programs need leaders who can stay organized, gather support from stakeholders and system owners, and above all, prioritize efforts. This leads me to the best way to actually fix things. It doesn’t mean running down a list of known vulnerabilities and demanding individual remediation plans. It means finding ways to avoid the risk in the first place. Think about known vulnerabilities by class of issue rather than by individual instance. This helps a remediation team make more widespread improvements, such as fixing an existing framework (or implementing a new one) to avoid cross-site scripting (XSS) or cross-site request forgery (CSRF) issues. Quite simply, systemic issues require systemic solutions.
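To make the class-of-issue idea concrete, here is a minimal sketch (the helper name and markup are hypothetical, not from any particular framework) of a framework-level XSS fix: instead of patching each vulnerable page individually, every user-supplied string passes through one central encoding point.

```python
import html

def render_comment(comment: str) -> str:
    """Hypothetical view helper: all user input is escaped at one
    central choke point before it reaches the rendered page."""
    return f"<p>{html.escape(comment)}</p>"

# One framework-level fix neutralizes the entire class of issue:
# the payload below is rendered as inert text, not executed as script.
print(render_comment('<script>alert("xss")</script>'))
```

A remediation team that lands this kind of change closes every current XSS instance of that pattern at once, and prevents the next hundred from being written.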
What do you do when you still have too many issues and not enough time to dig your way out of that remediation hole? If forced to choose which vulnerabilities to identify and fix, take a risk-based approach to tooling and tackle the highest-risk issues first. For example, you might focus on a high-risk threat perspective, such as that of an external attacker, and exhaust all potential issues through that lens. You can also tune your testing tools to report high and critical risks first. With tools surfacing the highest priorities first, you gain broader coverage, albeit at the cost of information on medium and low risks.
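The triage step above can be sketched in a few lines. This is an illustrative example only (the field names and severity labels are assumptions, not a specific scanner's output format): given a list of findings, keep only those at or above the chosen risk threshold and work them worst-first.

```python
# Hypothetical triage helper: rank findings so limited remediation
# time covers the worst exposure first.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def prioritize(findings, focus=("critical", "high")):
    """Return only findings within the chosen risk focus, ordered worst-first."""
    selected = [f for f in findings if f["severity"] in focus]
    return sorted(selected, key=lambda f: SEVERITY_RANK[f["severity"]])

findings = [
    {"id": "V-101", "severity": "low"},
    {"id": "V-102", "severity": "critical"},
    {"id": "V-103", "severity": "high"},
    {"id": "V-104", "severity": "medium"},
]

for f in prioritize(findings):
    print(f["id"], f["severity"])
# V-102 critical
# V-103 high
```

The trade-off the text describes shows up directly here: V-101 and V-104 fall out of view entirely, which is the price of broader coverage on the critical and high items.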
Never avoid testing an application or piece of software because you know it will cause too much remediation work on the back end. The most effective approach I’ve seen is to divide and conquer: keep discovering defects while remediating the issues you already know about. Keep your eyes open, be proactive, and remain focused on staying ahead of the bad guys!
Kevin Nassery is a managing principal at Synopsys. With over 18 years of experience building and breaking information systems, he specializes in software security program design, infrastructure security, security architecture, denial-of-service issues, and penetration testing. Kevin holds a Master's from DePaul University, where his focus was on network protocol design and security. He has maintained his CISSP since 2002.