Posted by Synopsys Editorial Team on May 16, 2016
How often do you read those all-too-familiar “vulnerability discovered in X software” headlines? More importantly, how quickly have you as a security director, application owner, or engineer been able to respond to them? You may have good control over all the applications you run, but do you know every version of every library in all of your projects?
If not, do you trust all the open source projects for which source code is included in your applications? Perhaps you do. After all, according to Eric Raymond’s The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, Linus’s Law states that “given enough eyeballs, all bugs are shallow.”
But are software vulnerabilities covered by Linus’s Law? Perhaps the law doesn’t apply, because software vulnerabilities extend beyond mere bugs to encompass everything from architectural flaws to emergent properties of complex systems. Or perhaps secure coding does fall under Linus’s Law, with free and open source software (FOSS) security failures simply part of the feedback loop baked into it. Either way, your organization, shareholders, and clients shouldn’t be forced to participate in anything that puts them at this much cyber risk.
So, how can you solve this issue of free and open source software vulnerability management? Unfortunately, there isn’t a single industry-recognized tool that does it all on its own. In order for your developers to leverage all that bootstrappable code, you’ll need to do some heavy lifting at first. The end goal is a mature FOSS vulnerability management program—but what does that even look like?
While nobody can explicitly say what a mature program looks like for your organization, here are four best practices for managing open source code from a security perspective.
When a team needs that shiny new piece of software—that magical client side, redundant, scalable, performant framework that increases UX by over 9000%, all while mining user data and analyzing sentiments from stock trends and weather patterns—it needs to be approved. All jokes aside, it’s time to remove the ‘wild west’ mentality and bring some governance into FOSS where it really matters.
If you’ve ever added an open source library to a project, or been convinced to approve one, you probably evaluated the library for more than functionality. But was that process repeatable and sufficiently discriminating? More importantly, was security one of the criteria?
If you answered ‘no,’ then it may help to walk through a basic and entirely manual approach to FOSS intake: a developer submits a request for the library, a reviewer evaluates it against a consistent set of criteria (functionality, license, maintenance, and known vulnerabilities), and a decision is returned to the requester.
Automating this process, as you can imagine, is reasonably straightforward; it requires little more than a submission platform with response functionality. Once this workflow is in place, whether manual or automated, you’ve taken the first steps.
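The intake workflow above can be sketched in a few lines. This is a minimal illustration, not any particular tool: the request fields, criteria names, and deny list are all assumptions, and a real system would persist requests and notify the submitter.

```python
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class IntakeRequest:
    """A developer's request to add an open source library to a project."""
    library: str
    version: str
    justification: str
    decision: Decision = Decision.PENDING
    notes: list = field(default_factory=list)


def review(request, criteria):
    """Apply each named criterion (a predicate) to the request; all must pass."""
    for name, check in criteria.items():
        if not check(request):
            request.notes.append(f"failed: {name}")
            request.decision = Decision.REJECTED
            return request
    request.decision = Decision.APPROVED
    return request


# Example criteria -- note that security sits alongside the functional checks.
criteria = {
    "has justification": lambda r: bool(r.justification.strip()),
    "version pinned": lambda r: r.version != "latest",
    "not on deny list": lambda r: r.library not in {"abandoned-lib"},
}

req = review(IntakeRequest("left-pad", "1.3.0", "string padding"), criteria)
print(req.decision)  # Decision.APPROVED
```

The point of encoding the criteria as named checks is repeatability: every request is evaluated the same way, and a rejection records which criterion failed.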
Managing incoming software is great, but you probably have hundreds, if not thousands, of projects already in production. While you can check incoming software through a vetting process, rogue inclusions are still possible.
The second best practice is to scan all of your projects and detect publicly known issues in the open source they already contain. There are a number of FOSS scanners out there, free and paid; depending on your tech stacks, resources, and budget, run a proof of concept with several solutions on a representative subset of your existing projects before choosing one. The goal of a FOSS vulnerability scanner is to accurately detect 100% of your included libraries, identify 100% of the publicly known issues in those libraries, and feed the results into your management program. Run it every day, week, or month, depending on your release schedules and fix life cycles.
Yes, this is another security bug generator, but if you have the software in your codebase then you have the associated vulnerability. Free and open source software vulnerability scanners don’t replace static application security testing (SAST) tools or even do the same work; they work together as pieces in your organization’s security program.
Perhaps you are an organization with few products or applications. An option for you is to implement your own vulnerability scanner. Without going into too much depth, a good scanner correctly fingerprints all existing FOSS libraries and matches them to consistently updated lists of CVEs. The trusted source for CVEs is NIST’s own National Vulnerability Database (NVD).
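The core of such a scanner is the matching step: compare fingerprinted library versions against CVE data. The sketch below hard-codes a hypothetical feed (`CVE_FEED`, `examplelib`, and the CVE IDs are all made up); a real implementation would fingerprint libraries from build manifests or binaries and sync its data from the NVD.

```python
# Minimal sketch of a FOSS scanner's matching step. Inputs are hard-coded
# here; a real scanner would build them from your codebase and the NVD.

def parse_version(v):
    """Turn '2.9.0' into (2, 9, 0) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical CVE feed: library -> list of (fixed_in_version, cve_id).
CVE_FEED = {
    "examplelib": [("2.4.1", "CVE-2016-0001"), ("3.0.2", "CVE-2016-0002")],
}

def scan(inventory):
    """inventory: {library: version}. Flag CVEs whose fix is newer than
    the installed version."""
    findings = []
    for lib, version in inventory.items():
        for fixed_in, cve in CVE_FEED.get(lib, []):
            if parse_version(version) < parse_version(fixed_in):
                findings.append((lib, version, cve))
    return findings

print(scan({"examplelib": "2.9.0"}))  # flags CVE-2016-0002 only
```

Note that a version scheme as simple as dotted integers won’t survive contact with real ecosystems (pre-release tags, epochs, distro backports), which is one reason accurate fingerprinting is the hard part of this job.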
Now that you have a growing list of (painstakingly) approved FOSS, it’s time to communicate this list to developers. Create or modify an internal repository of software that teams can access with the push of a button. If you have an internal repo for external libraries, mark the acceptable ones as vetted. Acceptance depends on your organization’s risk appetite.
Once your security-approved repo becomes well known to your teams and well stocked with software, teams will prefer this collection of maintained software, and you will have positively affected your firm’s security behavior.
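The gate itself can be as simple as a lookup against the vetted set. In this sketch the vetted pairs are invented examples; in practice the list would live in your artifact repository’s metadata and be queried at dependency-resolution time.

```python
# Sketch: split requested dependencies into vetted and needs-review.
# The vetted set below is illustrative, not a recommendation.

VETTED = {
    ("requests", "2.9.1"),
    ("flask", "0.10.1"),
}

def resolve(requested):
    """requested: list of (name, version) pairs.
    Returns (approved, needs_review) preserving request order."""
    approved = [pkg for pkg in requested if pkg in VETTED]
    needs_review = [pkg for pkg in requested if pkg not in VETTED]
    return approved, needs_review

approved, pending = resolve([("requests", "2.9.1"), ("leftpad", "1.0.0")])
print(pending)  # [('leftpad', '1.0.0')] -- routes back into the intake process
```

Anything that lands in the needs-review bucket feeds straight back into the intake workflow from the first best practice, which is what keeps the vetted list growing instead of becoming a bottleneck.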
Who says that security must contradict usability?!
The expertise required to research vulnerabilities and make organization-wide decisions about security should be handled with care. Creating protocol and governance around remediation timelines, severity ratings, and risk acceptance should come from the highest levels.
Handling edge cases is where subject matter experts (SMEs) will be tapped. Imagine a library that is so tightly integrated into an application that removing it due to a few CVEs would take serious development time. Some CVEs also require some pretty unlikely situations to exploit, like having an open port on a secondary backup system that must update during the first waning day of every other new moon…you get the point. SMEs can do a deep dive into the vulnerabilities and guide teams to implement compensating controls.
Another key consideration when researching publicly known vulnerabilities is public exploits. Exploits against CVEs often circulate on websites and forums that shouldn’t be considered trusted, and their existence can and should change your severity rating. Knowing that a middle schooler with a shiny new Metasploit installation can breach your flagship application is not something you’ll want to ignore. No one can guarantee the work of others, and that’s kind of the reason why we’re here. Exploits may or may not work on your applications, or even at all. You should decide if and how you will research these dangerous little programs.
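One way SMEs can make these judgment calls repeatable is a simple re-scoring rule: raise the internal severity when a working public exploit exists, and lower it when a compensating control is in place. The numbers below follow CVSS-style 0–10 ranges, but the bump and discount amounts are assumptions for illustration, not a standard.

```python
# Sketch: adjust a CVE's internal severity for exploit availability and
# compensating controls. The +2.0 / -1.5 adjustments are policy choices
# invented for this example.

def effective_severity(base_score, exploit_public, compensating_control=False):
    score = base_score
    if exploit_public:
        score = min(10.0, score + 2.0)   # a working exploit raises urgency
    if compensating_control:
        score = max(0.0, score - 1.5)    # e.g. WAF rule or network isolation
    return score

def rating(score):
    """Map a 0-10 score onto CVSS-style qualitative bands."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

print(rating(effective_severity(6.5, exploit_public=True)))  # "high"
```

Writing the policy down as code, rather than re-debating each CVE, is exactly the kind of protocol the highest levels should be setting; SMEs then spend their time on the genuine edge cases.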
If public exploits weren’t complicated enough, the list of CVEs that affect your systems should be considered highly confidential and guarded from disclosure. Researching these untrusted websites from within your organization is disclosure of that private and highly sensitive data. What if you hit an untrusted website with your big list of CVEs and the site admins tie your organization to those requests? Your publicly-known software vulnerabilities are now known to outsiders.
The idea of bootstrapping existing technologies and ideas is practically primordial. As an industry, technology, or field, it’s good to strive to learn how to bootstrap more efficiently and with fewer issues. Free and open source software is no different, but there may be an undeserved trust associated with even well-maintained FOSS. Open or closed, source code is source code, and vulnerabilities are inherited. The main difference is that CVEs are there to help you and your organization. With these and the aforementioned steps, you can create an issue-resistant FOSS vulnerability management program while remaining fast and agile.
Imagine your organization with a seamless provisioning service that developers can use to incorporate FOSS. Your now mature framework is one that developers, managers, clients, and security teams can all agree upon.