Consider these three operational risk factors when using open source components: version currency, version proliferation, and project activity.
A decade ago, companies managing open source risk focused squarely on license compliance. Beginning in 2014, when open source vulnerabilities began to get names (like Heartbleed, Shellshock, and POODLE), open source security rose in importance as companies started addressing these vulnerabilities in their code. Synopsys suddenly saw a much broader range of companies interested in controlling open source.
But organizations have paid less attention to a third area: operational risk. Synopsys wraps three concepts into operational open source risk: version currency, version proliferation, and project activity. In short, to reduce the operational risk of using open source, you have to keep your open source components updated, use them consistently, and favor components with active development communities.
The first factor, version currency, is a close cousin to security risk. It simply reflects whether a codebase is using current releases of its components. Just like commercial software, good open source projects evolve because contributors from the community (known as "committers") continually improve them. A codebase using component versions that are a few years old can't take advantage of those improvements. Many of those improvements are security fixes, which is why operational risk and security risk are related: older code tends to be less secure.
In a study we conducted earlier this year, 60% of the 1,200 codebases we examined contained components with known security vulnerabilities. In many cases, developers had plugged those security holes in subsequent releases, but users had not updated their components. Shockingly, the average age of vulnerabilities was more than six years.
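A version-currency check boils down to two questions per component: is the version in use behind the latest release, and how old is it? The sketch below illustrates the idea with a hypothetical inventory (the component names, versions, and dates are made-up examples, not data from the study):

```python
from datetime import date

# Hypothetical inventory: component -> version in use, its release date,
# and the latest available release. In practice this data would come from
# a software composition analysis tool or an SBOM.
inventory = {
    "openssl": {"in_use": "1.0.1f", "released": date(2014, 1, 6), "latest": "1.1.1c"},
    "tomcat":  {"in_use": "9.0.21", "released": date(2019, 6, 7), "latest": "9.0.21"},
}

def stale_components(inventory, as_of, max_age_days=365):
    """Flag components that are behind the latest release or older than max_age_days."""
    stale = []
    for name, info in inventory.items():
        behind = info["in_use"] != info["latest"]
        age_days = (as_of - info["released"]).days
        if behind or age_days > max_age_days:
            stale.append((name, info["in_use"], age_days))
    return stale

print(stale_components(inventory, as_of=date(2019, 6, 26)))
```

Even a crude report like this surfaces the components most likely to be carrying known, already-fixed vulnerabilities.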
In fairness, though, a company must be extremely well organized to keep the open source it uses current, because it's typically on its own: there's usually no vendor pushing security patches. Worse, most companies don't even have a record that they're using the component in question.
Another operational open source risk issue is version proliferation. One bank CIO told me, “I want everyone using Apache Tomcat, but I have discovered at least six different versions across the company.” That’s a nightmare for an organization supporting that software or testing applications to run on it. But again, knowing that most organizations don’t maintain an inventory of open source (even if they have a policy about it), this consequence is completely understandable.
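Detecting proliferation is straightforward once you have an inventory: group the components you find across applications and flag any that appear in more than one version. A minimal sketch, using invented scan results for illustration:

```python
from collections import defaultdict

# Hypothetical scan results: (application, component, version) tuples,
# as an inventory tool might report them across a company's portfolio.
scan = [
    ("billing",   "tomcat", "7.0.52"),
    ("payments",  "tomcat", "8.5.40"),
    ("reporting", "tomcat", "9.0.21"),
    ("billing",   "log4j",  "2.11.2"),
    ("payments",  "log4j",  "2.11.2"),
]

def proliferation_report(scan):
    """Map each component to its distinct versions; keep only those with more than one."""
    versions = defaultdict(set)
    for app, component, version in scan:
        versions[component].add(version)
    return {c: sorted(v) for c, v in versions.items() if len(v) > 1}

print(proliferation_report(scan))
```

Here the report would flag Tomcat (three versions in use) while log4j, used consistently, stays off the list. Each flagged component is a consolidation target.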
Project activity is the last element of operational open source risk. I recently heard a technical due diligence consultant call the problem "stranded code": relying on a component that no one is improving or even maintaining anymore. The beauty of an active, vibrant project is that lots of people are working on it, often finding and fixing issues before you even know you have them. And the open source culture is one where these folks will pitch in if you have an issue. About 2,000 developers work on the Linux kernel in any given year! By contrast, some components began as pet projects and were eventually abandoned. Developers relying on those components are on their own to find and fix any issues, which can be difficult for teams that had no hand in writing the code.
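Activity metrics like recent contributor counts and time since the last commit make "stranded code" easy to flag. A sketch with hypothetical metrics (the project names, numbers, and thresholds are illustrative, not from any real dataset):

```python
from datetime import date

# Hypothetical activity metrics, as you might record them from a source
# like the Black Duck Open Hub or a project's own repository.
projects = {
    "linux-kernel": {"contributors_12mo": 2000, "last_commit": date(2019, 6, 25)},
    "pet-project":  {"contributors_12mo": 1,    "last_commit": date(2015, 3, 2)},
}

def possibly_stranded(projects, as_of, min_contributors=3, max_idle_days=180):
    """Flag projects whose community looks inactive -- 'stranded code' candidates."""
    flagged = []
    for name, metrics in projects.items():
        idle_days = (as_of - metrics["last_commit"]).days
        if metrics["contributors_12mo"] < min_contributors or idle_days > max_idle_days:
            flagged.append(name)
    return flagged

print(possibly_stranded(projects, as_of=date(2019, 6, 26)))
```

The thresholds are judgment calls, but any reasonable ones will separate a project with thousands of active committers from one abandoned years ago.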
To get the most out of open source, maintain current and consistent versions and ensure that your developers favor components supported by an active community. But cultivating this environment requires some sophistication, and most companies aren't that sophisticated in their use of open source, which creates operational risk. If you're buying a company, you need to work remediation of operational open source risk into your calculus.
The Synopsys application security portfolio helps companies track and manage all three kinds of risk. And our Black Duck Audits can quickly pinpoint the risks on a one-time basis—for example, to evaluate an M&A transaction. But we also offer valuable risk information on hundreds of thousands of components for free in the Black Duck Open Hub. Check out your favorite projects there, and you'll find the license, security ratings, version information, and activity metrics. Simply sending your developers to the Open Hub to do their diligence before selecting a component will go a long way toward avoiding future risks.
This post was originally published on Dec. 20, 2016, and refreshed on June 26, 2019.
Phil is the general manager of Synopsys’s Black Duck Audit business auditing the composition, security and quality of software for companies on both sides of M&A transactions. He focuses on software due diligence best practices and the M&A market. He also works closely with the company’s law firm partners and the open source community and is a frequent speaker on open source management and M&A. Phil chairs the Linux Foundation's Software Package Data Exchange (SPDX) working group which created an ISO standard for Software Bills of Materials (SBOMs). With decades of software industry experience, Phil held senior management positions at Hammer/Empirix and High Performance Systems, a startup in computer simulation modeling. He began his career in marketing and sales with Teradyne's electronic design and test automation (EDA) software group. He’s also written a book on fly fishing. Phil has an AB and an MS in engineering from Dartmouth College.