“Vulnerabilities in the Core,” a report from the Linux Foundation and the Laboratory for Innovation Science at Harvard, offers insight into open source use.
Today, the Linux Foundation and the Laboratory for Innovation Science at Harvard released “Vulnerabilities in the Core,” a preliminary census of open source software with data provided by Synopsys and other partner software composition analysis companies. The anonymized data that Synopsys supplied for the report largely comes from open source audits conducted by Synopsys’ Black Duck Audit Services group in conjunction with merger and acquisition transactions and published in the Open Source Security and Risk Analysis report, an in-depth year-to-year snapshot of the state of open source security, compliance, and code-quality risk in commercial software.
The goal of the census report is to identify and measure how widely open source software is deployed within applications by private and public organizations.
As the report notes, “Vulnerabilities in the Core” should be considered a building block for future reports, not a definitive finding on the use of open source packages in commercial applications at this time. This initial report is “the beginning,” as the authors state, “of a larger dialogue on how to identify crucial packages and ensure they receive adequate resources and support.”
Even with that caveat, “Vulnerabilities in the Core” draws conclusions that are noteworthy for anyone using open source in proprietary software.
“The lack of a standardized software component naming schema threatens to stymie efforts by industry and government to better protect themselves from software-based incidents,” the Linux Foundation report argues. As anyone who has tried to find the “correct version” of a given component can attest, many projects share similar names, often with differing functionality.
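One emerging convention that addresses this ambiguity is the Package URL (purl) format, which qualifies a component name with its ecosystem and version in a single string. The minimal parser below is an illustrative sketch only (it ignores purl namespaces and qualifiers, and the package names are invented); production code should use a dedicated purl library.

```python
# Illustrative sketch: parse a simplified Package URL (purl) of the form
# "pkg:<type>/<name>@<version>". Ignores namespaces and qualifiers that
# the full purl spec allows; shown only to make the naming point concrete.

def parse_purl(purl: str) -> dict:
    if not purl.startswith("pkg:"):
        raise ValueError("not a purl: " + purl)
    body = purl[len("pkg:"):]
    type_, _, rest = body.partition("/")
    name, _, version = rest.partition("@")
    return {"type": type_, "name": name, "version": version or None}

# Two hypothetical packages that share a bare name but are entirely
# different components because they live in different ecosystems:
a = parse_purl("pkg:npm/parser@1.0.0")
b = parse_purl("pkg:pypi/parser@1.0.0")
print(a["type"], b["type"])  # npm pypi
```

An ecosystem-qualified identifier like this lets tooling distinguish same-named projects automatically, which a bare component name cannot do.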
The census report found that seven of the top ten most-used open source software packages are hosted under individual developer accounts, exposing those packages to an increased risk of account takeover and of malicious code being inserted into the original open source package, introducing a “backdoor” for hackers once the compromised package is installed.
“In the contexts of both security and general risk management,” the report states, “it is critical that developer accounts be understood and protected to the greatest degree possible.”
Open source hasn’t escaped the problem of legacy technology, in this case components in use that may be several versions behind the most current. As the “Vulnerabilities in the Core” authors point out, it’s not unusual for there to be compatibility bugs between versions, making organizations reluctant to upgrade. There is also the pragmatic argument that the financial and time-related costs of switching to new software aren’t worth whatever benefits a newer version may offer.
But that argument ignores the reality that all technology—in particular, open source—loses support as it ages. The number of developers working to ensure updates—including feature improvements as well as security and stability updates—decreases over time. Often, the report notes, developers choose to dedicate their time and talents to newer packages. As a consequence, legacy software packages become more likely to break, without support on hand to provide fixes.
“Without processes and procedures in place to address the risks created by legacy [open source], organizations open themselves up to the possibility of hard-to-detect issues within their software bases,” the Linux Foundation report concludes.
Those last two conclusions of the report bring up obvious questions: How do you know if you’re using the most current version of an open source package that is patched against known vulnerabilities? What policies and processes can you put in place to address risk? How do you ensure your “golden master” cached version of a component doesn’t get so old as to make patching problematic?
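The first of those questions can be made concrete: a build step can compare each pinned dependency against the newest patched release it knows about. The sketch below compares dotted version numbers as integer tuples, which assumes simple numeric versions with no pre-release tags; the component names and version data are hypothetical, and real tooling should use a full version-parsing library.

```python
# Hedged sketch: flag pinned dependencies that trail the latest patched
# release. Assumes plain dotted numeric versions (e.g. "1.2.0").

def version_tuple(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def outdated(pinned: dict, latest_patched: dict) -> list:
    """Return names of components whose pinned version is older than
    the newest patched version in the reference data."""
    stale = []
    for name, version in pinned.items():
        latest = latest_patched.get(name)
        if latest and version_tuple(version) < version_tuple(latest):
            stale.append(name)
    return sorted(stale)

# Hypothetical manifest and reference data:
pinned = {"libalpha": "1.2.0", "libbeta": "3.4.1"}
latest_patched = {"libalpha": "1.2.5", "libbeta": "3.4.1"}
print(outdated(pinned, latest_patched))  # ['libalpha']
```

Running a check like this on every build turns the "are we current?" question from a manual audit into a routine gate.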
These questions speak to a level of open source governance that varies in commercial software organizations. While corporate standards may dictate that all software be fully patched, applying those standards to software development teams is particularly challenging. The adoption of open source software typically occurs outside the commercial procurement process. Consequently, developers aren’t likely to be aware of patch releases unless they actively participate in the communities developing the components powering their applications.
This lack of awareness of patch releases becomes particularly problematic as development teams grow. Commercial developers may decide which components to select while developing a feature, but they usually aren’t in a position to implement governance controls like patch management. To help address this issue, vendors of software composition analysis tools offer the capability to scan source code periodically. Many also offer proprietary information feeds that augment the normal CVE process used to disclose vulnerabilities.
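The periodic scan described above can be sketched as a cross-reference of an application's component inventory against a vulnerability feed. Everything below is a hypothetical simplification: the advisory IDs, component names, and record format are invented, and real feeds (CVE/NVD records or vendor advisories) carry far richer data such as version ranges, severity scores, and workarounds.

```python
# Illustrative sketch of an SCA-style scan: match each component in an
# inventory against vulnerability records that name a fixed-in version.
# All data structures here are hypothetical simplifications.

def vtuple(v: str) -> tuple:
    return tuple(int(p) for p in v.split("."))

def scan(inventory: dict, feed: list) -> list:
    """Return (component, advisory_id) pairs for components whose
    installed version falls below the record's fixed-in version."""
    findings = []
    for record in feed:
        installed = inventory.get(record["component"])
        if installed and vtuple(installed) < vtuple(record["fixed_in"]):
            findings.append((record["component"], record["id"]))
    return findings

inventory = {"libgamma": "2.0.1", "libdelta": "5.1.0"}
feed = [
    {"id": "ADV-0001", "component": "libgamma", "fixed_in": "2.0.3"},
    {"id": "ADV-0002", "component": "libdelta", "fixed_in": "5.0.9"},
]
print(scan(inventory, feed))  # [('libgamma', 'ADV-0001')]
```

The value of an augmented feed is precisely in what this sketch omits: richer records let the same matching step also surface workarounds and official patch locations, not just a hit.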
The Black Duck Security Advisories (BDSA) available to Synopsys Black Duck customers are an example of such an augmented data feed. The typical BDSA record includes richer technical details than are in CVE records. More importantly, BDSAs also include workaround and exploit information in addition to a link to the official patch location for the vulnerability. This information helps address the real-world requirements of software patch management: timely data to evaluate the impacts of vulnerabilities and triage them, workarounds pending the opportunity to apply a patch, and pointers to official patches, not those cached by third parties.
While having detailed information flow after selecting a component is important, so too is selecting the correct component from the outset. Synopsys recently released a new version of the Polaris Software Integrity Platform that allows developers to identify open source usage issues directly in their IDE. Building on the Code Sight IDE plugin for the Polaris platform, these new capabilities enable developers to proactively find and fix security weaknesses in proprietary code and identify known vulnerabilities in open source dependencies, without switching tools or interrupting their workflow.
When it comes to open source risk management, there’s no such thing as too much information to help developers ensure their software is both secure and bug-free. With vulnerability information at hand during coding, and security advisories to help address patch management, developers building applications can avoid risks that otherwise could go undetected and unpatched.