Within a month of the launch of GitHub security alerts in November 2017, the security scan had turned up over 4 million bugs in over 500,000 repositories. Let’s dig deeper into the GitHub security alerts numbers.
“By December 1st, over 450,000 identified vulnerabilities were resolved by repository owners either removing the dependency or changing to a secure version,” GitHub recently wrote on its blog.
In general, we support initiatives like this, as they help open source project teams produce more secure code. In some ways, the GitHub security alerts capability is similar to Black Duck CoPilot, our free offering for open source project teams. CoPilot takes a similar approach to GitHub’s in identifying vulnerabilities, but it supports a wider array of languages and package managers. And while the GitHub security alerts are similar to what Black Duck Hub provides to its users, the full Hub solution provides broader support and deeper insight.
The GitHub numbers are interesting; specifically, the 450,000 resolved vulnerabilities out of 4,000,000 discovered. We know that the National Vulnerability Database (NVD) doesn’t contain anywhere near that many disclosures, so how is GitHub arriving at that number? It is likely taking the number of known vulnerabilities and multiplying it across all the forks and versions within GitHub that use the affected code. That makes the metric an interesting one, as I said, but it masks the real problem: knowing which code has been patched in which fork. Consumers of open source projects may themselves create a fork, and that fork could very easily be outside of GitHub’s visibility.
Given that the ‘git’ model used by GitHub lacks any easy way of knowing whether significant patches exist upstream of the point where a fork was created, consumers have no reliable way to know when a patch has been applied. Put another way, the best a user can do in ‘git’ is to proactively compare two branches and determine the differences in code. This implies the user is actively involved in the project, and is also sufficiently skilled to assess the potential impact of merging the security changes into their development branch.
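To make that concrete, a manual check of a fork against its upstream might look like the following sketch. The upstream URL, remote name, and branch name here are illustrative assumptions, not a prescribed workflow:

```shell
# Sketch: manually checking whether upstream has commits (possibly security
# fixes) that this fork lacks. The URL and branch name are hypothetical.
git remote add upstream https://github.com/example/project.git
git fetch upstream

# Commits that exist upstream but not in the fork's current branch
git log --oneline HEAD..upstream/master

# The actual code differences the user would have to assess before merging
git diff HEAD...upstream/master
```

Even with these commands available, deciding which upstream commits are security fixes, and whether merging them is safe, still demands the active involvement and skill described above.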
I would also point out that the phrase ‘the majority belong to repositories that have not had a contribution in the last 90 days’ in the blog post really translates into ‘the majority belong to a code fork that may not have been actively maintained in the past 90 days, but that we have no way of knowing is still in active use.’ There’s a reasonable chance many were forks that now live in a binary repo someplace.
In other words, while GitHub has provided alerts on a select number of projects, and is clearly working to improve security awareness within its substantial user base, one needs to be part of the GitHub ecosystem in order to consume these alerts.
As part of their governance policies, enterprises cache ‘known good’ versions of components they depend upon in local repositories. This is done in part to ensure functional and API compatibility, but it also acts as protection against the external repository being removed for any reason. Hub leverages package manager information when present, but was designed with an understanding that package managers often do not have a complete view of the open source in use, and by extension the associated dependencies. This allows Synopsys to have a full view of when to alert on security issues, independent of language, package manager, repository, build process, component version, and component origin.
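As a hedged illustration of the caching practice described above (the registry URL and the pinned package are hypothetical examples, not part of Hub), an enterprise using npm might point clients at an internal mirror and pin exact, vetted versions:

```shell
# Sketch: routing installs through an internal 'known good' mirror and
# pinning an exact dependency version. URL and package are illustrative.
npm config set registry https://artifacts.internal.example.com/npm/
npm install lodash@4.17.21 --save-exact
```

The `--save-exact` flag records the precise version rather than a semver range, so later installs reproduce the vetted dependency even if newer releases appear.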
I don’t want to detract from the GitHub security alerts effort. As I said, any effort that helps developer teams produce more secure code is a good effort. But it’s also important to understand that while the 4,000,000 / 500,000 / 450,000 numbers make for impressive media stories, GitHub security alerts aren’t a magic bullet that miraculously cures open source security management.
For instance, these challenges still remain:
While GitHub security alerts are similar to what Hub provides to its users, the Synopsys solution provides broader support and deeper insight, including:
Synopsys solutions are designed to help consumers of open source technologies better manage their processes, and we have a suite of services tailored to your role:
Tim Mackey is the Head of Software Supply Chain Risk Strategy within the Synopsys Software Integrity Group. He joined Synopsys as part of the Black Duck Software acquisition, where he worked to bring integrated security scanning technology to Red Hat OpenShift and the Kubernetes container orchestration platform. In this role, Tim applies his skills in distributed systems engineering, mission critical engineering, performance monitoring, large-scale data center operations, and global data privacy regulations to customer problems. He takes the lessons learned from those activities and delivers talks globally at well-known events such as RSA, Black Hat, Open Source Summit, KubeCon, OSCON, DevSecCon, DevOpsCon, Red Hat Summit, and Interop. Tim is also an O'Reilly Media published author and has been covered in publications around the globe including USA Today, Fortune, NBC News, CNN, Forbes, Dark Reading, TEISS, InfoSecurity Magazine, and The Straits Times. Follow Tim at @TimInTech on Twitter and at mackeytim on LinkedIn.