Open source election software is exposed to many eyes that check it for vulnerabilities. But does that mean it’s more secure? What are the pros and cons of open sourcing election software?
The drama around Russian meddling with the U.S. elections has pushed election security into the spotlight. There have been many ideas about how to prevent such tampering in the future, including a New York Times op-ed by R. James Woolsey and Brian Fox about the security benefits of open sourcing election software. They assert that open source software exposes the code to the larger developer community, allowing many eyes to comb through that code for security vulnerabilities.
This “many eyes” theory, also known as Linus’s Law, postulates that given enough expertise within a broad developer community, all bugs become readily apparent. The argument is that, compared with the closed nature of software developed by commercial organizations, open source software is more secure because of its transparency. The reality is that all software, whether developed in a transparent manner or otherwise, contains defects, and regardless of available resources and expertise, uncovering a defect can be challenging.
As an example, let’s examine a class of vulnerability known as a “race condition.” Race conditions exist when the result of two parallel operations depends not on the data but on which operation completes first. CVE-2016-5195 (also known as Dirty Cow) is an example of a race condition resulting in a security vulnerability. CVE-2016-5195 impacted the Linux kernel and had been present in all Linux variants for the preceding 11 years. The core issue was known 11 years earlier but was problematic to fix. Linux, of course, is a very popular open source operating system, so the impact of this vulnerability was quite large. In preparing the fix for CVE-2016-5195, Linus Torvalds noted that “what was once a theoretical race condition became easier to trigger.” Effectively, a decision made 11 years earlier manifested itself as a security vulnerability today. So, what changed? The biggest change occurred in the hardware: modern CPUs can execute significantly more parallel operations than the processors of the last decade. In other words, a decision made using software best practices of the time became a major security issue a decade later.
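The anatomy of a race condition can be shown in a few lines of Python. This is a simplified sketch of the general concept, not the Dirty Cow flaw itself (which involved the kernel’s copy-on-write memory handling): several threads perform an unsynchronized read-modify-write on shared state, so the result depends on which thread’s write lands last.

```python
import threading

def run_counter(use_lock: bool, threads: int = 4, iterations: int = 100_000) -> int:
    """Increment a shared counter from several threads.

    `counter += 1` is a read-modify-write sequence. Without a lock, two
    threads can read the same value and one update is lost -- the essence
    of a race condition: the outcome depends on scheduling, not the data.
    """
    counter = 0
    lock = threading.Lock()

    def work():
        nonlocal counter
        for _ in range(iterations):
            if use_lock:
                with lock:
                    counter += 1
            else:
                counter += 1  # unsynchronized: result depends on timing

    workers = [threading.Thread(target=work) for _ in range(threads)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return counter

# With the lock, the result is deterministic: threads * iterations.
# Without it, the result may silently fall short, and whether it does
# depends on the interpreter, the CPU, and the scheduler.
```

Just as with Dirty Cow, the unsynchronized version may behave correctly for years on hardware where the problematic interleaving is rare, then start failing once faster, more parallel processors make the bad timing easy to hit.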
A similar scenario played out in the Equifax breach, where code introduced in 2012 into the Apache Struts project became the attack vector for a major data exfiltration event. Even though the Apache community had resolved the issue months before the breach, Equifax failed to take the necessary steps to locate the vulnerable code within its own applications. Like Equifax, the Federal Election Commission (FEC) has critical data to protect. If the FEC were to open source its software, there are lessons to learn from Equifax regarding security.
While Woolsey and Fox assert that open source components receive more security reviews than proprietary software, relying on the open source community to find and fix vulnerabilities leaves gaps in the application security process. Rather than banking on the open source community to identify and remediate every vulnerability, security efforts should be complemented with dedicated application security tools that automate the process of identifying and remediating vulnerabilities.
Here are three important reasons automated security solutions are necessary additions to the security benefits offered by open source communities.
The FEC is currently using Microsoft’s proprietary operating system software, so it knows exactly how Microsoft addresses application security. Microsoft has now passed the sixteen-year mark for its Trustworthy Computing Initiative, and we can safely state that Microsoft’s efforts have been quite successful. If proprietary election software were open sourced, the security burden would shift to the FEC, because whoever uses open source software assumes responsibility for any associated security risks.
Assuming election software is like most applications, it is already composed of open source components—many of which contain security vulnerabilities. Simply making proprietary code public to the developer community would not give the FEC visibility into the composition of code within the application. A tool capable of scanning the entire application and mapping the open source components to their associated vulnerabilities would give the FEC a far more robust picture of its application security.
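At its core, the mapping such a tool performs can be sketched as a lookup of an application’s component manifest against a feed of known-vulnerable versions. This is a deliberately minimal illustration: real Software Composition Analysis tools also fingerprint binaries, resolve transitive dependencies, and match version ranges rather than exact versions. The feed entries below are illustrative.

```python
# Minimal sketch of what a Software Composition Analysis (SCA) scan does:
# match an application's declared components against known-vulnerable
# versions. The feed below is a hand-built illustration, not real tooling.
VULN_FEED = {
    ("struts", "2.3.31"): ["CVE-2017-5638"],    # Struts RCE class of issue
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],   # Heartbleed-era OpenSSL
}

def scan(manifest):
    """Return {component-version: [CVE ids]} for every vulnerable entry."""
    findings = {}
    for name, version in manifest:
        cves = VULN_FEED.get((name, version))
        if cves:
            findings[f"{name}-{version}"] = cves
    return findings

app_manifest = [("struts", "2.3.31"), ("log4j", "1.2.17")]
print(scan(app_manifest))  # flags the vulnerable Struts version
```

The point of the sketch is that this inventory step is mechanical: once the components are known, checking them against vulnerability data is automatable, which is exactly why relying on manual review alone leaves gaps.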
The most commonly used database for open source vulnerability data is the National Vulnerability Database (NVD). However, relying on the NVD for open source vulnerability data can be risky because its records are not always complete, nor is its data always as timely as one might wish. The NVD can become backlogged, preventing it from disclosing timely vulnerability information. Additionally, the NVD doesn’t disclose every vulnerability; there are proprietary databases with more complete records of open source vulnerability data. To fully understand what is inside its code, the FEC would need such a database to be sure it accounts for every vulnerability.
While open sourcing election software would expose code to more well-intentioned developers, it would open the door for hackers as well. Whenever a new vulnerability is disclosed by the NVD, that disclosure launches a race between hackers and application security teams to find that vulnerability. Whoever wins determines whether the application will be hacked or patched for any given deployment. According to a study done by Recorded Future in 2014, “7.5 is the median number of days it takes for a vulnerability to be exploited as reported by public/web data.” In other words, software users have roughly a week following disclosure to patch a vulnerability, regardless of whether the software is proprietary or open source. Further, IBM conducted a survey with the Ponemon Institute that found it took organizations, globally, an average of 191 days to identify a data breach. Applied to an election cycle, such a delay could directly impact election results.
When data as sensitive as election results is on the line, it’s important that election officials win any security race, which is why they need to be alerted to new vulnerabilities before hackers find them. The stakes are too high for the FEC to hope that well-intentioned developers will fix every vulnerability before hackers can exploit it. Implementing a security tool dedicated to continuously monitoring an application for newly reported vulnerabilities would be a far more proactive approach to quickly patching vulnerabilities.
In 2017 alone, an average of over 30 new vulnerabilities were added to the National Vulnerability Database each day. While the open source community finds many of them, it only takes one vulnerability to cause a breach—which is why the FEC should invest in technologies to catch whatever the developers miss. With an automated security solution, the FEC could set and enforce policies to flag vulnerabilities at scale before they slip through the cracks.
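A policy of that kind can be as simple as a gate that blocks a build whenever any reported vulnerability meets a severity threshold. The sketch below is hypothetical: the data shape, the CVSS threshold, and the specific scores are illustrative rather than drawn from any particular tool or NVD record.

```python
# Hypothetical policy gate: block when any finding's CVSS score meets or
# exceeds the configured threshold. The findings data below is illustrative.
CVSS_THRESHOLD = 7.0  # block anything rated High or Critical

def enforce_policy(findings, threshold=CVSS_THRESHOLD):
    """Return the list of findings that violate the severity policy."""
    return [f for f in findings if f["cvss"] >= threshold]

findings = [
    {"cve": "CVE-2016-5195", "cvss": 7.8},  # Dirty Cow (score illustrative)
    {"cve": "CVE-2015-0001", "cvss": 3.1},  # illustrative low-severity entry
]

violations = enforce_policy(findings)
if violations:
    print(f"build blocked: {len(violations)} policy violation(s)")
```

Because a check like this runs on every build, it scales to 30-plus new disclosures a day in a way that manual triage cannot, while still leaving humans to decide how the flagged items get remediated.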
There is a reason enterprises are implementing automated security tools rather than hiring people to track and patch vulnerabilities manually. Software Composition Analysis (SCA), static application security testing (SAST), dynamic application security testing (DAST), and interactive application security testing (IAST) technologies can be integrated into development or production environments to identify and remediate vulnerabilities at scale.
At the application level, we need to be sure to harness the best technology available to combat potential tampering with our elections in the future. While it is true that open sourcing election software would provide a larger community to look for security flaws, the stakes are certainly high enough to add an automated security layer to find what developers might miss. Enterprise organizations are moving toward DevSecOps: an approach to application security designed to build security in as software is developed and to feed security lessons from deployment back to development teams. By integrating tools into their software development life cycle, they prevent vulnerable code from making its way into production. While the FEC is in the security hot seat, it would benefit from learning from others safeguarding sensitive data.
Open source voting applications are already playing a role in elections in New Hampshire. San Francisco, Los Angeles, and Travis County, Texas, are allocating funds to move toward open source voting systems as well. If the FEC does replace proprietary software with open source, it should consider automated security tools in addition to the open source community to provide a more complete application security picture.
Transparency of development in election systems is something we should all want. Having access to source code and seeing the individuals making changes in the code should increase confidence in the election process. Open source software has a strong role to play in the future of election software.
Tim Mackey is the Head of Software Supply Chain Risk Strategy within the Synopsys Software Integrity Group. He joined Synopsys as part of the Black Duck Software acquisition, where he worked to bring integrated security scanning technology to Red Hat OpenShift and the Kubernetes container orchestration platforms. In this role, Tim applies his skills in distributed systems engineering, mission critical engineering, performance monitoring, large-scale data center operations, and global data privacy regulations to customer problems. He takes the lessons learned from those activities and delivers talks globally at well-known events such as RSA, Black Hat, Open Source Summit, KubeCon, OSCON, DevSecCon, DevOpsCon, Red Hat Summit, and Interop. Tim is also an O'Reilly Media published author and has been covered in publications around the globe including USA Today, Fortune, NBC News, CNN, Forbes, Dark Reading, TEISS, InfoSecurity Magazine, and The Straits Times. Follow Tim at @TimInTech on Twitter and at mackeytim on LinkedIn.