Posted by Robert Vamosi on March 20, 2017
Earlier this month WikiLeaks announced it had in its possession a cache of zero days allegedly from the Central Intelligence Agency. These unpatched vulnerabilities, it said, could affect Apple and Android devices (including TVs). It is suspected that exploitation of these vulnerabilities could allow the spy agency, or anyone else who knows about them, to surveil targets by activating microphones and receivers as well as by eavesdropping on communications.
The WikiLeaks disclosure coincides with a new zero day report from the RAND Corporation. The study, "Zero Days, Thousands of Nights: The Life and Times of Zero-Day Vulnerabilities and Their Exploits," aims to shed light on this dark practice. The report authors, Lillian Ablon and Andy Bogart, had access to an unnamed private arsenal of zero day vulnerabilities covering the interval from 2003 to 2016. Using that data set, they were able to derive some interesting insights into the zero day world.
Zero days are simply software vulnerabilities that have no public patch or workaround. They may not even be known to the software vendor, but they are certainly known to bad actors on the internet. There's even a market in which criminals, militaries, and governments purchase them, estimated at between $4 million and $10 million annually.
To some, zero days have value because, as mentioned, they allow remote code execution or electronic surveillance without detection for long periods of time. The RAND report sheds light on how long a zero day vulnerability might remain useful before public disclosure. It also touches upon why such arsenals exist at all.
Whether or not to disclose a vulnerability is known as an "equities" problem. Put simply, the person or entity in possession of a zero day must decide whether to disclose it so it can be patched, or to retain it. There's an argument that if everyone disclosed their zero days, there would be no value in keeping an arsenal. In a 2014 blog post, Michael Daniel, the former cybersecurity coordinator for the White House under President Obama, said, "Building up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest."
Instead, the government set up the Vulnerabilities Equities Process, or VEP. An article in IEEE Spectrum neatly summarizes the process: an Equities Review Board decides on a case-by-case basis whether the U.S. government should disclose a given zero day.
Ultimately, someone else will find the same vulnerability. When two or more researchers discover the same vulnerability, the industry calls the event a "collision"; the two findings are also said to "overlap." The report finds that over a 90-day interval there is a 0.87 percent overlap between vulnerabilities publicly and privately known. Over a 365-day interval, the median overlap is 5.76 percent. Played out against the full 14-year interval of the data set, this results in a 40 percent overlap.
Thus, a large share of zero day vulnerabilities will be disclosed publicly given enough time. The researchers found no characteristic that predicts the lifespan of a given vulnerability, although they did observe that vulnerabilities remained undisclosed longer before 2008 than after. The report authors don't state it, but 2008 might mark the intersection with the increasing use of static code analysis and fuzz testing tools by vendors, enterprises, and researchers.
The RAND researchers found that within their data set, exploitable vulnerabilities have on average 6.9 years of use before public disclosure, with the average life for a given exploit falling between 5.39 and 8.84 years. They were further able to determine that within the first year and a half of a zero day vulnerability's lifecycle, 25 percent will be disclosed publicly. Of the remainder, only 25 percent survive beyond nine years.
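These figures are roughly consistent with a simple exponential survival model. As a hedged illustration (the model and its single rate parameter are my assumption, derived from the 6.9-year average; the report itself fits more careful survival curves), the numbers can be checked with a few lines of Python:

```python
import math

# Illustrative assumption: model zero day lifetimes as exponentially
# distributed, with the mean set to the report's 6.9-year average life.
MEAN_LIFE_YEARS = 6.9

def survival(t_years: float) -> float:
    """Fraction of zero days still undisclosed after t_years."""
    return math.exp(-t_years / MEAN_LIFE_YEARS)

# Fraction disclosed within the first 1.5 years (report: ~25 percent)
print(f"disclosed by 1.5 years: {1 - survival(1.5):.1%}")
# Fraction still alive past 9 years (report: ~25 percent)
print(f"surviving 9 years:      {survival(9.0):.1%}")
```

The model yields roughly 20 percent disclosed in the first year and a half and roughly 27 percent surviving past nine years, in the same ballpark as the report's quartiles, which suggests disclosure behaves approximately like a constant-rate process.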
Given the likelihood of independent discovery, what value remains in keeping zero days in the first place? The authors make a few interesting observations.
First, finding a zero day vulnerability by itself is not enough. A second level of understanding is necessary: whether the zero day vulnerability is even capable of being exploited. Not all vulnerabilities are useful on their own; some require the assistance of other vulnerabilities to become exploitable. Sometimes that determination takes time.
The RAND researchers estimate a median time of 22 days to weaponize a zero day vulnerability. There is expense here. Hiring half a dozen exploit developers, for example, each making in the "mid to high six-figures," might still net only $1 million to $2 million on the zero day market, according to the researchers. One company told the researchers that 2015 was in fact a negative payout year, yet it continued to create exploits because doing so was, in its words, "a labor of love."
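The arithmetic behind that negative payout is easy to sketch. The specific salary and revenue figures below are illustrative assumptions drawn loosely from the ranges the researchers quote, not numbers stated in the report:

```python
# Back-of-the-envelope: cost of an exploit development team versus
# plausible zero day market revenue. All figures are illustrative
# assumptions, not data from the RAND report.
NUM_DEVELOPERS = 6
SALARY_PER_DEV = 600_000          # "mid to high six-figures" each
ANNUAL_COST = NUM_DEVELOPERS * SALARY_PER_DEV

REVENUE_LOW = 1_000_000           # low end of "$1 million to $2 million"
REVENUE_HIGH = 2_000_000          # high end

print(f"annual team cost:  ${ANNUAL_COST:,}")
print(f"best-case shortfall: ${ANNUAL_COST - REVENUE_HIGH:,}")
```

Under these assumptions the team costs $3.6 million a year against at most $2 million in revenue, so even the best case runs well into the red, which is consistent with the "labor of love" remark.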
Interestingly, the RAND researchers found it is cost effective to retain apparently non-exploitable zero day vulnerabilities. One need only periodically recheck them to see whether they have become exploitable, rather than hunt for new zero days. The researchers cited an example.
A group of zero day researchers found a software design flaw that, when exploited, could allow remote code execution. However, this vulnerability needed to be combined with another that enabled write capabilities before it became a truly effective exploit. This is a gamble: the second vulnerability may never be found, or the first may be publicly discovered in the meantime. In this case, it paid off.
Once an exploit has been created, other questions remain. Is it stable (will it crash the target system and therefore be found)? Or, more importantly, is it noisy (i.e. detectable)?
Testing an exploitable zero day in an operational setting is not always possible. If you own a secret vulnerability, you risk exposure by trying it out in the real world. At the same time, most people retaining zero days don't have the resources to fully simulate their target environments. That said, the researchers report that some zero day researchers build spreadsheets matrixing the different software versions and configurations tested, running each combination a thousand times or more.
The RAND researchers also cautioned against calling any zero day vulnerability "alive" or "dead." They point out that even "zombies" (zero days that have been patched) can still provide value, because some organizations don't patch their systems regularly. "Software rot" occurs when software code is not maintained or updated; a vulnerable component nestled within a software application might never receive the patch if the vendor has not continued its maintenance. The report authors did not state it, but software composition analysis should be able to determine the vulnerability of any component acquired through the cyber supply chain.
And, as previously noted, a non-exploitable vulnerability today might change status when combined with another vulnerability, or when moved to another platform. The RAND researchers found that legacy software being used on the Internet of Things (IoT) today might be vulnerable in previously unforeseen ways. They note that many IoT devices lack any means of updating their software, creating opportunities for these zombie zero days.
As thorough as the 133-page document is, the RAND researchers acknowledge there is still room for future research. They note, for example, that zero days found in Linux might have longer lifetimes. They speculate about why but admit they don't have enough data to say for sure.
The RAND report should be a wake-up call for any organization. For those developing their own code, commit to steady improvement of the secure software development lifecycle, from secure architecture and design through post-release testing. For organizations that acquire pre-built software, commit to software composition analysis, pen testing, and even fuzz testing to make sure the software in use, and the networks it runs on, are as secure as possible.
After all, the report authors did see a shortening of zero day lifespans after 2008. Let's see whether even more software testing can shorten them further.