Posted by Robert Vamosi on June 21, 2016
Two years after the vulnerability in OpenSSL known as Heartbleed, there remain valuable lessons to be learned, both about how vulnerabilities are discovered and how the security community should respond.
This week my guest is Billy Rios, founder of the embedded security company WhiteScope, for part two of our discussion about Heartbleed, two years later.
In this podcast, Billy discusses some of the good and the bad that comes from vulnerability disclosures in general. For example, Heartbleed was a joint discovery between Codenomicon, which is now part of Synopsys, and Google. I asked Billy whether this was common or unusual.
Vamosi: In addition to the data you're going to present, there are also some unique qualities about Heartbleed that have to do with vulnerabilities in general. For example, Heartbleed was a joint discovery between Codenomicon, which is now part of Synopsys, and Google. You've said you've heard of other cases. Is this common or unusual?
Rios: You know, I think it's more common than people think. I had a chance to work at Microsoft. I was the security program manager for Internet Explorer. Trust me, I saw my fair share of bugs, right. It wasn't uncommon for two people to discover the same bug. So people who have been on the engineering side have certainly seen this before. But I think it surprised a lot of people when they saw that, for such a high-profile bug, two different organizations had independently discovered the same bug at almost the same time. And what's even more interesting is that the organizations, from what I understand, actually used two very different approaches to discover the same bug. Why and how this happens, I don't really understand. This is a discussion we had when I was at Microsoft. We actually saw it more frequently than you would think, so we were trying to understand what characteristics would cause what we called a research collision, where two researchers discover the same bug. From the perspective of the defender, or if you are on the engineering team, this raises a lot of challenges, because when you are dealing with disclosure and timelines and trying to coordinate certain things, if you are dealing with two different researchers who have two different personalities, or who may be going through two different channels to report the issue, then you have to manage that as well. But it's something that's interesting. Every time I saw this at Microsoft it raised my eyebrows as to how it could happen, and I thought it was very interesting to see it happen with such a high-profile bug like Heartbleed.
Vamosi: The fact that two different researchers reported it prompted the OpenSSL organization to move faster to get a patch out there and also make the disclosure. I know when it was initially out there, its Common Vulnerability Scoring System (CVSS) score only ranked it as a five, and yet we've discussed previously how this was a rather significant, almost a beautiful vulnerability. How could it get such a mediocre criticality score and yet be so serious?
Rios: That's a great point. There's a lot we can learn from Heartbleed, that's for sure. There are still 200,000 servers out there that are not patched, right, so that kind of represents a failure in the way that we patch at internet scale. Our response in terms of having a logo, that's very interesting, and it allowed us to get a lot of traction, a lot of visibility, and board-level attention for these types of issues. But when you look at something like the CVSS base score, like you said, that is the Common Vulnerability Scoring System, right, a lot of organizations use those scores to do risk management. If you look at organizations with mature vulnerability management processes and capabilities, they'll use these scores to help them decide what to do. The highest you can have is a CVSS score of 10, so if an organization with a good process gets something with a CVSS score of 10, they pretty much drop everything and try to patch or mitigate as soon as possible, and then the response time and response behavior sort of lessen as the score goes down. Heartbleed had a CVSS score of 5. If there had been no logo and no webpage, it wouldn't have gotten the attention it deserved. It honestly did deserve the attention that it got. A lot of people probably wouldn't have patched it. And this was a pretty serious bug, and it shows a bit of a failure in some of the automation that we have. If you rely purely on the CVSS score, you will probably make the wrong risk decision. Maybe you are a one-off or an exception, but for the most part, if you were a bank, Heartbleed was a very, very serious risk to your infrastructure, not just from a technology standpoint but from a political and corporate standpoint as well. From a pure technology risk standpoint, it was a 10.
It was going to get your SSL key stolen, your data stolen; bank accounts and usernames and passwords were going to get stolen because of this bug, right, so it shows you that we still have a lot to learn about how we manage vulnerabilities. If you were a bank and you relied purely on the CVSS score, you probably would have made the wrong decisions as to how you should handle Heartbleed. That's something we definitely need to work on, and some people may even say it's a failure in the way that we do things if you're so rigid and stuck to process. The good thing is that the bug did have a lot of visibility; I think people recognized that the CVSS score probably wasn't a good indicator of real risk. CVSS has a certain set of guidelines to help you assign a score. Certain things have to be present, like code execution, the ability for someone to run their own malware on your server and things like that, for the CVSS score to go higher. Heartbleed didn't have that. I think it shows that for most organizations it doesn't matter whether or not someone can execute code on the server. It just matters whether they can grab the data, right? So I'm glad that a lot of organizations took the CVSS score, realized it wasn't accurate for their particular industry, and moved accordingly. In some sense it was a failure in infosec: with CVSS, which is such a widely used metric, we probably categorized this risk incorrectly. But I'm also glad there were enough security teams who had their heads screwed on straight to say, no, we're going to move on this bug with a sense of urgency. We're going to patch this as soon as we can.
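To make the data-theft mechanism Rios describes concrete: Heartbleed let a client request a heartbeat echo while declaring a payload length larger than what it actually sent, and the server copied that many bytes back without a bounds check. A toy Python sketch of that flawed pattern follows; this is not OpenSSL's actual code, and the buffer contents and secrets are invented purely for illustration.

```python
# Toy simulation of the Heartbleed over-read (CVE-2014-0160).
# Not OpenSSL's code -- just an illustration of why trusting an
# attacker-supplied length field leaks adjacent memory.

# Pretend this bytearray is the server's heap: the heartbeat payload
# sits right next to sensitive data (all values here are made up).
server_memory = bytearray(b"ping" + b"|SECRET_PRIVATE_KEY|admin:hunter2")

def heartbeat(payload_offset: int, claimed_len: int) -> bytes:
    # The flawed pattern: echo back `claimed_len` bytes starting at the
    # payload, without checking claimed_len against the real payload size.
    return bytes(server_memory[payload_offset:payload_offset + claimed_len])

# Honest client: claims the true payload length (4 bytes of "ping").
print(heartbeat(0, 4))                         # b'ping'

# Malicious client: sends 4 bytes but claims 64, and the reply drags
# neighboring "memory" (keys, credentials) along with it.
leak = heartbeat(0, 64)
print(b"SECRET_PRIVATE_KEY" in leak)           # True
```

No code ever ran on the "server" here; the attacker only read memory, which is exactly why the bug scored low on code-execution-weighted metrics while still being catastrophic.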
Vamosi: Is it possible there might be other serious vulnerabilities out there with a mediocre or low CVSS score?
Rios: There certainly are. Like I said, there are criteria for getting bumped up; it's not a subjective decision where someone puts a finger in the air to see which way the wind is blowing before assigning a CVSS score. CVSS has a certain set of definitions that say if you meet this criteria, your CVSS score is going to be higher, right? So remote code execution always pushes a CVSS score up high, especially if it is an unauthenticated exposure. Things like information theft, which Heartbleed was, where you could actually steal the contents of memory from a server, do not necessarily represent code execution, but I can tell you right now that if there is a vulnerability much like Heartbleed that would allow you to touch a bank server and steal the private key off the server, they will consider that an extremely high risk regardless of what the CVSS score is. It's a great scoring mechanism, don't get me wrong. I'm not saying that it is bad; it's great. I've been involved in many different vulnerability management programs, and CVSS scores are an important piece of those programs. But we should not be so rigid in our decision making that it is the only metric used to determine what our process should be. We still need vulnerability managers who can understand vulnerabilities, understand how they affect your business and what that means, and adjust the response appropriately. We can't just blindly follow scores like this; to understand what vulnerabilities mean to our organizations, we have to apply some brain power once in a while.
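The mechanical nature of the scoring Rios describes is easy to see in the CVSS v2 base-score equations. The sketch below is my own transcription of the published formula, with metric weights taken from the CVSS v2 specification; Heartbleed's vector (AV:N/AC:L/Au:N/C:P/I:N/A:N) was an unauthenticated, network-reachable, partial confidentiality leak, which caps out around 5.0, while full remote code execution with complete impact reaches 10.0 under the same arithmetic.

```python
# CVSS v2 base-score formula, transcribed from the published equations.
# Metric weights: AV Network=1.0, AC Low=0.71, Au None=0.704,
# C/I/A: None=0.0, Partial=0.275, Complete=0.660.

def cvss2_base(av: float, ac: float, au: float,
               conf: float, integ: float, avail: float) -> float:
    impact = 10.41 * (1 - (1 - conf) * (1 - integ) * (1 - avail))
    exploitability = 20 * av * ac * au
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Heartbleed: AV:N/AC:L/Au:N/C:P/I:N/A:N -- partial confidentiality only.
print(cvss2_base(1.0, 0.71, 0.704, 0.275, 0.0, 0.0))      # 5.0

# Same exploitability, but Complete impact across C/I/A (e.g. an
# unauthenticated remote code execution bug) maxes the score out.
print(cvss2_base(1.0, 0.71, 0.704, 0.660, 0.660, 0.660))  # 10.0
```

Because integrity and availability impact were scored as "None" and confidentiality as only "Partial," no amount of real-world severity could push Heartbleed's base score higher, which is exactly the rigidity Rios is warning about.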
Vamosi: And I appreciate that there are criteria associated with CVSS. Flipping it around to the other side, though, after Heartbleed there have been other campaigns and logos and names, for example POODLE and GHOST. Are we creating noise in the other direction now? We've figured out how to isolate CVEs and say pay attention to this one, this one's important, by giving it a name. But now other people are coming along and saying, well, mine is just as important as yours, and look, it has a name.
Rios: Right, yeah. I've done a lot of research, so I'll tell you this: every bug that I've discovered is definitely my baby, and it is the most important thing in the world. And that's okay. Like I said, if you're a bank or a government doing vulnerability management, you have to understand that there are all these signals, and you can't stop them. Researchers publishing research, firms creating logos and webpages for the vulnerabilities they have discovered, and things like that, that is going to happen. That is just the reality of the world we live in. It's really up to your vulnerability managers to understand when something is important to the business and when it is not. They have a lot of different signals they can use, some more noisy than others, but understanding what a vulnerability means to a business still takes a person at the end of the day. We can handle a lot of things via CVSS and other scoring mechanisms, but at the end of the day you want someone who understands your business to be running your vulnerability management program, because they will have to make decisions like this. As we saw with Heartbleed, there could be another day where your vulnerability manager has to talk to your board or your CEO about a vulnerability, and just understanding the technical pieces isn't enough; you have to understand how that vulnerability impacts your business. So walking up to someone and saying, I know this doesn't have a logo or a big campaign behind it, but it actually represents a huge risk to our business and I need an exception to move through the vulnerability process faster than we normally would, that is a discussion and an argument you have to make to business owners.
And maybe on the flip side, the vulnerability gets some unusual attention but really isn't a risk to your business, and you might have to explain to someone: hey, we're going to keep this in our normal vulnerability management process and not accelerate our patching, because this doesn't represent a real risk to our business. Understanding that takes someone who understands not just the technical pieces but the business as well.
Vamosi: I'm glad you're arguing that it's a hybrid of automation and human intelligence, that you can't just turn it all over to automation, that you still need a human to decide whether something is critical in the context of your business model as opposed to the general internet. I'm glad to hear that.
Rios: Yeah, certainly. And if you are an executive, a CEO, on the board, not a techie or a geek or something like that, you have to trust the people who are running your vulnerability management program. They are not going to be able to teach you vulnerabilities in an hour or a thirty-minute briefing; you just can't do that, right? So when someone says, hey, I know this has a CVSS score of 5 but this is really a ten for us, you have to trust them when they make those decisions. And when someone says, hey, I know this thing has a logo and it was just on CNN, but we're going to handle it through our regular vulnerability management process, you have to trust them too. On the flip side, if you're a vulnerability manager, you have a responsibility to the business as well. Understanding the technical details behind a vulnerability and knowing what the CVSS score is, all that sort of stuff is important. But you have to understand what those things mean to your business, because you are going to be asked one day what a vulnerability means to the business, and you are going to have to explain the risk to people who are not technical or don't have an engineering background. It's a very, very important skill set to have.
Vamosi: Well, Billy, thank you for your time today. I appreciate it.
Rios: Yeah, no worries. Anytime.