Posted by Robert Vamosi on April 5, 2016
It’s been two years since a critical vulnerability, CVE-2014-0160, better known as Heartbleed, was first disclosed. The flaw, found in certain older versions of OpenSSL, was in the handling of Heartbeat Extension packets; the heartbeat protocol is used to determine whether the other machine in a transaction is still alive, in this case over the encrypted connection between a client and a server. The flaw affected hundreds of thousands of popular websites, and it allowed an attacker to request more than a simple echoed response; it could allow for the leakage of passphrases and encryption keys.
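The mechanics of the flaw can be sketched in a few lines. The actual bug was in OpenSSL’s C code; the following hypothetical Python model only illustrates the core mistake of trusting an attacker-supplied length field:

```python
# Hypothetical sketch of the Heartbleed flaw (the real bug was in OpenSSL's
# C heartbeat handling, not Python). A heartbeat request carries a payload
# and a claimed payload length; the vulnerable code echoed back claimed_len
# bytes without checking it against the actual payload size.

# Simulated process memory: the received payload sits next to secrets.
MEMORY = b"PING" + b"...secret key material..."

def heartbeat_response(payload: bytes, claimed_len: int) -> bytes:
    # Vulnerable: trusts the attacker-supplied length field and reads
    # past the end of the payload into adjacent memory.
    return MEMORY[:claimed_len]

def heartbeat_response_fixed(payload: bytes, claimed_len: int) -> bytes:
    # Fixed: discard requests whose claimed length exceeds the actual
    # payload, as the OpenSSL patch does.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

# An honest request echoes only the payload...
print(heartbeat_response(b"PING", 4))         # b'PING'
# ...but an over-long claimed length leaks adjacent memory.
print(heartbeat_response(b"PING", 28))
print(heartbeat_response_fixed(b"PING", 28))  # b''
```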
Heartbleed was co-discovered by the Synopsys research team in Finland (formerly Codenomicon), and independently discovered by Neel Mehta of Google’s security team.
In this Synopsys Code Review podcast, Rauli Kaksonen, Global Director with Synopsys, explains how researchers with Synopsys in Finland (formerly Codenomicon) were performing a routine test of a new feature in Defensics, a fuzz testing tool available from Synopsys, and how that led to the independent discovery of Heartbleed. At that time more than 600,000 IP addresses were vulnerable. Two years later, roughly one third of the original list of the world’s vulnerable IP addresses remains vulnerable to Heartbleed.
You can listen to the podcast on SoundCloud or read the transcript below.
In this podcast, host Robert Vamosi, CISSP and Security Strategist with Synopsys, spoke with Kaksonen by phone and asked what was going on with Defensics at the time of Heartbleed’s discovery.
Kaksonen: At the time we were developing a feature called SafeGuard for the Defensics test suites, and we used OpenSSL and some other open source software in the lab because they are high-quality, widely used pieces of software. When we develop features we need to test them ourselves, so open source is a great target for that. And if we find problems we can make the world a better place by reporting them and getting them fixed, for a positive impact.
Vamosi: Okay, so for someone who doesn’t know about Defensics, what is it and how do you go about testing it?
Kaksonen: Defensics is a dynamic security testing tool set. I think most people know this as fuzzing. Defensics is a model-based, smart fuzzer; it generates a large number of different test cases with which we try to find inconsistencies in the tested system, usually networked systems. By finding the problems and getting them fixed, it raises the bar for compromising the security of that tested system.
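Defensics is model-based, meaning it understands the protocol it is testing. As a rough illustration of the general fuzzing idea only, and not of Defensics’s actual algorithm, a naive mutation fuzzer might look like this:

```python
# Illustrative mutation fuzzer: takes a valid "seed" message and produces
# slightly corrupted variants to stress a parser. This is a hypothetical
# sketch of the general technique, not how Defensics works internally.
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip, insert, or duplicate bytes in a valid seed message."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        choice = rng.random()
        if choice < 0.5 and data:
            # Flip a random byte.
            i = rng.randrange(len(data))
            data[i] ^= rng.randrange(1, 256)
        elif choice < 0.8:
            # Insert a boundary-value byte (0x00, 0x7F, 0xFF).
            data.insert(rng.randrange(len(data) + 1),
                        rng.choice([0x00, 0x7F, 0xFF]))
        else:
            # Duplicate a slice to stress length-field handling.
            data.extend(data[: rng.randrange(len(data) + 1)])
    return bytes(data)

rng = random.Random(42)                      # deterministic for the demo
seed = b"\x18\x03\x02\x00\x03\x01\x00\x04"   # example TLS-like record bytes
cases = [mutate(seed, rng) for _ in range(5)]
for c in cases:
    print(c.hex())
```

Each mutated case is then sent to the system under test while a monitor watches for crashes or anomalous responses.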
Vamosi: So you mentioned a test suite. What are the criteria you use to develop a particular suite?
Kaksonen: We pick test suites to cover standard protocols implemented by different companies and projects, so that by helping those implementations reach high quality we make an impact. Of course we also have customers who have demands for the protocols they want to have tested.
Vamosi: And so at the time, when you were looking at OpenSSL, were you looking specifically at the heartbeat protocol, or at a series of protocols that OpenSSL would cover?
Kaksonen: We were actually testing TLS protocol which actually contains the heartbeat functionality as one of the subprotocols if you wish.
Vamosi: And what in the results you were seeing suggested that there was a vulnerability?
Kaksonen: It was all about the SafeGuard functionality. We designed SafeGuard at the time to detect vulnerable behavior beyond the usual crashing of the service. The SafeGuard functionality detected that OpenSSL responded with a much larger response packet than what was expected, and that triggered us to look into the responses from OpenSSL.
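The kind of check described here can be sketched very simply: a heartbeat reply should never be larger than the payload that was sent, so an oversized response is flagged as anomalous. This is an illustrative model of the idea, not the actual SafeGuard implementation:

```python
# Hypothetical sketch of a response-size anomaly check: flag any heartbeat
# reply that is larger than the payload it should echo. Not the actual
# SafeGuard code, just the general idea.

def response_is_anomalous(sent_payload: bytes, response: bytes) -> bool:
    """A heartbeat reply should echo at most the payload that was sent."""
    return len(response) > len(sent_payload)

# A well-behaved server echoes only the payload...
assert not response_is_anomalous(b"PING", b"PING")
# ...while a vulnerable one returns far more data than was sent.
assert response_is_anomalous(b"PING", b"PING" + b"\x00" * 60000)
```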
Vamosi: It has been suggested that an automated process would catch this but a human process wouldn’t. How would an automated process single out that this was an abnormality?
Kaksonen: The benefit of an automated process is that it doesn’t get tired and it runs much faster. Defensics can run for days, or weeks, or months, and generate hundreds of millions of test cases. As it happened here, only a few of those test cases, a few out of millions, triggered this behavior, so it would hardly have been possible to find them without automation.
Vamosi: And so how did you go about verifying and isolating that this was a problem with the heartbeat protocol?
Kaksonen: Well, we were going through the results from the early SafeGuard functionality and noticed the difference between the responses for this particular test case. Then we looked at the logs and realized that OpenSSL was responding with more content than it definitely should. So it was a manual verification after the automation had found the basic symptom.
Vamosi: After it was confirmed that there was something there what was the next step?
Kaksonen: Once we had convinced ourselves that this was a vulnerability, we took two approaches. First we wanted to fix our own servers, because we were also using OpenSSL to protect our data. And we also wanted to make sure that there was an appropriate fix for the rest of the world, so we contacted CERT-FI, and they then used their relationships with the security world to get the word out that there was a problem which needed to be addressed as quickly as possible.
Vamosi: So CERT-FI was looking at the vulnerability and determining if it was valid and reaching out to the appropriate parties. How did the name Heartbleed come about?
Kaksonen: Well, as it was the heartbeat functionality that contained the vulnerability, one of our engineers thought that, since it leaks data, it is kind of a heartbleed. A logical name for it.
Vamosi: Once the patch was out in public, a large number of affected parties took advantage of it, mitigated the problem, and we went from something like 600K vulnerable IP addresses down to 300K in a short amount of time, about one month. But today we still have roughly 200K vulnerable IP addresses. What are some reasons you think these are still vulnerable today? What might be keeping them from being patched or fixed?
Kaksonen: It’s very hard to say. I’m certain there are many reasons, maybe some of those are not actively maintained by anybody.
Vamosi: I understand that it is an implementation flaw within OpenSSL. Within TLS the protocol itself is fine; it’s just the way it was implemented within OpenSSL.
Kaksonen: To tell you the truth, I think TLS, and its predecessor SSL, are extremely complex ways to provide cryptographic protection for data. I would argue that there is a sort of design flaw in those protocols. They could be much simpler, and then they would be easier to implement. As it is, they are fairly complicated. It requires a lot of work to make them function, and that leaves room for mistakes.
Vamosi: Briefly, what is the purpose of the heartbeat protocol?
Kaksonen: The heartbeat protocol is intended to verify the proper functioning of the connection between client and server. I think it’s most useful when TLS is used for datagrams, for instance.
Vamosi: Heartbleed wasn’t the only vulnerability discovered through Defensics testing. Are there others?
Kaksonen: Over the years we have found many, many vulnerabilities in OpenSSL and in many other open source libraries, and we have always taken the same path: we want them to be fixed. That’s really a priority for us. Not to mention that our customers have of course used our tools over the years in many, many contexts, so we have no idea how many vulnerabilities have been fixed using Defensics over the years.
Vamosi: And how is the relationship with open source organizations? Are they friendly? Or are they kind of resistant to the vulnerabilities that you bring to them?
Kaksonen: The open source community is very receptive to get vulnerability information, and usually very fast to fix it. I haven’t seen problems there.
Vamosi: Why do you think the open source product OpenSSL had this vulnerability?
Kaksonen: Any real world public software has bugs in it, so in a sense it is a given. Many people felt that way about OpenSSL as well: it is open source and widely used, surely people have reviewed it many times already, and surely the other users have made sure that it is high quality, so I can trust it. As it turns out, OpenSSL contains a lot of different functionality, even beyond the basics of what is required; it is fairly complex and has accumulated over long periods of time. So people are realizing that if something is open source it doesn’t mean that it is necessarily high quality, and it doesn’t necessarily mean it has been reviewed by anybody. I think any software, including open source software, has bugs, and you should be critical when you are using it. If possible, spend time yourself reviewing it, or have a team review those components, because being open source is not really a guarantee of high quality. Of course, being open source means that you can take a look and form an opinion about whether it is good software or bad. That’s the benefit of open source. Not that it is high quality.