Posted by Synopsys Editorial Team on April 9, 2014
By now, you’ve surely heard about the Heartbleed vulnerability (CVE-2014-0160) in OpenSSL 1.0.1 through 1.0.1f (inclusive). The vulnerability has been present in OpenSSL since December 2011. Many websites have already covered the deep technical details, so I will not repeat them here. Instead, I will describe the bug at a high level and then discuss its impact and what you should do about it. In the remainder of this post, I’ll refer to “vulnerable versions of OpenSSL” as simply OpenSSL.
Although the bug is in the OpenSSL library, it has nothing to do with the SSL/TLS protocols themselves. It involves code that handles the heartbeat extension (RFC 6520) for TLS/DTLS. The heartbeat messages can be sent even before a TLS handshake is completed. RFC 6520 states:
However, a HeartbeatRequest message SHOULD NOT be sent during handshakes… The receiving peer SHOULD discard the message silently, if it arrives during the handshake.
Due to the use of ‘SHOULD,’ these are recommendations and not requirements. OpenSSL apparently responds to heartbeat requests even before the handshake is completed. So, even servers that require client certificates for authentication are vulnerable.
The vulnerability is in how OpenSSL generates heartbeat responses. According to RFC 6520, a heartbeat response must contain an exact copy of the payload from the heartbeat request, so the client or server responding to a request needs to copy the request contents into the response. Heartbeat requests contain two fields that are relevant here: length and payload. The length field is meant to be the length of the payload. OpenSSL allocates memory for the response based on length and then copies the payload into the response using memcpy(). Now, what happens if the payload actually contains fewer than length bytes? Whatever happens to sit in memory after the request payload gets copied into the response as well. And that’s the vulnerability: attackers can send heartbeat requests whose length field is greater than the actual length of the payload, and the remote host will return length bytes in the response payload. The extra bytes will be other data from the remote host’s memory.
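To make this concrete, here is a simplified C sketch of the pattern (not the actual OpenSSL source). The field layout follows RFC 6520: a one-byte type, a two-byte payload length, the payload, and at least 16 bytes of padding. The version below includes the bounds check that the vulnerable code was missing; deleting that single `if` reproduces the over-read, because memcpy() would then read past the request into adjacent heap memory.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define PADDING 16  /* RFC 6520 requires at least 16 bytes of padding */

/*
 * Build a heartbeat response echoing the request payload.
 * rec/rec_len: the received heartbeat record; out: response buffer
 * (assumed large enough for 3 + claimed payload bytes).
 * Returns 0 on success, -1 if the record must be silently discarded.
 */
int build_response(const unsigned char *rec, size_t rec_len,
                   unsigned char *out, size_t *out_len)
{
    if (rec_len < 1 + 2 + PADDING)
        return -1;                          /* too short to be a valid request */

    const unsigned char *p = rec + 1;       /* skip the type byte */
    uint16_t payload = (uint16_t)((p[0] << 8) | p[1]); /* attacker-controlled */
    p += 2;

    /* The bounds check missing from vulnerable OpenSSL: does the claimed
     * payload actually fit inside the record that arrived on the wire? */
    if (1 + 2 + (size_t)payload + PADDING > rec_len)
        return -1;                          /* silently discard (RFC 6520) */

    out[0] = 0x02;                          /* heartbeat_response */
    out[1] = (unsigned char)(payload >> 8);
    out[2] = (unsigned char)(payload & 0xff);
    memcpy(out + 3, p, payload);            /* safe: payload fits in rec */
    *out_len = 3 + payload;
    return 0;
}
```

A malicious request simply sets the two length bytes to a large value (say, 0x4000) while sending only a few payload bytes; without the check, the response would echo back thousands of bytes of heap memory.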
The attacker can read blocks of memory from the client or server process’s heap. With each request, the attacker can read up to 64KB of memory (the payload length field is 16 bits, so the claimed length can be at most 65,535 bytes). The attacker doesn’t control which 64KB chunk they get with each request, but given enough requests, the attacker can gain access to a lot of sensitive information, including session identifiers, usernames and passwords, credit card numbers, and so on – basically, any information being handled by the client or server process, including complete requests and responses.
Note that this is a very practical attack and exploits are publicly available. Several people have written about getting access to other users’ session identifiers, search queries, passwords, etc. by exploiting this.
There have also been reports of attackers getting access to servers’ private keys. Accessing a server’s private key using this bug is unlikely, because the key is read into memory only once, when the server starts up. There will not be many freed heap blocks at addresses below the private key, and any free blocks there will likely be taken up quickly after the server starts. So, it is very unlikely that an attacker’s request will end up occupying one of the free blocks before the private key unless the attacker happens to start sending requests very soon after the remote server process starts. And if the attacker cannot get a request allocated in memory before the location of the private key, they will not be able to reach it.
It is also possible that in cases where private keys were stolen, a separate vulnerability was exploited to read the key file into memory first.
If your web application is exposed to the Internet, use https://www.ssllabs.com/ssltest/ to determine if you’re vulnerable. If your web application is not exposed to the Internet, or you would rather not test using a publicly accessible service, you can use the Python script at https://gist.github.com/takeshixx/10107280 instead.
Note that your server doesn’t necessarily need to be running OpenSSL itself to be affected. The server or appliance where your SSL connections terminate (e.g., a load balancer or reverse proxy) may be using OpenSSL even if your application server does not.
To see if your client software is vulnerable, use the Python script at https://github.com/Lekensteyn/pacemaker. Run the Python script and use your client to try to connect to the machine running the script.
Of course, if you can easily check which version of OpenSSL your software is using, you don’t need to perform the above steps. However, just checking the version may not be 100% accurate: OpenSSL can be compiled without heartbeat message support (using the OPENSSL_NO_HEARTBEATS option), so running a vulnerable version number doesn’t necessarily mean that you’re vulnerable.
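The affected range is easy to encode. The helper below is purely illustrative (hypothetical, not part of OpenSSL, and it ignores platform or FIPS suffixes on the version string); it classifies a base version string under the simple 1.0.1–1.0.1f scheme, and of course says nothing about builds compiled without heartbeat support:

```c
#include <string.h>

/*
 * Illustrative helper (hypothetical, not part of OpenSSL): classify a
 * base OpenSSL version string as Heartbleed-vulnerable or not.
 * Affected: 1.0.1 through 1.0.1f. Not affected: 1.0.1g+, 1.0.0, 0.9.8.
 */
int openssl_version_vulnerable(const char *v)
{
    if (strncmp(v, "1.0.1", 5) != 0)
        return 0;                   /* only the 1.0.1 line is affected */
    char letter = v[5];             /* '\0' for plain "1.0.1" */
    return letter == '\0' || (letter >= 'a' && letter <= 'f');
}
```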
The first logical step is to patch OpenSSL. If your software is using OpenSSL 1.0.1 – 1.0.1f, you have two options: upgrade to OpenSSL 1.0.1g, or recompile OpenSSL with the -DOPENSSL_NO_HEARTBEATS flag to disable heartbeat support.
If third-party software that you use is vulnerable, you will need to contact the vendor to obtain a fix.
On the server side, it is possible to set up IDS/IPS rules to detect attempts to exploit this issue. For example, see http://blog.fox-it.com/2014/04/08/openssl-heartbleed-bug-live-blog/. However, I would not recommend this approach. If you can’t deploy a fix to your vulnerable server right away, I would highly recommend taking it offline until you deploy the fix.
From the client side, do not try to connect to servers or networks that you don’t trust until you have a fix. That is common security advice, but it is especially relevant given this easily exploitable, publicly known vulnerability. Remember that this vulnerability can be exploited before the SSL handshake is completed (i.e., before any authentication has taken place). So, attackers on untrusted networks can cause your client to connect to arbitrary hosts and extract data from your client’s memory.
As I mentioned above, compromise of private TLS keys is unlikely. However, just to be on the safe side, generate new key pairs (after deploying a non-vulnerable version of OpenSSL, of course) and get new certificates from your certificate authority. Ensure that the certificate authority revokes your old certificates. Now, the unfortunate part is that revocation doesn’t work very well in practice. Most browsers, for example, will probably not perform certificate revocation checks anyway (see http://news.netcraft.com/archives/2013/05/13/how-certificate-revocation-doesnt-work-in-practice.html). So, if somebody has stolen your private key, they may be able to continue using it until your old certificate expires. There is little we can do about this.
As for what you should do regarding other information that may have already been compromised (e.g. your users’ credentials, session identifiers, etc.), there is no easy answer. If you have good application-layer monitoring controls in place, keep an eye out for unusual behavior – whatever that may be for your application. Keep in mind that the stolen information could be used in social engineering attacks against your customer service representatives. So, make sure that they’re aware of what attackers could attempt over the next few days/weeks/months.
An interesting question is what you should do if you’re using OpenSSL 0.9.8, which happens not to be vulnerable to Heartbleed, and are considering an upgrade to OpenSSL 1.0.1. Looking at the OpenSSL changelog, it’s easy to see why OpenSSL 0.9.8 is not vulnerable. TLS/DTLS heartbeat support was added in OpenSSL 1.0.1.
There are compelling reasons to upgrade to OpenSSL 1.0.1 including its support for TLS 1.1 and TLS 1.2, which can help protect you from attacks such as BEAST. If you have been testing with OpenSSL 1.0.1 and are planning to upgrade, there’s no reason not to. Just make sure that you upgrade to OpenSSL 1.0.1g.
My knee-jerk reaction is that there are safer languages, such as Java, that you should use instead of C/C++ whenever possible. If a Java library contained the same mistake, processing the request would have resulted in an ArrayIndexOutOfBoundsException being thrown; under no circumstances would other data in memory have been copied into the response accidentally. As much as I would love to say that we should stop writing C/C++ code because it’s too easy to make dangerous mistakes like this, that’s not a practical solution.
So, how do we solve this? Good software development practices can help. In this case, the developer accidentally trusted a value received over an untrusted network. This type of issue is one of the most common programming mistakes that I see. It can often (but not always) be found using static analysis tools. On the security testing side, fuzzing tools can help. It’s not surprising that Codenomicon researchers helped discover this issue while improving their fuzzing tools. Do your developers receive regular security training? Do you use static analysis tools to find security issues and enforce your coding standards? Do you use fuzzing tools? If your answer to any of these questions was ‘no’, your code may also contain vulnerabilities like this.
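To make the fuzzing idea concrete, here is a minimal sketch (illustrative only, nothing like a production harness, and `parse_heartbeat()` is a hypothetical stand-in for the code under test). It feeds random records to the parser and checks the one safety invariant that Heartbleed violated: never accept a record whose claimed payload length exceeds the bytes that actually arrived.

```c
#include <stdlib.h>
#include <stddef.h>

/*
 * Hypothetical stand-in for the code under test. It must never accept
 * a record whose claimed payload length exceeds the bytes received.
 */
static int parse_heartbeat(const unsigned char *rec, size_t rec_len,
                           size_t *payload_len)
{
    if (rec_len < 3)
        return -1;                          /* no room for type + length */
    size_t claimed = ((size_t)rec[1] << 8) | rec[2];
    if (3 + claimed > rec_len)
        return -1;                          /* the crucial bounds check */
    *payload_len = claimed;
    return 0;
}

/* Feed the parser random records; return 0 if the invariant always held. */
int fuzz_rounds(unsigned seed, int rounds)
{
    srand(seed);
    unsigned char rec[64];
    for (int i = 0; i < rounds; i++) {
        size_t len = (size_t)(rand() % (int)sizeof rec);
        for (size_t j = 0; j < len; j++)
            rec[j] = (unsigned char)rand();
        size_t payload = 0;
        if (parse_heartbeat(rec, len, &payload) == 0 && 3 + payload > len)
            return -1;                      /* overlong claim was accepted */
    }
    return 0;
}
```

Real fuzzers are far more sophisticated (coverage guidance, protocol awareness, crash triage), but even a dumb random loop like this would have flagged the vulnerable version of the check almost immediately.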