Over the past few years, we’ve seen a variety of TLS vulnerabilities steadily surface. In general, we brand each one as “just another TLS vulnerability,” but the intricacies of each are rather distinct, though not horribly convoluted. Let’s walk through a few together.
Heartbleed affects the OpenSSL library’s implementation of a TLS extension: the TLS heartbeat. A heartbeat works as follows: The client (or server) sends some data in a heartbeat request to its peer, either to verify that the peer is still present or to keep the connection alive, and the peer echoes that data back to the sender. If you want the nitty-gritty details of the heartbeat extension, feel free to read the IETF’s description in RFC 6520. Exploitation of Heartbleed, a faulty heartbeat implementation, can allow an attacker to read up to 64 KB of memory at a time from a peer running a vulnerable version of OpenSSL. Here’s how:
When sending a heartbeat request, the sender controls all fields in the request, including some data, as well as a field that provides the length of this data—meaning the onus is on the sender to communicate the number of bytes being sent. In pseudocode, this looks like the following:
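A minimal Python sketch of an honest request (the wire layout loosely follows RFC 6520, with the padding field omitted for brevity; the function name is invented):

```python
import struct

HEARTBEAT_REQUEST = 0x01

def build_heartbeat(payload: bytes, claimed_length: int) -> bytes:
    # Wire layout: type (1 byte) | payload_length (2 bytes, big-endian) | payload.
    # Note that the sender fills in payload_length itself -- nothing forces
    # it to match len(payload).
    return struct.pack("!BH", HEARTBEAT_REQUEST, claimed_length) + payload

# An honest request: the payload is "hello", and we truthfully claim 5 bytes.
request = build_heartbeat(b"hello", 5)
```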
This is handled by the peer roughly as “You just sent me a message. After copying your message into a buffer in memory, I’ll send the first 5 bytes of that buffer back to you.”
The peer receives this heartbeat, copies “hello” into a buffer, and sends the first 5 bytes of that buffer back. Simple.
But what if we crafted the following?
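Sticking with the same sketch format (type byte, two-byte big-endian length, payload; illustrative, not OpenSSL’s actual wire code), a dishonest request might look like this:

```python
import struct

HEARTBEAT_REQUEST = 0x01

# Same 5-byte payload as before, but now we lie and claim 4,096 bytes.
malicious = struct.pack("!BH", HEARTBEAT_REQUEST, 4096) + b"hello"
```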
This is interpreted by the peer roughly as “You just sent me a message. After copying your message into a buffer in memory, I’ll send the first 4,096 bytes of that buffer back to you.”
See the problem? The peer is too trusting of the sender’s heartbeat message. In other words, the sender can lie about the size of the heartbeat message, and the peer will trust the sender. Yes, the first 5 bytes do contain our message, but the following 4,091 bytes contain other data in memory. If we continuously exploit this, extracting 64 KB of data at a time, we’re bound to find something juicy. Lather, rinse, and repeat to grab encryption keys from memory.
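The flawed handler logic can be sketched like so (a deliberately simplified Python stand-in for OpenSSL’s C code; the buffer layout, names, and “memory” contents are all invented for illustration):

```python
import struct

def vulnerable_heartbeat_handler(request: bytes, process_memory: bytearray) -> bytes:
    # Parse the sender-controlled fields.
    msg_type, claimed_len = struct.unpack("!BH", request[:3])
    payload = request[3:]
    # Copy the payload into a buffer that sits next to other data in our
    # process memory (a simplification of OpenSSL's heap layout).
    process_memory[:len(payload)] = payload
    # The bug: echo back claimed_len bytes, trusting the sender's length
    # field instead of using len(payload).
    return bytes(process_memory[:claimed_len])

# Simulated memory: a secret happens to live right after our buffer.
memory = bytearray(b"\x00" * 16 + b"SECRET_SESSION_KEY" + b"\x00" * 4062)
honest = struct.pack("!BH", 0x01, 5) + b"hello"
evil   = struct.pack("!BH", 0x01, 4096) + b"hello"

print(vulnerable_heartbeat_handler(honest, memory))  # b'hello'
leak = vulnerable_heartbeat_handler(evil, memory)
print(b"SECRET_SESSION_KEY" in leak)  # True
```

The fix in OpenSSL was exactly what you’d expect: discard any heartbeat whose claimed length doesn’t fit inside the message actually received.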
The POODLE vulnerability marked the end of SSL 3.0. A poodle is indeed a dog breed, but POODLE stands for “padding oracle on downgraded legacy encryption.” While one barks and requires tender loving care, the other allows a TLS connection to downgrade to SSL 3.0 if the handshake fails, which is trivial for a man-in-the-middle (MitM) to make happen. The downgrade is conceptually simple and works as follows (assume I’m an active MitM between a victim web browser and a server):
*Client attempts a TLS 1.2 handshake → I drop the connection*
*Client attempts a TLS 1.1 handshake → I drop the connection*
*Client attempts a TLS 1.0 handshake → I drop the connection*
*Client falls back and attempts an SSL 3.0 handshake → I let it through*
*Connection is established to the server using SSL 3.0*
At this point, the client and server have agreed on SSL 3.0, both unaware of what just transpired. The client believes that SSL 3.0 is the strongest protocol supported by the server, owing to the TLS 1.0, 1.1, and 1.2 handshakes failing. But the server only ever received the SSL 3.0 handshake (the MitM rejected the former three connections) and therefore assumes SSL 3.0 is the strongest protocol supported by the client! Alas, the connection is made, and we’re subjected to the underlying security of SSL 3.0.
Here’s the problem: Any SSL 3.0 configuration that uses block ciphers is susceptible to a nasty bug, namely the padding oracle, which allows an active adversary to decrypt 1 byte of an encrypted message with an average of 256 requests from the victim’s browser. Decrypting a block of data requires it to be sent over and over again, which makes session cookies a great target! If you decide to disable all block cipher support (as mentioned, exploiting POODLE does require block ciphers) and use only stream ciphers (e.g., RC4), you’ll end up defenseless against a completely different class of vulnerabilities. Catch-22! Lather, rinse, and repeat to decrypt sensitive blocks of data.
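The byte-at-a-time recovery can be sketched with a toy CBC construction. Everything here is illustrative: a lazily built random permutation stands in for a real 64-bit block cipher, and the oracle is boiled down to SSL 3.0’s weak check on the pad-length byte (real records also carry a MAC):

```python
import os

BLOCK = 8
_enc, _dec = {}, {}  # an "ideal cipher" stand-in for a real block cipher

def E(block: bytes) -> bytes:
    # Lazily assign a random, unique ciphertext block to each plaintext block.
    if block not in _enc:
        c = os.urandom(BLOCK)
        while c in _dec:
            c = os.urandom(BLOCK)
        _enc[block], _dec[c] = c, block
    return _enc[block]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(pt: bytes) -> list:
    # SSL 3.0-style worst case: the final block is entirely padding, and
    # only its last byte (the pad length, BLOCK - 1) is ever checked.
    pt += bytes(BLOCK - 1) + bytes([BLOCK - 1])
    prev = os.urandom(BLOCK)  # fresh IV for every record
    blocks = [prev]
    for i in range(0, len(pt), BLOCK):
        prev = E(xor(pt[i:i + BLOCK], prev))
        blocks.append(prev)
    return blocks

def padding_oracle(blocks: list) -> bool:
    # Accept iff the decrypted final byte is the expected pad length.
    last = xor(_dec[blocks[-1]], blocks[-2])
    return last[-1] == BLOCK - 1

secret = b"cookie=A"  # one block the MitM wants to decrypt
attempts = 0
while True:
    attempts += 1
    blocks = cbc_encrypt(secret)       # the victim re-sends the request
    # MitM copies the target block over the final (padding) block.
    forged = blocks[:-1] + [blocks[1]]
    if padding_oracle(forged):
        # Accepted, so D(C_target)[-1] XOR forged[-2][-1] == BLOCK - 1,
        # and XORing in the target's real predecessor recovers the byte.
        recovered = (BLOCK - 1) ^ forged[-2][-1] ^ blocks[0][-1]
        break

print(recovered == secret[-1], attempts)  # prints True plus the attempt count (~256 on average)
```

Each resend is accepted with probability 1/256, which is where the “average of 256 requests per byte” figure comes from.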
Let’s welcome in 2015 by shifting gears into a quick history lesson. After Alan Turing and the team at Bletchley Park successfully performed cryptanalysis on Enigma during World War II (cue “The Imitation Game”), providing unparalleled aid to the Allies, the value of cryptography became apparent. In the name of national security, the U.S. government began to regulate encrypted traffic leaving the country. In the 1990s, this equated to forbidding the export of “strong” cryptography.
Enter export cryptography stage left. Enter FREAK and Logjam stage right. At a high level, FREAK and Logjam are conceptually similar. Both involve a MitM intercepting the plaintext “client hello” to the server and modifying it to explicitly request parameters that are brute-forceable (e.g., export cipher suites):
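A sketch of that rewrite (dicts stand in for the real handshake structures; the suite code points are genuine TLS registry values, everything else is illustrative):

```python
# Real cipher-suite code points from the TLS registry.
TLS_RSA_WITH_AES_128_CBC_SHA   = 0x002F  # what the client actually wants
TLS_RSA_EXPORT_WITH_RC4_40_MD5 = 0x0003  # a 40-bit export-grade suite

def mitm_rewrite(client_hello: dict) -> dict:
    # The hello travels in plaintext, so an active MitM can swap the
    # offered suites for export-only ones before forwarding it.
    tampered = dict(client_hello)
    tampered["cipher_suites"] = [TLS_RSA_EXPORT_WITH_RC4_40_MD5]
    return tampered

hello = {"version": (3, 1), "cipher_suites": [TLS_RSA_WITH_AES_128_CBC_SHA]}
print(mitm_rewrite(hello)["cipher_suites"])  # [3]
```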
Remember, the client hello is sent unsigned in plaintext, so the server has no way of knowing it’s been tampered with by our MitM; it assumes the export request is genuine. The TLS protocol does include protections to detect tampering of handshake messages. However, if the attacker can precompute some information, which is possible for export-strength cipher suites for reasons discussed below, then the attacker can bypass this tamper-detection mechanism.
For FREAK, the server uses its private key to sign a 512-bit export RSA public key and sends that to the client. In a perfect world, the client would say, “Wait, stop. I didn’t ask for this weak, brute-forceable public key!” and then terminate the connection. But owing to a flaw in some versions of OpenSSL and the Apple Secure Transport implementation, in some cases, this does not happen, and the client can begin performing cryptographic operations using a brute-forceable RSA key. Since most servers that support export-grade cryptography generate the export-strength key once and then reuse it until the server is restarted, the attacker can brute-force this key once and use it to impersonate the server to vulnerable clients.
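The client-side fix boils down to the consistency check that the vulnerable implementations skipped. A hedged sketch (function name invented; the suite code points are genuine export-grade values):

```python
# A few real export-grade suite code points from the TLS registry.
EXPORT_SUITES = {0x0003, 0x0006, 0x0008}

def accept_temp_rsa_key(negotiated_suite: int) -> bool:
    # The check FREAK-vulnerable clients skipped: a temporary (512-bit,
    # export-strength) RSA key in ServerKeyExchange is only legitimate
    # if an export suite was actually negotiated.
    return negotiated_suite in EXPORT_SUITES

# We negotiated ordinary TLS_RSA_WITH_AES_128_CBC_SHA (0x002F), so a
# surprise export key must be rejected.
print(accept_temp_rsa_key(0x002F))  # False
```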
For Logjam, the server uses its private key to sign weak 512-bit ephemeral Diffie-Hellman parameters and sends them to the client. Unlike FREAK, which is an implementation bug, Logjam is a vulnerability in the TLS protocol itself. The format of the handshake message containing ephemeral Diffie-Hellman parameters is identical regardless of whether an export-strength cipher suite is chosen; the message itself does not specify the cipher suite for which the parameters were generated. Moreover, the “group” used for Diffie-Hellman parameters is often hard-coded in server implementations, which allows attackers to precompute information that they can use later to attack TLS handshakes. The client can (and modern web browsers currently do) prevent this by refusing to connect to servers that provide DHE parameters smaller than 1,024 bits. This wasn’t always the case, owing to compatibility concerns and the belief that manipulating the server into choosing weak parameters was infeasible.
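That client-side mitigation is simple to express (a sketch only; real implementations validate more than the bit length, and the “primes” below are stand-in values, not actual primes):

```python
def accept_dhe_params(p: int) -> bool:
    # The browsers' mitigation: refuse DHE primes under 1,024 bits, since
    # the ServerKeyExchange never says which suite the parameters were
    # generated for.
    return p.bit_length() >= 1024

export_prime = (1 << 511) | 1   # stand-in for a 512-bit export prime
strong_prime = (1 << 2047) | 1  # stand-in for a 2,048-bit prime
print(accept_dhe_params(export_prime), accept_dhe_params(strong_prime))  # False True
```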
At this point, the client will begin performing cryptographic operations using either a brute-forceable public RSA key (FREAK) or brute-forceable Diffie-Hellman parameters (Logjam). Either way, the MitM can now recover the TLS session key, complete the TLS handshake with the client, and impersonate the server. Another one bites the dust!
Let’s kick off our explanation of Sweet32 with a puzzle: How many people have to be in a room for there to be a greater than 50% chance that two of them share the same birthday? This question is known as the birthday paradox. The answer may surprise you, so let’s just do the math:
The probability p of two people sharing the same birthday is 1 minus the probability of the two people having unique birthdays. We’ll calculate the probability of everyone in a group of people having a unique birthday and repeat this until we generate a probability p < 0.5.
p(1 person having a unique birthday) = 1
p(2 people having unique birthdays) = 1 × (1 − 1/365)
p(3 people having unique birthdays) = 1 × (1 − 1/365) × (1 − 2/365)
p(4 people having unique birthdays) = 1 × (1 − 1/365) × (1 − 2/365) × (1 − 3/365)
…
p(23 people having unique birthdays) = 1 × (1 − 1/365) × … × (1 − 22/365)
After some simplification:
p(1 person having a unique birthday) = 1
p(2 people having unique birthdays) = 0.997
p(3 people having unique birthdays) = 0.992
p(4 people having unique birthdays) = 0.984
…
p(23 people having unique birthdays) = 0.4927
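These numbers are easy to verify with a few lines of Python:

```python
import math

def p_all_unique(n: int) -> float:
    # Probability that n people all have distinct birthdays.
    return math.prod(1 - k / 365 for k in range(n))

for n in (1, 2, 3, 4, 23):
    print(n, round(p_all_unique(n), 3))
# At 23 people, p(all unique) ~= 0.493, so p(shared) ~= 0.507 -- over 50%.
```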
And there we have it! The birthday paradox dictates that in a group of 23 people, the chance of 2 or more people sharing the same birthday is greater than 50%. As you may have guessed by now, the Sweet32 vulnerability exploits the probability of finding collisions in ciphertext produced by ciphers with short block lengths. This is known as a birthday attack.
Fortunately, few “modern” ciphers still use 64-bit blocks; in TLS, 3DES is the main offender. As long as you keep 3DES at a distance, the only birthday attack you need to worry about is a sugar coma induced by your next birthday cake.
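Why do 64-bit blocks matter so much? The birthday bound says a collision becomes likely after roughly 2^(n/2) blocks of ciphertext under the same key, and for 3DES that is a very practical amount of traffic. A quick back-of-the-envelope calculation (the helper name is mine):

```python
def birthday_bound_bytes(block_bits: int) -> int:
    # Expect a ciphertext-block collision after roughly 2**(n/2) blocks.
    blocks = 2 ** (block_bits // 2)
    return blocks * (block_bits // 8)

# 64-bit blocks (3DES): 2**32 blocks * 8 bytes = 32 GiB of traffic.
print(birthday_bound_bytes(64) // 2 ** 30)   # 32
# 128-bit blocks (AES): 2**64 blocks * 16 bytes -- utterly impractical.
print(birthday_bound_bytes(128) // 2 ** 30)  # 274877906944
```

Thirty-two gigabytes is well within reach of a long-lived HTTPS or VPN session, which is exactly the scenario the Sweet32 researchers demonstrated.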