TLS 1.3 and the future of cryptographic protocols

Synopsys Editorial Team

Apr 12, 2016 / 7 min read

SSL and TLS are a family of cryptographic protocols that protect sensitive communications on the Internet. The first standard, SSL 2.0, was released in 1995; the latest, TLS 1.2, in August 2008. The family’s 20-year history has been marred by numerous cryptographic breaks (both in the underlying primitives and in the protocol itself) and by software flaws in implementations. In this blog post, I offer some explanations for these issues and describe how the Internet Engineering Task Force (IETF) is trying to address the fundamental problems in its upcoming standard, TLS 1.3.

A bit of background

For nearly 20 years, Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), have secured Internet traffic. Both rest on mathematics that has been well-scrutinized for nearly 40 years, and their most popular implementations are open source and deployed to billions of users. Despite its age and popularity, TLS has a long history of cryptographic breaks and implementation mistakes. SSL 2.0 and SSL 3.0 have catastrophic vulnerabilities, and even TLS must be carefully configured before it can be used safely. Sadly, many of these vulnerabilities affect the underlying primitives, such as RSA or AES. This raises the question: why are these primitives so vulnerable?

Crypto is fragile

There is significant mathematics behind each of the primitives in TLS, and time and time again we’re shown that the mathematics is the only robust component. Part of the problem is that most of these primitives were designed from a purely mathematical standpoint, without consideration of the implementation. This inevitably leads to software implementations that are brittle, buggy, and riddled with side channels unless the developers are extraordinarily careful.

There are numerous examples of the fragility of crypto implementations. In 2003, two researchers discovered that OpenSSL’s implementation of RSA decryption was vulnerable to a timing attack that allowed them to recover RSA private keys in about two hours. In 2006, Debian developers removed a seemingly innocuous line of code from OpenSSL that referenced an uninitialized variable. The consequence was CVE-2008-0166, which compromised all SSH keys generated on Debian and Ubuntu over a two-year period. In December 2010, a group gained root access to the PlayStation 3 because Sony’s implementation of ECDSA reused the random nonce that must be regenerated for every signature. Then in 2013, two researchers discovered a timing attack in most implementations of AES_CBC, which let them recover complete plaintexts from TLS 1.2. The list just goes on.
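
To see why the PS3 mistake was fatal, the algebra of ECDSA nonce reuse can be worked through with plain modular arithmetic. The sketch below uses illustrative numbers of my own choosing and skips the elliptic-curve point operations entirely; it only exercises the signing equation s = k^-1 * (h + r*d) mod n, which is all an attacker needs:

```python
# Toy ECDSA nonce-reuse recovery (Python 3.8+ for pow(x, -1, n)).
# Only the signing equation s = k^-1 * (h + r*d) mod n is modeled;
# the values below are illustrative, not a real key or curve point.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 group order
d = 0x1234567890ABCDEF   # the "secret" signing key
k = 0xDEADBEEFCAFE       # nonce that must be fresh per signature, reused here
r = pow(k, 2, n)         # stand-in for the x-coordinate of k*G; its exact value is irrelevant to the algebra

h1, h2 = 0x1111, 0x2222                # hashes of two different messages
s1 = pow(k, -1, n) * (h1 + r * d) % n  # first signature
s2 = pow(k, -1, n) * (h2 + r * d) % n  # second signature, same k

# An attacker sees (r, s1, h1) and (r, s2, h2); the repeated r betrays the reused nonce.
k_rec = (h1 - h2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - h1) * pow(r, -1, n) % n
assert (k_rec, d_rec) == (k, d)        # nonce and private key fully recovered
```

This is essentially the calculation performed against Sony’s signatures once the repeated nonce was spotted.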

Of course, it’s easy to blame the implementer. It’s easy to blame the state of OpenSSL’s codebase. It’s easy to say that OpenSSL’s developers should have written constant-time RSA code or that Sony’s security team should have had a deeper understanding of ECDSA. While the implementers did make mistakes, I don’t consider them at fault. After all, crypto algorithms are notoriously difficult to implement correctly. Chances are you’ve got it wrong if you try to roll your own. Blaming the implementer for not addressing “obvious” pitfalls, corner-cases, and side-channels is a naïve argument and does not address the fundamental problem. The implementation doesn’t have to be tricky if the primitive is designed properly in the first place. Otherwise, brittle crypto primitives will inevitably lead to fragile implementations even in the most popular libraries.

Consider authenticated encryption, wherein the plaintext is simultaneously encrypted and integrity protected. There are three competing modes for authenticated encryption: OCB, CCM, and GCM. OCB is a very strong mode, but it is patented and is legally challenging to use in practice. CCM is a two-pass mode and thus won’t work for streaming applications like Netflix, Twitch, or YouTube. This leaves GCM, which is commonly used in SSL/TLS. GCM is straightforward to implement, but it’s brittle. To quote Peter Gutmann:

The GCM slides provide a list of pros and cons to using GCM, none of which seem like a terribly big deal, but misses out the single biggest, indeed killer failure of the whole mode, the fact that if you for some reason fail to increment the counter, you're sending what's effectively plaintext (it's recoverable with a simple XOR).  It's an incredibly brittle mode, the equivalent of the historically frighteningly misuse-prone RC4, and one I won't touch with a barge pole because you're one single machine instruction away from a catastrophic failure of the whole cryptosystem, or one single IV reuse away from the same.  This isn't just theoretical, it actually happened to Colin Percival, a very experienced crypto developer, in his backup program tarsnap.  You can't even salvage just the authentication from it, that fails as well with a single IV reuse.

The fact that GCM is intrinsically brittle means that a strong cipher suite such as TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 can be trivially broken if the GCM implementation makes that simple mistake.
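
A minimal sketch of the failure Gutmann describes, using the pyca/cryptography package (my choice of library for illustration; any AES-GCM implementation behaves the same way). Encrypting two messages under the same key and nonce lets an eavesdropper XOR the ciphertexts to obtain the XOR of the plaintexts, with no key required:

```python
# Demonstration of AES-GCM nonce reuse; requires the "cryptography" package.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)
aes = AESGCM(key)
nonce = os.urandom(12)   # 96-bit nonce; MUST be unique per (key, message)

pt1 = b"attack at dawn!!"
pt2 = b"retreat at nine!"

# BUG: the same nonce is used twice, so both messages share a keystream.
ct1 = aes.encrypt(nonce, pt1, None)[:-16]   # strip the 16-byte GCM tag
ct2 = aes.encrypt(nonce, pt2, None)[:-16]

# The keystream cancels out: ciphertext XOR equals plaintext XOR.
xor_ct = bytes(a ^ b for a, b in zip(ct1, ct2))
xor_pt = bytes(a ^ b for a, b in zip(pt1, pt2))
assert xor_ct == xor_pt
```

If either plaintext is known or guessable, the other falls out immediately, which is exactly Gutmann’s point about being one IV reuse away from catastrophe.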

Enter djb

The fragility of most of the standardized crypto primitives has prompted some communities to move to alternatives. Arguably, the most promising of these are developed by Dan Bernstein, also known by his handle, djb. He is well-known for the elliptic-curve key exchange protocols X25519 and X448, the digital signature schemes Ed25519 and Ed448, the ChaCha20 stream cipher, and the Poly1305 message authentication code. These functions are used in OpenSSH, Tor, Tox, WhatsApp, and Signal.

These primitives are compelling for several reasons. For instance, Ed25519 and X25519 are extremely fast, run in constant time, and by design avoid timing attacks, cache-contention leaks, and the nonce-reuse vulnerability that compromised the PS3. There are no subtle corner cases to address, so it’s relatively simple to write a secure implementation.
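
As a quick illustration of those deterministic signatures, here is a sketch with PyNaCl, the libsodium binding (the library is my assumption; the post doesn’t prescribe one). Signing the same message twice yields the identical 64-byte signature, so no random number generator is consulted at signing time:

```python
# Requires the "pynacl" package (libsodium bindings).
from nacl.signing import SigningKey

sk = SigningKey.generate()
msg = b"boring crypto"

sig1 = sk.sign(msg).signature
sig2 = sk.sign(msg).signature
assert sig1 == sig2      # deterministic: no RNG needed during signing
assert len(sig1) == 64   # 512-bit signature, as listed under "Additional reading"

sk.verify_key.verify(msg, sig1)  # raises BadSignatureError if anything is altered
```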

Another attractive feature is that his primitives (particularly x25519) do not include any magic numbers or unexplained design decisions. By contrast, NIST P-256 (secp256r1), which is used in ECDHE, generates its elliptic curve points based on the unexplained seed c49d3608 86e70493 6a6678e1 139d26b7 819f7e90. Dual_EC_DRBG also uses magic numbers and was used as a backdoor on three separate occasions. With perhaps the sole exception of DES/3DES, magic numbers in cryptographic primitives are rarely a good thing.

TLS 1.3

For the past two years, the Internet Engineering Task Force (IETF) has been developing the TLS 1.3 standard, the next generation of cryptographic protocols in the SSL/TLS family. The changelog is quite extensive, but promising. TLS 1.3 removes obsolete and insecure features from the standard, including RC4, DES, 3DES, EXPORT-strength ciphers, weak and rarely used elliptic curves, AES-CBC, MD5, and SHA-1; in short, all the vulnerable primitives that contribute to a weak SSL configuration. While there are also many protocol and efficiency improvements, arguably one of the most interesting changes is the introduction of djb’s primitives into the standard. With the exception of the hash function, under TLS 1.3 it will be possible to use only djb primitives, a monoculture not seen in any previous SSL/TLS standard.

To give some examples, TLS 1.3 supports cipher suites like TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256. Implementations should support both the NIST curves and djb’s curves for digital signatures and key exchange. The NIST curves, RSA, and AES_GCM are very well established and TLS 1.3 isn’t removing them, but the standard now offers a very limited choice of primitives, which in practice is a good thing. Clients can use X25519, ChaCha20, and Poly1305 to avoid the magic numbers in NIST P-256 and the fragility of AES_GCM, leading to much stronger encryption.
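
To give a feel for how small the X25519 API surface is, here is an ephemeral key exchange sketched with the pyca/cryptography package (the library is my choice for illustration; a TLS stack performs the equivalent steps internally during the ECDHE handshake):

```python
# Requires the "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each side generates an ephemeral key pair, as ECDHE does for every session.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# The sides swap public keys and each derives the same 32-byte shared secret.
client_secret = client_priv.exchange(server_priv.public_key())
server_secret = server_priv.exchange(client_priv.public_key())
assert client_secret == server_secret
```

Because the key pairs are ephemeral and discarded after the handshake, a later key compromise cannot decrypt recorded traffic, which is the forward secrecy discussed below.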

Peter Gutmann suggests that the IETF hasn’t chosen djb’s primitives out of fanboyism; it’s because there has been an uncomfortably long list of vulnerabilities in SSL/TLS over the past 15-20 years, and the IETF is trying to move to something different. It is looking for what djb calls “boring crypto”: primitives and protocols that “simply work, solidly resist attacks, and never need any upgrades.” Time has shown that the old primitives are far from that. In Gutmann’s words, the IETF is trying to dig itself out of a hole; it is lost in a desert and djb is offering an oasis.

Summing it up

The TLS 1.3 protocol offers much-needed changes. Under the current draft, there are only two mandatory cipher suites and four optional cipher suites. The standard enforces perfect forward secrecy by requiring a Diffie-Hellman handshake: clients can use either the NIST curves or djb’s X25519 for that exchange. The standard also requires authenticated encryption: clients have the choice of the block cipher AES in GCM mode or the stream cipher ChaCha20 with Poly1305. TLS 1.3 introduces a one-round-trip handshake and reduces the size and complexity of the protocol in numerous ways, including the removal of all primitives and features that contribute to a weak SSL configuration. Overall, TLS 1.3 introduces more improvements than any previous SSL/TLS standard and thus represents a significant leap forward for communication security.
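
To illustrate the second choice, here is ChaCha20 with Poly1305 as an AEAD, again sketched with the pyca/cryptography package (an assumed stand-in for what a TLS record layer does internally). One call both encrypts and authenticates, and flipping a single ciphertext bit makes decryption fail:

```python
# Requires the "cryptography" package.
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.exceptions import InvalidTag
import os

key = ChaCha20Poly1305.generate_key()   # 256-bit key
aead = ChaCha20Poly1305(key)
nonce = os.urandom(12)                  # 96-bit nonce, unique per message

# The header is authenticated but sent in the clear, like a TLS record header.
ct = aead.encrypt(nonce, b"application data", b"record header")
assert aead.decrypt(nonce, ct, b"record header") == b"application data"

# A single flipped bit is caught by the Poly1305 tag.
tampered = bytes([ct[0] ^ 1]) + ct[1:]
try:
    aead.decrypt(nonce, tampered, b"record header")
except InvalidTag:
    print("tampering detected")
```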

I am excited for the upcoming standard. I think we will see far fewer vulnerabilities, and we will be able to trust TLS far more than we have in the past. I hope that modern web browsers will indicate a preference for Dan Bernstein’s primitives, particularly X25519 and Poly1305. I trust his designs, and I appreciate that they are safer to implement than their competitors. “Boring crypto” is what we want, and I’m happy that we’re finally going to get it with TLS 1.3.
 

Additional reading

Main properties of djb’s primitives

  • Public-key digital signature systems Ed25519 and Ed448
    • Operate at the 128-bit and 223-bit security level, respectively. Best known attacks against Ed25519 take 2^140 operations.
    • Very fast: a quad-core 2.4 GHz Westmere CPU signs 109,000 messages/second and verifies 71,000 messages/second.
    • Unlike ECDSA, Ed25519 generates deterministic signatures and thus does not depend on a secure random number generator during the signing step.
    • Ed25519 uses SHA-512, but hash collisions do not compromise the system.
    • Constant runtime, thus avoiding timing attacks.
    • No secret-dependent memory addresses, thus avoiding side-channel leaks that rely on contention in the CPU cache.
    • Small overhead: Ed25519 uses 512-bit signatures and 256-bit public keys.
  • Key exchange protocols X25519 and X448
    • Operate at the 128-bit and 223-bit security level, respectively.
    • Very rigid design with no magic numbers and no unexplained design decisions.
    • Unlike many elliptic curves, they are not covered by any patents.
    • Designed to resist several classes of active and mathematical attacks on elliptic curves. For instance, an attacker cannot compromise the system by sending an invalid public key, such as a point that is not on the curve.
    • Constant runtime and identical space requirements to Ed25519 and Ed448.
  • Stream cipher ChaCha20
    • Replacement for RC4.
    • Very fast in both software and hardware implementations.
    • Encryption and decryption are embarrassingly parallel.
    • Proven resistant to differential cryptanalysis.
    • A refinement of Salsa20, one of the winners of eSTREAM, a competition to find stream ciphers suitable for standardization and widespread adoption.
  • Message authentication code Poly1305
    • Combined with ChaCha20, it replaces GCM, the common authenticated encryption mode for AES.
    • Provably secure. For example, the only way to compromise the security guarantees of ChaCha20-Poly1305 is to break ChaCha20.
    • Very fast and highly parallelizable.
