Understanding the Apple ‘goto fail;’ vulnerability

Synopsys Editorial Team

Feb 25, 2014 / 6 min read

You may have heard about the recently disclosed vulnerability in Apple iOS. Let’s take a look at the details of the goto fail vulnerability, as well as who is affected.

Vulnerability details

As the published code shows, there is a bug in the implementation of the SSLVerifySignedServerKeyExchange function. Although the goto fail vulnerability has been discussed in many other places, let’s take a quick look at it here:

(screenshot: the vulnerable code in SSLVerifySignedServerKeyExchange)

The issue is the two consecutive goto fail; statements. Although the indentation of the lines makes it appear as though they’ll both get executed only if the predicate in the if-statement is true, the second one gets executed regardless of whether the predicate is true or false. If the indentation is corrected, the problem becomes more obvious:

(screenshot: the same code with the indentation corrected)

Since the SSLHashSHA1.update call will generally not return an error, the value of err will almost always be zero when the second goto fail; statement is executed. And what happens when goto fail is executed?

(screenshot: the code at the fail label)

The return value of zero is provided to the caller, who believes that the signature verification on the “Server Key Exchange” message passed.
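The control flow can be sketched in a small self-contained C program. The helper names below (hash_update, check_signature, verify_server_key_exchange) are hypothetical stand-ins, not Apple’s actual code, but the duplicated goto fail; behaves exactly as in the vulnerable function:

```c
#include <assert.h>

typedef int OSStatus;   /* stand-in for Apple's OSStatus */

/* Stand-ins for the real hash and signature-check calls. */
static OSStatus hash_update(void)       { return 0; }               /* normally succeeds */
static OSStatus check_signature(int ok) { return ok ? 0 : -9809; }  /* the real check */

/* Sketch of the buggy control flow in SSLVerifySignedServerKeyExchange. */
static OSStatus verify_server_key_exchange(int signature_is_valid)
{
    OSStatus err;

    if ((err = hash_update()) != 0)
        goto fail;
        goto fail;   /* the duplicated line: not inside the if, always taken, err is still 0 */

    /* The actual signature check is never reached. */
    if ((err = check_signature(signature_is_valid)) != 0)
        goto fail;

fail:
    return err;      /* 0 tells the caller the signature verified */
}
```

Even when called with an invalid signature, verify_server_key_exchange(0) returns 0, i.e. “verified,” because the second goto fail; jumps past the check while err is still zero.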

Who is affected?

Everybody running recent versions of iOS and Mac OS X seems to be affected. The bug is in code that verifies the signature on the “Server Key Exchange” message in the SSL/TLS protocol when a version of SSL/TLS older than TLS 1.2 is used. The code is called from:

(screenshot: the calling code in SSLDecodeSignedServerKeyExchange)

This code will only be executed if an Ephemeral Diffie-Hellman cipher suite is chosen for communication. Also, as you can see above, if TLSv1.2 is used, a different version of the function is executed, which does not have this vulnerability.
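Those two conditions can be summarized in a tiny sketch (the function name and version constants here are illustrative, not Apple’s code): the vulnerable path is reached only when an Ephemeral Diffie-Hellman suite is negotiated at a protocol version below TLS 1.2.

```c
#include <assert.h>

/* TLS protocol version numbers as they appear on the wire. */
enum { TLS_Version_1_0 = 0x0301, TLS_Version_1_1 = 0x0302,
       TLS_Version_1_2 = 0x0303 };

/* Sketch of the dispatch: only handshakes that negotiate an Ephemeral
   Diffie-Hellman suite below TLS 1.2 reach the buggy verification
   function; TLS 1.2 uses a separate, unaffected routine. */
static int reaches_vulnerable_path(int negotiated_version, int is_dhe_suite)
{
    return is_dhe_suite && negotiated_version < TLS_Version_1_2;
}
```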

However, this vulnerability enables man-in-the-middle attacks for all SSL/TLS connections initiated by affected devices. I’m not going to discuss the details of how the SSL/TLS protocol works, but at a high level, here are the steps that get executed:

  • The client sends a list of cipher suites that it supports to the server.
  • The server selects a cipher suite from that list and sends its choice back to the client. If an Ephemeral Diffie-Hellman cipher suite is chosen, the server also provides its certificate and Diffie-Hellman parameters signed using the private key corresponding to its certificate.
  • The client verifies the signature on the Diffie-Hellman parameters provided by the server, and then generates its Diffie-Hellman parameter and sends it to the server. If a client certificate is used, this value is signed by the client.
  • The client and server generate a session key using the Diffie-Hellman parameters.
  • The client and server exchange hashes of all handshake messages over a channel protected using the session key. This is meant to protect against man-in-the-middle attackers modifying otherwise unprotected handshake messages in transit.

Let’s consider the case where the server does not select an Ephemeral Diffie-Hellman cipher suite. The attacker can replace the server’s chosen cipher suite with an Ephemeral Diffie-Hellman suite, leave the server’s certificate unmodified, and include a “Server Key Exchange” message with Diffie-Hellman parameters chosen by the attacker. The client will verify the server’s certificate, but will skip verifying the signature on the Diffie-Hellman parameters due to the vulnerability discussed in the previous section. The client will then generate its own Diffie-Hellman parameter and send it to the server; this value will be intercepted by the attacker. At this point, the attacker and the client will generate the same session key (because of how the Diffie-Hellman protocol works) and will exchange hashes of handshake messages. The client will have failed to verify that the Diffie-Hellman parameters provided to it were actually provided by the server whose certificate it received. The attacker can optionally also establish an SSL/TLS connection with the server and proxy traffic between the client and the server.
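The key-agreement step that lets the attacker and the client end up with the same session key is plain Diffie-Hellman. A toy sketch with deliberately tiny numbers (real TLS uses large groups; these parameters are for illustration only) shows why substituting the attacker’s public value works once the signature check is skipped:

```c
#include <assert.h>
#include <stdint.h>

/* Modular exponentiation (fine for these tiny demo numbers). */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

/* Toy Diffie-Hellman group: prime modulus P and generator G. */
enum { P = 23, G = 5 };

/* Each side combines the peer's public value with its own private key.
   If the client never checks the signature on the "server" parameters,
   the peer can just as well be the attacker. */
static uint64_t session_key(uint64_t peer_public, uint64_t my_private)
{
    return powmod(peer_public, my_private, P);
}
```

Because the client skips the signature check, it computes session_key(attacker_public, client_private), which equals the attacker’s session_key(client_public, attacker_private): the attacker now shares a session key with the client.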

The same type of man-in-the-middle attack will work even if the server selects TLS v1.2. The attacker can cause the client to fall back to TLS v1.1, where the vulnerable code will be executed.

How did this happen?

This is an interesting question. There’s evidence of copied and pasted code in several places in the vulnerable file. For example, note the logging statement in the first screenshot in this blog entry. It’s clear that the code was copied from SSLDecodeSignedServerKeyExchange to SSLVerifySignedServerKeyExchange. It is possible that the developer pasted some code on top of existing code, but failed to select one line of the code being replaced, leaving a stray goto fail; behind.

How could this have been prevented?

There are several ways in which this problem could have been prevented. Let’s take a look at some of them.

Writing/compiling code

Several people have pointed out the use of goto in the code. Although this is generally a bad practice, I tried modifying similar code and using “better” coding constructs instead. I never got a compiler error, and the code was not necessarily a lot easier to read. I don’t believe that the use of goto significantly contributed to this problem.

Reviewing code

Manual code reviews are a great way to catch these types of problems. The indentation issue does make the code more difficult to read. This is precisely why whenever I manually review code, I use the IDE to automatically fix indentation before I start looking at it. This doesn’t guarantee that the reviewer will catch every problem, but it does make code easier to read and understand. If you perform manual code reviews, I would highly recommend doing this.

Testing code

If you’re implementing security protocols, you need to have test cases that test every step of the protocol. Some commercial SSL/TLS fuzzing tools may have test cases for this problem already. If they don’t, I’m sure they will in the near future. Use these tools, or implement your own.
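As an illustration of the kind of negative test that would have caught this bug, here is a toy harness (all names hypothetical): a test asserting that an invalid signature must be rejected fails against the buggy control flow but passes against the fixed one.

```c
#include <assert.h>

typedef int OSStatus;

/* Buggy sketch: the stray goto means the signature check never runs. */
static OSStatus verify_buggy(int signature_is_valid)
{
    OSStatus err = 0;          /* hash steps "succeeded" */
    goto fail;                 /* the stray, unconditional goto */
    if (!signature_is_valid)   /* never reached */
        err = -9809;
fail:
    return err;
}

/* Fixed version for comparison. */
static OSStatus verify_fixed(int signature_is_valid)
{
    OSStatus err = 0;
    if (!signature_is_valid)
        err = -9809;
    return err;
}

/* The negative test every signature-verification path needs:
   an invalid signature must NOT verify. */
static int rejects_forged_signature(OSStatus (*verify)(int))
{
    return verify(0) != 0;
}
```

A protocol test suite built from checks like this, run against every step of the handshake, would have flagged the vulnerable build immediately.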

Protocol definition

I was thinking about whether this issue would have been less severe if the SSL/TLS protocol worked slightly differently. For example, if the client sent its Diffie-Hellman parameters to the server encrypted with the server’s public key, this vulnerability wouldn’t have had any impact. The client’s Diffie-Hellman parameter would not be visible to the attacker, and so the attacker couldn’t establish an SSL/TLS session with the client. This is of course only relevant if the server has an RSA key.

The client side always encrypts the pre-master secret using the server’s public key when RSA is used for key exchange. So why doesn’t it do it for Diffie-Hellman cipher suites? The only explanation I could come up with was that if the SSL/TLS implementation is correct, this is unnecessary and adds unnecessary overhead to the handshake. When Diffie-Hellman is used for key negotiation, the server already has to perform quite a bit of work; it has to generate Diffie-Hellman parameters, sign them, and then combine them with the client’s Diffie-Hellman parameter. These are all slow operations. Adding decryption of the client’s parameter would add one more slow operation to the protocol. Also, this wouldn’t work if the server had a DSA key.

What can you do about it?

End users should apply patches supplied by Apple as soon as possible. If you have an iOS app, you can use the SSLSetEnabledCiphers function to disable Ephemeral Diffie-Hellman cipher suites on the client side in the short term. Of course, this is only helpful if your users are likely to install your app update sooner than Apple’s security patch. Also, you should re-enable the Ephemeral Diffie-Hellman cipher suites once most of your users have applied Apple’s security patch (in a few weeks to a month). These cipher suites provide forward secrecy, which is a good thing. So don’t leave them disabled for too long. Unfortunately, there’s nothing you can do on the server side to mitigate this issue. So people accessing your website from unpatched iOS / Mac OS X devices will be vulnerable.
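As a sketch of that short-term mitigation: build a cipher-suite list with the Ephemeral Diffie-Hellman suites removed, then hand it to SSLSetEnabledCiphers. The filter_suites helper below is hypothetical; the suite constants use values from the IANA TLS cipher-suite registry.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t SSLCipherSuite;   /* mirrors Secure Transport's typedef */

/* A few example suite values, as registered with IANA. */
enum {
    TLS_RSA_WITH_AES_128_CBC_SHA     = 0x002F,
    TLS_DHE_RSA_WITH_AES_128_CBC_SHA = 0x0033,
    TLS_DHE_RSA_WITH_AES_256_CBC_SHA = 0x0039,
};

/* Hypothetical helper: copy `in` to `out`, dropping any suite that
   appears in `deny`.  Returns the number of suites written to `out`. */
static size_t filter_suites(const SSLCipherSuite *in, size_t n,
                            const SSLCipherSuite *deny, size_t dn,
                            SSLCipherSuite *out)
{
    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        int denied = 0;
        for (size_t j = 0; j < dn; j++)
            if (in[i] == deny[j])
                denied = 1;
        if (!denied)
            out[kept++] = in[i];
    }
    return kept;
}
```

The filtered list would then be passed to SSLSetEnabledCiphers on the real SSL context. Again: re-enable the Ephemeral Diffie-Hellman suites once your users are patched, since they provide forward secrecy.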

If you have a custom security protocol implementation, make sure you review and test it thoroughly! You just saw how a single line of code can cause your implementation to be completely broken.
