What the Heartbleed bug should be teaching us

What a difference a few weeks makes in the software security world. When the Heartbleed bug was publicly disclosed a short while ago, the reaction was swift and fairly consistent. It was identified as a real problem, not FUD, and systems were being patched VERY quickly. Often, when a security vulnerability is announced, we try to answer questions such as:

  1. Does this really affect me?
  2. Do I really need to worry about this?
  3. Is this just some theoretical thing that can’t really happen in the real world?

In an incredibly short amount of time we discovered the answers were:

  1. Almost definitely yes (either directly or indirectly)
  2. Absolutely
  3. Absolutely not

Unlike some other cryptography-related vulnerabilities, this one had fully functioning, publicly available exploit code within a matter of days. Yes – days. When the first set of tools was released, we quickly saw that sensitive data sent to a server could be retrieved by an attacker if the server was running a vulnerable version of OpenSSL. In the last few days we have learned that private keys used on the server can also be retrieved. YIKES!

There are a number of good articles describing what you need to do if you are using a vulnerable version of OpenSSL. But rather than look at this bug from a purely reactive point of view, it seems like a great time to ask yourself: is there something I could have done so that this vulnerability would not have hit me so hard in the first place? The answer is, of course, “yes”.

Looking back at the past few years, we can see that SSL has been taking quite a beating. You may have heard the catchy names: CRIME, BEAST, Lucky13, and of course, Heartbleed. And the protocol itself isn’t the only weak point we have seen. Do you remember the DigiNotar CA debacle? See the common denominator? That’s right, you can say it … it’s SSL.

Running the Architecture Analysis practice here, I get to review the designs of a wide variety of systems across different business sectors, deployment scenarios, platforms, and architecture types. One of the go-to controls in almost every system is … SSL. It seems to me the time has come to stop blindly assuming that SSL, as THE security control, is enough to protect data moving between two endpoints. I would like to think the last few years have shaken the confidence of even the most ardent believer that SSL is always enough.

So what can we do from a design perspective to improve our security posture? I would like to think that exploits like this one, where the turnaround time from disclosure to working attack tools was extremely short, will make us take stock of the way we design software. You have an opportunity here to go to your boss and make the case that you need more time to develop software that not only meets functional goals, but can also resist attacks. This means designing your software with security in mind.

Would having a more secure design have helped in this specific case? I have no idea. But let’s look at a couple of things that we can do.

  1. Only access sensitive data, and pass it around, when it is absolutely necessary. When you do need to pass sensitive data around, determine whether it can be masked – maybe the recipient doesn’t need to see all of it. Or maybe you can pass a reference to the sensitive data if the recipient only needs a way to identify it rather than access the actual value (see the first sketch after this list). Would this have helped in this case? Probably not, as passwords and private keys were being extracted from the web server. But maybe it will help with the “next” vulnerability.
  2. Encrypt sensitive data in addition to encrypting the channel. In general, you can increase your ability to defend against attacks by using multiple layers of defense, a concept known as “defense in depth”. It is best when the layers are unrelated, so the expertise needed to break through one layer won’t necessarily help an attacker break through the others. Even though I am recommending two controls that are both based on cryptography, there is still a benefit in using them together. Of course, you need to exercise care when implementing this additional cryptographic control because … crypto is hard to get right. Encrypting the sensitive data at the application layer should be done with forward secrecy in mind: the keys should be valid for only a reasonably short amount of time and, of course, they should not be sent over the (now) untrusted SSL channel (see the second sketch after this list).
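
To make the first point a bit more concrete, here is a minimal sketch in Python. The names (`TokenVault`, `mask_card_number`) are hypothetical and not tied to any particular framework: recipients who only need to recognize a value get a masked form, recipients who only need to identify a record get an opaque token, and the real value stays in one trusted place.

```python
import secrets

# Hypothetical sketch: instead of sending a full card number (or other secret)
# to a downstream component, send a masked form or an opaque reference and
# keep the real value inside a single trusted store.

def mask_card_number(pan: str) -> str:
    """Show only the last four digits, e.g. '************1111'."""
    return "*" * (len(pan) - 4) + pan[-4:]

class TokenVault:
    """Maps opaque tokens to sensitive values; lives only on the trusted side."""
    def __init__(self):
        self._store = {}

    def tokenize(self, secret_value: str) -> str:
        token = secrets.token_urlsafe(16)   # unguessable reference
        self._store[token] = secret_value
        return token                        # safe to pass around and log

    def resolve(self, token: str) -> str:
        return self._store[token]           # only for callers with vault access

if __name__ == "__main__":
    vault = TokenVault()
    pan = "4111111111111111"
    print(mask_card_number(pan))            # ************1111
    ref = vault.tokenize(pan)
    print("pass this downstream instead:", ref)
```

The point is that most of the system never handles the raw secret at all, so a leak in any one component exposes much less.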
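
And here is a rough sketch of the second point, using Python and the pyca/cryptography package (my choice of library; any well-reviewed crypto library would do). Each side generates an ephemeral EC key pair, only the public halves cross the wire, a short-lived session key is derived with ECDH plus HKDF, and the sensitive payload is encrypted with AES-GCM before it is ever handed to the TLS socket. Authenticating the exchanged public keys and rotating them on a schedule are deliberately left out to keep the sketch short.

```python
# Sketch of application-layer encryption on top of TLS: ephemeral keys give
# forward secrecy, and only public keys ever travel over the channel.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_session_key(own_private, peer_public) -> bytes:
    """ECDH shared secret -> 256-bit AES key via HKDF."""
    shared = own_private.exchange(ec.ECDH(), peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"app-layer-demo").derive(shared)

# Each side generates an ephemeral key pair; only public keys cross the wire.
client_priv = ec.generate_private_key(ec.SECP256R1())
server_priv = ec.generate_private_key(ec.SECP256R1())

client_key = derive_session_key(client_priv, server_priv.public_key())
server_key = derive_session_key(server_priv, client_priv.public_key())
assert client_key == server_key            # both sides hold the same session key

# Client encrypts the sensitive field before it ever touches the TLS socket.
nonce = os.urandom(12)
ciphertext = AESGCM(client_key).encrypt(nonce, b"ssn=123-45-6789", None)

# Server decrypts with its independently derived copy of the key.
print(AESGCM(server_key).decrypt(nonce, ciphertext, None))
```

Because only public keys travel over the channel, an attacker who later strips or compromises the TLS layer still sees ciphertext and public keys rather than the sensitive data itself.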

Don’t wait for the next SSL vulnerability to be found. Take a good look at your system design and see if there are places where layers of defense make sense, and where they do, go build those layers so that maybe the next big vulnerability will just pass you by.