Software Integrity


Standard versus proprietary security protocols

Standard Security Protocols

An encyclopedia defines a security protocol as “a sequence of operations that ensure protection of data. Used with an underlying communication protocol, it provides secure delivery of data between two parties.”

We use security protocols in everyday computing. For example, when we use our domain credentials to log in to a Microsoft Windows environment, we use the Kerberos indirect authentication protocol (or NTLM in earlier versions of Windows). When we conduct a transaction on a secure e-commerce site, we use various encryption protocols, integrity protocols, and so on. There are, of course, other categories of security protocols as well: authentication protocols, access-control (authorization) protocols, encryption protocols, key-management protocols, key-distribution protocols, and so on.

In day-to-day computing, we use standard security protocols because they offer us significant security benefits.

For one, standard security protocols are designed by experts. Once designed, they are reviewed and re-reviewed by experts at standards organizations (e.g., IEEE, W3C, IETF). New protocols are subjected to security threat-modeling analysis to ensure that they offer protection against commonly known attack patterns. When these protocols are deployed in the field, their security is monitored, and over time their security kinks are worked out.

In addition, when a standard protocol eventually becomes insecure, more secure versions are made available (e.g., TLS 1.1/1.2 replacing TLS 1.0) or new protocols are designed to replace the aging ones (e.g., AES replacing the aging DES/3DES).
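Migrating to the newer version is often a configuration change rather than a redesign. As a minimal sketch using Python's standard ssl module (the exact API availability varies by Python version and platform), a client can simply refuse the deprecated protocol versions:

```python
import ssl

# Build a client-side TLS context from the platform trust store,
# then set a floor on the negotiated protocol version so that
# deprecated TLS 1.0/1.1 connections are rejected outright.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version)  # TLSVersion.TLSv1_2
```

Sockets wrapped with this context will fail the handshake against servers that only speak the retired versions, which is exactly the upgrade path the standards process provides.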

There are weaknesses associated with the use of standard protocols as well. Since all the knowledge pertaining to a standard protocol's design and implementation is in the public domain, there is a greater likelihood of 0-day exploits being found. Additionally, when an issue is discovered, it becomes widely known and exploitable in a very short period of time; the Heartbleed bug is a good example of this [2]. Finally, tools are widely available to exploit older, unpatched versions of the software libraries/packages that are known to implement a standard protocol incorrectly.

Proprietary Security Protocols

In the world of embedded devices there are a few reasons why people may choose to use proprietary protocols:

  • Security through obscurity: A proprietary protocol is, by definition, obscure. An attacker will most likely need time and resources (off-the-shelf exploitation tools will not work, for example) to understand how the protocol is designed and implemented before exploiting any weaknesses.
  • Avoiding update costs: Another reason to implement a proprietary protocol is to avoid scenarios where software updates (which are expensive for embedded devices) are required because a vulnerability was found in a standard protocol in an unrelated context. If standard protocols are used, each publicly known issue demands a software update, which can be an expensive proposition. With a proprietary protocol we are "safe", at least until someone specifically targets our device.
  • Performance: Most embedded devices have limited computing power, and in such cases there is a need (or tendency) to use lightweight, proprietary protocols.

Whatever the reasons are, when we design our own proprietary protocols, we need to worry about many security issues.

For example, if we are designing a custom cryptographic protocol (a really, really bad idea), then we need to worry about whether our protocol is provably secure. If not, is it at least secure against passive attacks (CPA-secure) and/or active attacks (CCA-secure)?
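To see how easily a homegrown scheme fails even the weakest of these notions, consider a toy cipher (purely illustrative, not any real product's design) that XORs every byte with a fixed key. Because it is deterministic, identical plaintexts always produce identical ciphertexts, so an attacker who can choose plaintexts immediately learns which messages repeat. That alone breaks CPA security:

```python
def toy_encrypt(key: int, plaintext: bytes) -> bytes:
    """Toy cipher: XOR each byte with a single fixed key byte.
    Deterministic, so equal plaintexts yield equal ciphertexts."""
    return bytes(b ^ key for b in plaintext)

c1 = toy_encrypt(0x5A, b"ATTACK AT DAWN")
c2 = toy_encrypt(0x5A, b"ATTACK AT DAWN")
print(c1 == c2)  # True: message equality leaks to any observer
```

Real CPA-secure encryption is randomized (e.g., a fresh IV or nonce per message), precisely so that repeated plaintexts look unrelated on the wire.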

If we are designing a distributed proprietary protocol (say, a custom authentication protocol or a custom key-distribution protocol), then we need to worry about many other security issues, such as:

  • Is the proprietary protocol vulnerable to a Man-in-the-Middle (MitM) attack? Can an attacker insert himself or herself between the endpoints to breach the protocol?
  • Is our proprietary protocol vulnerable to replay attacks? Can the protocol packets be captured and replayed at a later point to confuse either endpoint?
  • Can an attacker find other ways to spoof the endpoints?
  • Can an attacker launch an interleaving attack to break the proprietary protocol?
  • We may even need to worry about reflection attacks, forced-delay attacks, and the list goes on.
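The replay concern above, for instance, is commonly addressed with a fresh-nonce challenge-response. The sketch below (function and key names are illustrative, not from any specific product) uses an HMAC over a random server-issued nonce, so a response captured in one session is useless in any later one:

```python
import hashlib
import hmac
import os

SHARED_KEY = b"pre-shared-secret"  # illustrative; provision securely in practice

def make_challenge() -> bytes:
    """Server side: issue a fresh random nonce per authentication attempt."""
    return os.urandom(16)

def respond(challenge: bytes) -> bytes:
    """Client side: prove knowledge of the key by MACing the nonce."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    """Server side: constant-time check of the client's response."""
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = make_challenge()
resp = respond(nonce)
print(verify(nonce, resp))             # True: fresh exchange succeeds
print(verify(make_challenge(), resp))  # False: a replayed response fails
```

Note that this toy addresses only replay; the MitM, spoofing, interleaving, and reflection questions in the list each require their own countermeasures, which is exactly why ad hoc protocol design is so hard to get right.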

We conduct reviews of thick clients, embedded systems, and mobile applications in the healthcare, gaming, and financial industries (among others). In our reviews, we commonly find serious design-level and implementation-level issues in proprietary security protocol implementations. In our experience, it is very difficult to get a proprietary protocol designed and implemented correctly, and our security analysts are routinely able to find serious vulnerabilities in such protocols. A common example of such custom security code is X.509 certificate validation. While most commonly used browsers do a good job of validating server-side X.509 certificates, we routinely find issues when mobile applications, which are themselves responsible for certificate validation, implement that validation.
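The safer pattern is to delegate validation to a vetted library rather than hand-rolling chain and hostname checks. As a hedged sketch in Python (the hardening options shown are in fact the defaults of create_default_context, made explicit here for emphasis):

```python
import socket
import ssl

def open_verified_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Connect over TLS with full X.509 validation delegated to the
    standard library: trusted chain required, hostname must match."""
    context = ssl.create_default_context()   # loads the system trust anchors
    context.check_hostname = True            # reject certificate/name mismatches
    context.verify_mode = ssl.CERT_REQUIRED  # reject untrusted or missing chains
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```

Many of the mobile-app flaws we find amount to disabling one of these two checks (accepting any chain, or skipping hostname matching), usually as a leftover from testing against self-signed certificates.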

Our recommendation is to use standard security protocols as much as possible. If we must use a proprietary security protocol, then we should try to use it in conjunction with a standard protocol (e.g., implementing a proprietary encryption or encoding scheme over a secure TLS link, or using custom white-box cryptography controls in conjunction with other well-known obfuscation controls). If a proprietary security protocol is used exclusively, then its design and implementation must be thoroughly assessed for security vulnerabilities.