Posted by Robert Vamosi on April 17, 2017
Legacy vulnerabilities are often old “features” that weren’t designed for modern use. Since every new day brings a new attack, it’s time to secure them.
Increasingly, computer hacking is leaving the traditional network and reaching into the physical world. So it shouldn’t be too surprising that two recent well-publicized hacks were accomplished through non-traditional means. One, the sounding of all 100+ civil defense sirens in Dallas, Texas (for 90 minutes during the night), most likely used only radio signals to compromise the system. Another, in which a researcher showed that he could take over a smart TV and then a home network, used only analog TV broadcast signals. What each has in common with software applications is that the attacker exploited vulnerable legacy features that few remembered were even there.
It was an early Saturday morning when all the civil defense sirens in Dallas sounded, and they continued sounding even after authorities had shut down the system. According to T.C. Broadnax, Dallas City Manager, the outside hack of the emergency system involved a radio signal “that spoofed [the] over-the-air system used to control the siren network.” Reporter Sean Gallagher from Ars Technica later reported on Twitter that the hacker claiming credit did so using a “supersonic satellite/weather control tool,” and that the person did it because they wanted not only to draw attention to the problem but for officials to “fix the damn issue.”
Civil defense sirens in the United States were standardized in the 1950s by the Federal Civil Defense Administration. The threat then was the Cold War, and a series of complicated messages (e.g., long sound, then silence, then long sound again) was designed to convey to the population the difference between a Red Alert and an All Clear. In reality, most people know civil defense sirens only from their local volunteer fire departments in rural areas and from natural disasters such as tornadoes and tsunamis.
In the early 1960s the Emergency Broadcast System (EBS) was established so that the President of the United States could make one uniform broadcast throughout the country. In 2013, hackers broke into a broadcast episode of “The Bachelor” in northern Michigan with their own EBS announcement. The hackers delivered an announcement that bodies were rising from graves and that the public should avoid all contact with the living dead. In this case, the person responsible was arrested, and the cause was determined to be that the TV station had not changed the default passwords on its EBS system.
There are no default passwords on the civil defense siren network in the U.S., per se. It is designed to operate based on radio signals. The radio signals are layered, in that a simple message radio signal should not turn on the siren without a secondary or tertiary signal to follow. Whoever hacked the sirens in Dallas had to be physically close to each tower and had to know the proper frequency and proper sequence for the signals. Wired.com has a more thorough explanation of what might have happened. This reliance on radio signals, however, failed to anticipate that in 2017 a simple laptop could easily determine the frequency and sequence, or simply capture and replay a legitimate signal. By the end of the week, the Dallas system had added an additional layer of encryption.
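The core weakness is that an unauthenticated command signal, once captured, can be replayed verbatim. A minimal sketch of the contrast, with a hypothetical pre-shared key and message format (not the actual siren protocol), shows why appending a timestamp and an HMAC defeats a simple replay:

```python
import hashlib
import hmac
import time

SECRET_KEY = b"shared-siren-key"  # hypothetical pre-shared key
REPLAY_WINDOW = 30                # seconds a signed command stays valid

def legacy_command(action: str) -> str:
    """Legacy scheme: the activation 'signal' is a fixed sequence.
    Anyone who records it once can replay it forever."""
    return f"SIREN:{action}"

def signed_command(action: str, now: float) -> str:
    """Hardened scheme: append a timestamp and an HMAC so a captured
    message expires and cannot be forged without the key."""
    msg = f"SIREN:{action}:{int(now)}"
    tag = hmac.new(SECRET_KEY, msg.encode(), hashlib.sha256).hexdigest()
    return f"{msg}:{tag}"

def verify(command: str, now: float) -> bool:
    """Receiver side: check the HMAC, then reject stale (replayed) messages."""
    try:
        prefix, action, ts, tag = command.split(":")
    except ValueError:
        return False
    msg = f"{prefix}:{action}:{ts}"
    expected = hmac.new(SECRET_KEY, msg.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    return abs(now - int(ts)) <= REPLAY_WINDOW

# A legacy message replayed an hour later is indistinguishable from the original:
assert legacy_command("ACTIVATE") == legacy_command("ACTIVATE")

# A signed message is accepted when fresh but rejected when replayed later:
t0 = time.time()
cmd = signed_command("ACTIVATE", t0)
assert verify(cmd, t0)
assert not verify(cmd, t0 + 3600)
```

The replay window and key-distribution details are glossed over here; the point is only that a signal carrying no proof of freshness or origin is equivalent to a password broadcast in the clear.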
Meanwhile, at the European Broadcasting Union Media Cyber Security Seminar in February, a security researcher, Rafael Scheel, disclosed that he’d found an unintentional back door into nearly every smart TV. Designed for the transition period between traditional broadcast TV signals and internet-based TV signals, a feature that listened for both can, with a little bit of hardware, be used to gain remote access to a home network.
Scheel said that TVs connected to the internet are also equipped to handle digital video broadcasting—terrestrial (DVB-T) transmissions, a standard that’s built into most TVs. He found that smart TVs that support the hybrid broadcast broadband TV standard contain at least one critical vulnerability. Scheel said the exploit was relatively easy to install without the viewer realizing it and was impervious to reboots and factory resets. An attacker could then bootstrap from the compromised TV to the home network and on to any connected device. So far, the DVB-T attack appears limited to Europe; the U.S. does not use the standard.
In the case of the civil defense sirens, obscurity protected access for 50 years; in the case of the smart TVs, it arguably never did. As systems modernize, so should the fail-safes. Keeping an archaic means of access in a rapidly modernizing world doesn’t make sense when it comes to public safety or privacy.
Something similar happens with software applications and firmware. Whether intentional or not, legacy code remains. The process could be unconscious; over time, as new features are added, old ones may not be discontinued. Sometimes it could be the result of regulation—a mandate to keep older functionality. More often, legacy code is simply cut-and-pasted into newer versions, thereby increasing the chances for vulnerabilities.
Up to half of the software defects that create security problems are introduced at the design level. Simply testing software for security bugs within lines of code or penetration testing your applications is not enough. Ignoring how the code was written, or what it was originally expected to do, can also leave your organization vulnerable to attack.
Architecture risk analysis (ARA) is a process that begins before coding and helps organizations identify and manage potential problems. The process identifies a possible threat, then estimates the likelihood that the threat will materialize. An ARA is unique to each organization. A qualitative risk analysis assigns a rating (high, medium, or low) to each identified risk, making it easier to prioritize.
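To make the prioritization step concrete, here is a minimal sketch of a qualitative risk register. The threats, ratings, and the likelihood-times-impact scoring rule are all illustrative assumptions, not a prescribed methodology:

```python
# Map qualitative ratings onto ordinal values so risks can be ranked.
RATING = {"low": 1, "medium": 2, "high": 3}

# Hypothetical risk register; each entry carries a likelihood and impact rating.
risks = [
    {"threat": "legacy radio protocol accepts replayed commands",
     "likelihood": "high", "impact": "high"},
    {"threat": "default credentials on broadcast equipment",
     "likelihood": "medium", "impact": "high"},
    {"threat": "unused debug interface left in firmware",
     "likelihood": "low", "impact": "medium"},
]

def score(risk: dict) -> int:
    """Combine likelihood and impact into a single priority score."""
    return RATING[risk["likelihood"]] * RATING[risk["impact"]]

# Highest-scoring risks go to the top of the remediation backlog.
for risk in sorted(risks, key=score, reverse=True):
    print(f"{score(risk)}: {risk['threat']}")
```

The point of the exercise is not the arithmetic but the discipline: every identified risk gets an explicit rating, so the remediation order is an argument rather than a guess.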
Threat modeling is different from risk analysis in that it looks at who is likely to attack your system. Just as risk analysis is unique, so is threat modeling—you can’t crib from your neighbor’s threat model. The model must be specific to your organization.
Threat modeling and risk analysis should also precede any software application coding, including updates and patches. Additionally, threat modeling can be conducted later in the SDLC to help define areas for additional security testing. The process should be used to remove unnecessary features that could potentially become vulnerabilities.
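One way to see why a threat model can’t be cribbed from a neighbor is to sketch its core enumeration: which attackers can reach which of *your* assets. The attacker profiles and asset names below are entirely hypothetical, for illustration only:

```python
# Hypothetical, organization-specific threat model: attackers mapped to
# the assets they can plausibly reach. A different organization would
# have different attackers, different assets, and different reachability.
attackers = {
    "hobbyist with a software-defined radio": {"radio links"},
    "remote criminal": {"web interface", "firmware update channel"},
}

assets = ["radio links", "web interface", "firmware update channel", "admin LAN"]

def pairings(attackers: dict, assets: list) -> list:
    """Return each (attacker, asset) pair the model must address."""
    return [(who, asset)
            for who, reachable in attackers.items()
            for asset in assets
            if asset in reachable]

# Every pair printed here needs an explicit mitigation decision.
for who, asset in pairings(attackers, assets):
    print(f"consider: {who} -> {asset}")
```

In practice this enumeration is the starting point; each pairing then feeds the risk analysis described above, where likelihood and impact determine priority.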
No one thought a PLC system on an air-gapped network could be compromised; then came Stuxnet. Things change. There will be more creative attacks, each looking for old ways of thinking and using new techniques to exploit them. Organizations need to consider this fact today.
While you can’t easily go back and retrofit all the old devices and services, you can plan for a better tomorrow. In the meantime, we might have to put up with a few more sirens in the night.