Cyber-physical attacks are on the rise. As the IoT creeps further into our daily lives, so does the attack surface. What can we do to keep ourselves safe?
The original version of this post was published in Forbes.
The fact that a cyber attack can have physical consequences is not exactly breaking news. The use of the computer worm Stuxnet to destroy nearly a thousand, or about a fifth, of the centrifuges in Iran’s Natanz nuclear enrichment facility is now a decade in the rearview.
The warnings of a “Cyber Pearl Harbor” go back much further than that—former counterterrorism czar Richard A. Clarke invoked the term in 2002—although such warnings became much more high-profile in 2012 when then Defense Secretary Leon Panetta used it in a speech in New York to the Business Executives for National Security.
Nothing at that kind of catastrophic scale has happened yet, and numerous experts have scoffed in the past at the idea that a hostile nation-state could cripple the U.S. grid or other critical infrastructure for months, or even years, at a time.
But what is increasingly being called the “convergence” of cyber and physical doesn’t have to mean national catastrophe. It could mean regional or local. It could mean personal—your own workplace, house, car or even your scooter. And that threat is indeed growing.
The list of potential cyber-physical attacks that could yield physical damage on a smaller but still lethal scale is growing longer. Which should be no surprise. The “attack surface”—the number of connected systems and devices—has been increasing exponentially since the Internet of Things (IoT) started becoming more like the Internet of Everything (IoE).
Verizon’s 2018 Data Breach Investigations Report found that more than one in 10 data breaches in the previous year had a physical component.
As Threatpost put it earlier this month, “The stat underscores the realities of a shrinking gap between physical and cyber infrastructures.”
It’s why blogger, author and encryption expert Bruce Schneier, CTO at IBM Resilient, titled his latest book Click Here to Kill Everybody. While acknowledging the title was intended to be clickbait, he said it was only marginal hyperbole since “everything is a computer.”
Which is itself only marginal hyperbole, when everything from toasters to refrigerators, vacuum cleaners, lawn mowers, thermostats, farm equipment, cars, home security systems and more are increasingly connected to the internet.
“Computers are moving into a space that was only physical,” he has said numerous times.
Aloke Chakravarty, partner and co-chair of the Investigations, Government Enforcement and White Collar Protection practice group at Snell & Wilmer, agrees. “All sectors of society are increasingly reliant on smart technology to command and control the tools that it uses to work, play and live,” he said.
So does Craig Spiezle, managing director with the Agelight Advisory Group and chairman emeritus of the Online Trust Association. He said the physical risks from cyber attacks on everyday devices that until recently had nothing to do with the internet are steadily increasing.
“The risks are real,” he said. “Think about the domino effect as devices within a home network attack or disable others or create a bot storm disabling the other devices.
“The Online Trust and Integrity working group led by the Agelight Advisory Group has been focused on this issue for the past two years, including briefing NIST [National Institute of Standards and Technology], the FTC [Federal Trade Commission] and most recently the Consumer Product Safety Commission.”
Indeed, the group could focus on just the past couple of months, during which there have been reports on vulnerabilities that could allow hackers to interfere with connected devices ranging from ocean-going ships to scooters.
Pen Test Partners documented how easy it would be for a motivated hacker to capsize a cargo ship by hacking into its ballast pump controllers and causing them all to pump from port to starboard ballast tanks.
To accomplish that, in the words of PTP’s Ken Munro, would be “trivial.”
Forbes reported last month on a pair of hackers in Italy who, about a year ago, took control of construction cranes, excavators, scrapers and other large machinery armed with nothing but their laptops, exploit code they had prepared and radio hardware to transmit it.
And a blog post from mobile security company Zimperium reported earlier this month on an electric scooter made by the Chinese electronics company Xiaomi. Its researchers found that due to improper validation of the user password at the scooter’s end, “all commands can be executed without the password.”
That meant a remote attacker, up to 100 meters away, could send unauthenticated commands over Bluetooth to a targeted vehicle that could include locking it or causing it to brake or accelerate suddenly. The company responded that the problem was with a third-party product, and it was working to resolve it.
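The class of bug Zimperium described can be sketched in a few lines. This is an illustrative model, not Xiaomi’s actual protocol or code: the point is that if only the app checks the password and the device itself executes whatever it receives, authentication is effectively absent.

```python
class VulnerableScooter:
    """Illustrative device that executes any command it receives over radio."""

    def __init__(self, password):
        self.password = password  # stored on the device, but never checked below
        self.locked = False

    def handle_command(self, command, password=None):
        # BUG: the password argument is ignored, so any sender within
        # radio range can lock, brake, or accelerate the device.
        if command == "lock":
            self.locked = True
        return "executed: " + command


class PatchedScooter(VulnerableScooter):
    """Fixed variant: the device itself validates credentials before acting."""

    def handle_command(self, command, password=None):
        if password != self.password:  # real firmware should compare in constant time
            return "rejected: bad credentials"
        return super().handle_command(command, password)
```

In the vulnerable version, `handle_command("lock")` succeeds with no password at all; the patched version refuses unless the caller proves it knows the secret. Validating on the device side, not just in the companion app, is the fix.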
It is impossible to eliminate all online risks like those. But it is possible to reduce them significantly. For about as long as there have been cyber attacks, experts have been saying that security testing should be part of the entire software development life cycle (SDLC)—that security has to be “built in” to the software that operates connected devices, especially given how rapidly hackers improve their techniques.
“Cyber threats continue to evolve, so security systems must evolve along with the changing threats,” said Jeff Dennis, a partner with Newmeyer & Dillion. “A cyber defense system that was effective in 2017 may not be as effective currently due to that evolution.”
Chakravarty agrees, noting that the evolution will have to include addressing artificial intelligence, smart technologies, and universal networking.
“Cybersecurity must be built into advanced technologies by design,” he said. “Where there is digital or electronic communication, there is a potential hacking opportunity. And where decisions are increasingly made by algorithm or other automated process, even data-at-rest, if manipulated, could dramatically change what a machine does in the real world.”
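One standard defense against the data-at-rest manipulation Chakravarty describes is to authenticate stored data so an automated process refuses to act on anything that has been tampered with. A minimal sketch using an HMAC, with key management deliberately out of scope (in practice the key would live in a secure element or key management service, not in source code):

```python
import hmac
import hashlib

KEY = b"device-secret-key"  # placeholder; never hard-code a real key


def seal(data, key=KEY):
    """Append an HMAC-SHA256 tag to the data before storing it."""
    return data + hmac.new(key, data, hashlib.sha256).digest()


def unseal(blob, key=KEY):
    """Verify the tag before use; raise if the stored data was manipulated."""
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("stored data failed integrity check")
    return data
```

If an attacker flips even one bit of a sealed configuration file, `unseal` raises instead of handing the altered values to the machine’s control logic, which is exactly the failure mode Chakravarty warns about.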
But while some of that is happening, it is not keeping pace with the speed of IoT convergence.
Chakravarty noted that while governments have become more involved in mandating security, they have mainly focused on privacy and PII—personally identifiable information.
Dennis said some of that is changing, noting that California’s IoT regulation “requires manufacturers of connected devices to utilize ‘reasonable’ measures to ensure that the systems they build have adequate security.”
But of course, “reasonable” could mean many different things in different circumstances.
So, as is often the case in industries ranging from automotive to tobacco, legal liability may move the needle more quickly than anything else.
Physical damages from insecure devices could open “various avenues of legal liability,” Chakravarty said, ranging from civil suits brought by injured users, business partners and shareholders to more aggressive sanctions by government to criminal liability.
To avoid that list of potential calamities takes measures that should be fundamental but are a long way from universal. Besides building security into products before they hit the market, companies need to “be transparent in contractual relationships as to what risks are being borne by whom—what are the obligations of each business partner, software provider, hardware manufacturer, network component, and in some cases, what is a user responsibility,” Chakravarty said.
“To limit liability, companies need to show that they take security more seriously than the users of their products do.”
So far, that is still very obviously a long way from reality. But a few major class action lawsuits might change it.