How to prevent IoT hacks: Secure your software before you release it. It’s not that hard. So why aren’t more IoT device manufacturers doing it?
Welcome to a summary of a few not-so-random recent events in the vast Internet of Things (IoT):
NBC News reported that an “avid” user of smart home technology told the station that after hearing noises from his seven-month-old baby’s bedroom, he checked and found a “deep male voice” coming from the Nest security camera in the nursery. The online intruder had also taken control of the thermostat in the room and set the temperature to 90 degrees.
This was just one of multiple stories in recent days about Nest glitches.
A security researcher who goes by the handle LimitedResults reported that, using a few tools like a hacksaw and a soldering iron, he was able to hack his way into a LIFX smart light bulb and extract the Wi-Fi credentials (stored in plaintext in flash memory), the root certificate, and the RSA private key from the firmware.
More than a year ago, the Norwegian Consumer Council (NCC) analyzed four smartwatches for children—wearable mobile phones that let parents communicate with and track their offspring.
They reported “critical flaws” that could allow hackers to “take control of the apps, thus gaining access to children’s real-time and historical location and personal details.” Hackers could even “contact the children directly, all without the parents’ knowledge.”
And a year later? Pen Test Partners reported last week that they looked at the same watches and found, “Guess what: a train wreck. Anyone could access … real time child location, name, parents details etc.” And these vulnerabilities “covered multiple brands and tens of thousands of watches.”
Researchers from Brazil’s Federal University of Pernambuco and the University of Michigan in the U.S. have published a study of 32 smartphone apps used to configure and control 96 of the top-selling Wi-Fi and Bluetooth-enabled devices sold on Amazon.
They found that “31% of the apps do not use any crypto to protect the device-app communication and that 19% use hardcoded keys. A significant fraction of the apps (40–60%) also use local communication or local broadcast communication, thus providing an attack path to exploit lack of crypto or use of hardcoded encryption keys.”
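Why a hardcoded key is nearly as bad as no crypto at all: every device and every copy of the app share one secret, and anyone who unpacks the app binary owns it. A toy illustration (the key value and the XOR stream cipher are deliberately simplistic stand-ins, not any vendor's actual scheme):

```python
import hashlib
from itertools import count

# The flaw: one key baked into every copy of the app, recoverable by
# anyone who decompiles it. (Hypothetical value for illustration.)
HARDCODED_KEY = b"iot-vendor-master-key-2019"

def keystream(key: bytes, length: int) -> bytes:
    """Toy stream cipher keystream: SHA-256 in counter mode. Illustration only."""
    out = b""
    for i in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The app "encrypts" a command to a smart device on the local network…
packet = xor_crypt(HARDCODED_KEY, b'{"cmd": "unlock", "device": "front-door"}')

# …but an attacker who extracted the same key from the app binary
# decrypts every user's traffic, on every device.
print(xor_crypt(HARDCODED_KEY, packet))
```

Per-device keys established during pairing, or standard TLS, avoid this failure mode: compromising one device or one app copy reveals nothing about the rest.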
All of which would seem to make the declaration of blogger, activist, and author Cory Doctorow, while crude and rude, pretty much on target. In a brief post reacting to the smart light bulb post, he called it the “internet-of-sh–.” Never let it be said that Doctorow doesn’t tell you how he really feels.
The fallout from these stories? Google, which owns Nest, told both NBC and CNN that some customers had used passwords that had been breached and published on other sites. The company said—as experts relentlessly do—that users should use unique passwords and two-factor authentication (2FA), which adds a layer of security.
And yes, users are responsible for using the strongest security measures available to them. Still, Google knows just about everything about everybody. So it seems the company could have notified Nest owners if they were using a compromised password.
Indeed, the company just launched the Password Checkup Chrome extension, designed to notify users if they try to sign into a website with a username and password exposed in a breach. And according to a report in December, after a Nest owner was alerted by a white hat hacker who had gained access to his camera, Nest said it had reset all the accounts that were using compromised passwords.
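Tools like Password Checkup can check for breached credentials without ever seeing the raw password, using a k-anonymity scheme: the client hashes the password, sends only a short hash prefix, and compares the returned suffixes locally. A sketch of that idea with a local stand-in for the breach corpus (the "breached" passwords are made up; a real client would query a remote service such as the Pwned Passwords range API):

```python
import hashlib

def sha1_hex(pw: str) -> str:
    return hashlib.sha1(pw.encode()).hexdigest().upper()

# Stand-in for the server side: a breach corpus indexed by 5-char hash prefix.
# In reality this lives on a remote service; only the prefix ever leaves the client.
BREACH_CORPUS = {}
for leaked in ["password", "hunter2", "letmein"]:  # hypothetical breached passwords
    h = sha1_hex(leaked)
    BREACH_CORPUS.setdefault(h[:5], set()).add(h[5:])

def is_breached(password: str) -> bool:
    h = sha1_hex(password)
    prefix, suffix = h[:5], h[5:]
    # The client sends only `prefix`; the server returns every suffix under it.
    candidates = BREACH_CORPUS.get(prefix, set())
    # The final comparison happens locally, so the server never learns the password.
    return suffix in candidates

print(is_breached("hunter2"))       # True  — known-breached
print(is_breached("xK9#vQ2!mRwP"))  # False — not in the corpus
```

The point of the prefix trick is that many unrelated passwords share any given 5-character prefix, so the lookup reveals almost nothing about which password the user actually typed.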
In the other incidents, LIFX said it had addressed the vulnerabilities in the light bulbs with automatic firmware updates.
And in the case of at least one of the smartwatches—Gator—Pen Test Partners said that after some back and forth, the vendor fixed the vulnerability in 48 hours.
Still, these are examples of what seems to be an incorrigible problem in the ubiquitous and still explosively growing IoT: Vendors tend to fix things only after a breach, or after a security researcher exposes a vulnerability. They don’t prevent issues by building robust security into products before releasing them.
But at least some security experts say it’s not that manufacturers are ignoring security entirely. It’s just that the IoT is growing so much faster than “incremental improvements” in security, as Ted Harrington, executive partner at Independent Security Evaluators, puts it. It sounds a bit like a disease epidemic—for every patient the doctors cure, two more people get sick.
“Some manufacturers are starting to prioritize better security,” he said. “But the industry is growing so fast that these incremental gains are being vastly outpaced by the overall lack of progress in the hugely expanding pool of market players.”
Larry Trowell, principal consultant at Synopsys, contends that the anecdotes don’t necessarily reflect the overall reality. “The rate of devices being tested is going up, as there are more people in the profession now,” he said. “Not to mention that the security testing tools are getting much more refined.”
Referring to the case of the smartwatches, he said most pen testers give companies 90 days to fix vulnerabilities. But Pen Test Partners only gave Gator 30 days. “Granted, it was for child safety. It’s a tough call, but it was still a little rushed, in my opinion,” he said.
In the case of the Nest hack, he said, the user should have used 2FA. “Yes, Nest could have forced him to use 2FA, and they could have done a better job of informing him about the option. But I don’t see that as their system having a weakness,” he said.
Still, the ongoing flood of stories about IoT breaches has reached the point that some experts are calling for government regulation of IoT security. Blogger, author, and encryption guru Bruce Schneier, CTO of IBM Resilient, has made that case before Congress. He argued that since “everything is a computer,” the hacking of IoT devices can have catastrophic physical consequences. His latest book is titled Click Here to Kill Everybody.
But not everybody is on the regulatory bandwagon. “Regulation requires too many trade-offs, is too broad to work well for any particular use case, and takes so long to enact that it often is irrelevant in the marketplace by the time it goes into effect,” Harrington said.
Trowell said the problem with government regulation is that the technology moves faster than legislation does. “Government is rather good at regulating things that have been done before and are understood,” he said. “Not so much at things that people haven’t done.”
That, he noted, is what is constantly happening in the IoT. “Each element of the IoT that gets created has something new—something that makes it different and, in a lot of cases, less secure,” he said.
He also noted that unless members of Congress have significant technical expertise, “there’s a question about how they could cope with the creation of such legislation in a maintainable way.”
So what, if anything, can significantly move security of the IoT in the right direction?
“The problem will be solved when the IoT security movement gains enough momentum that it cannot be ignored,” Harrington said. “IoT Village [organized by his firm and running at multiple conferences throughout the year] is a good example of many stakeholders—from security researchers to device manufacturers to regulators—collaborating. It’s already generated powerful momentum within the corners of the security research community, but not yet enough traction in the industry.”
Trowell said it comes down to one word: Attention.
“The more the spotlight shines on the flaws, the fewer flaws there will be,” he said. “Cars got better because of seat belts, frames, airbags. People saw the problem, realized it was important, and demanded a change. These things were sold in top-model cars before they were mandatory.
“It’s important that security professionals explain why these things are important, and also how to fix them. If we only do one of these two tasks, nothing will ever get done,” he said.