What makes medical devices hackable? The same thing that makes websites hackable: software vulnerabilities. But the consequences are far worse than stolen data.
Security researchers Jonathan Butts and Billy Rios wanted to make one thing clear at the start of their presentation. “The benefits of implanted medical devices outweigh the risks (for most people),” read one of their opening slides.
But they probably wouldn’t have been presenting a session at Black Hat titled “Understanding and Exploiting Implanted Medical Devices” if that were all there was to say about it, and if some of those “outweighed” risks weren’t serious.
The safety and security of such devices are, after all, crucial to the health and even survival of patients who use them. If a malicious attacker gets control of them, the results could, obviously, be much more catastrophic than the loss of money or even identity.
There are multiple indications that things are improving in the medical device ecosystem, and will continue to improve. One of the most significant is the recent adoption by the Food and Drug Administration (FDA) of UL 2900-2-1 as a “consensus standard” for premarket certification of the cyber security of medical devices.
But there are still plenty of vulnerabilities out there, as well as—at least in some cases, according to Butts and Rios—resistance to acknowledging them and making necessary fixes.
The two demonstrated that some devices they tested, including infusion pumps, pacemakers, and patient monitoring systems, had vulnerabilities that they found relatively easy to exploit remotely.
Rios, founder of WhiteScope, and Butts, founder of QED Secure Solutions, said that in recent years they have issued 500 advisories to vendors. Most have been cooperative, working with them on both “coordinated disclosure” of problems and fixing those problems.
But they unloaded on one vendor, Medtronic, which they said was both uncooperative and unresponsive. They said that 18 months after they disclosed vulnerabilities in devices made by the company, there had been one patch but no real fix, and not even an acknowledgment that a fix was needed.
“They spent more time trying to twist the story than fixing it—and we told them how to fix it,” Butts said.
Medtronic, which knew Rios and Butts would be at Black Hat, posted two new advisories on its website just days beforehand. In those advisories, the company acknowledged a “potential” vulnerability in its MiniMed Paradigm insulin pumps and vulnerabilities in its MyCareLink Patient Monitors. But in the case of the monitors, Medtronic contended that exploiting the vulnerabilities would require physical access. So, the company claimed, “the risks are controlled (meaning there is sufficiently low [acceptable] residual risk of patient harm)” (brackets in original).
Medtronic also issued an advisory in February, and then updated it in June, regarding vulnerabilities in its CareLink 2090 pacemaker programmers. The advisory said that “existing security controls mitigate this issue” and that the company had “added periodic integrity checks for certain files associated with the software deployment network.”
But Rios and Butts said the company’s story had changed multiple times since they first notified Medtronic in January 2017.
In the case of the patient monitors, “they said earlier that there was no impact on patient safety,” Butts said. “But when they heard we were going to be talking about it at Black Hat, they said the risk is ‘sufficiently low.’ Is it none or is it low? That matters to me.”
There is also a disagreement between Medtronic and the researchers over whether attacks can be launched remotely. The company has said they can’t, but the two insist they can.
In the case of the pacemaker programmer, Rios and Butts demonstrated how they could connect to the SDN—the software deployment network—and obtain credentials to the point where they could launch attacks against an individual, a clinic, or “the entire ecosystem and affect every patient.”
They didn’t actually break into Medtronic’s SDN, which they said would have been illegal. But they created a proof of concept showing how it could be done.
Butts said the vulnerabilities would allow an attacker to generate shocks through a pacemaker when they weren’t needed, or withhold them when they were needed.
The researchers said when they first contacted Medtronic, they gave the company research documentation on how they could break into the devices and the SDN. They said Medtronic representatives told them, “We’re setting up a testing environment.”
But, they said, they were told the same thing for weeks and then months.
They noted that the FDA’s post-market guidance calls for device makers to notify customers of any vulnerabilities within 30 days and to fix them within 60 days.
But in this case, they said, the company acknowledged eight months later that it had never attempted to reproduce their research. Then, after 10 months, the company said there were no patient safety implications.
Not to mention that the programmers run on the legacy (and no longer supported) Windows XP operating system.
“That’s not acceptable,” Butts said. “We have no financial interest here—we’re not invested in anything related to this. We’re just passionate about patient safety, and they have the money to fix this. Do you think Microsoft, Google, or Adobe would take 18 months to push out a patch?”
The two said that in their view, Medtronic is an outlier. “Most vendors are trying to do the right thing,” Butts said. “But situations like this show the industry still has a long way to go.”
It isn’t just insecure medical devices that put patients at risk, of course. A session titled “Pestilential Protocol: How Unsecure HL7 Messages Threaten Patient Lives” focused on privacy and what are called “availability attacks.” These are attacks like the infamous WannaCry ransomware outbreak of May 2017, which spread to more than 100 countries and disrupted the availability of care.
Since the attacks took down computer systems, they also delayed patient appointments, treatments, and in some cases, surgeries.
The panelists—physicians Jeff Tully and Christian Dameff, along with graduate student Maxwell Bland, all of the University of California—also demonstrated a classic man-in-the-middle (MitM) attack between lab information systems and the HL7 interface engine to modify lab results. The modification, they said, could lead doctors to believe a patient had diabetic ketoacidosis and administer insulin, triggering a toxic, possibly fatal, reaction.
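What makes such tampering feasible is that HL7 v2 messages are pipe-delimited plaintext with no built-in encryption, authentication, or integrity checking, so an intermediary on the network can rewrite a field undetected. As a minimal sketch (using a hypothetical, simplified lab-result message, not the panelists’ actual demo), an attacker in the middle could alter a glucose value like this:

```python
# Hypothetical, simplified HL7 v2 lab-result (ORU) message. Segments are
# carriage-return separated; fields within a segment are pipe-delimited.
# There is no signature or checksum, so nothing detects modification.
message = "\r".join([
    "MSH|^~\\&|LAB|HOSP|EMR|HOSP|202501010830||ORU^R01|MSG0001|P|2.3",
    "PID|1||123456||DOE^JANE",
    "OBX|1|NM|GLU^Glucose||98|mg/dL|70-110|N",  # normal glucose result
])

def tamper(raw: str) -> str:
    """Rewrite the OBX observation value (OBX-5) in transit --
    the kind of change a MitM could make to plaintext HL7 traffic."""
    out = []
    for segment in raw.split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBX" and fields[3].startswith("GLU"):
            fields[5] = "550"   # falsified value suggesting ketoacidosis
            fields[8] = "HH"    # critical-high abnormality flag
        out.append("|".join(fields))
    return "\r".join(out)

modified = tamper(message)  # receiving system has no way to tell
```

The fix the researchers argued for is equally simple to state, if not to deploy: carry HL7 traffic over authenticated, encrypted channels (e.g., TLS) so a forged value cannot be injected silently.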