The FDA’s adoption of UL 2900-2-1 as a consensus standard for premarket certification of medical devices means the world is about to change—for the better.
Any effort to overhaul the cyber security of connected medical devices is likely to take considerable time and energy. Given that many of them are made to last decades, securing them while they’re in use can make turning an ocean liner look positively nimble.
Still, the announcement last month by the U.S. Food and Drug Administration (FDA) that it has adopted the ANSI (American National Standards Institute)-approved UL 2900-2-1 as a “consensus standard” for premarket certification of medical devices means the world is about to change for the better. Especially for patients.
As has been reported and acknowledged for more than a decade, those devices were designed to work reliably and to keep patients safe. They were never intended to be connected to the internet.
But in the future, they will be.
UL 2900-2-1 calls for multiple levels of security testing throughout the software development life cycle (SDLC), including “structured penetration testing, evaluation of product source code, and analysis of software bill of materials.”
And while the FDA calls the standards “guidance” rather than a legal mandate, they might as well be one: manufacturers that fail to follow them will likely find the approval process even more cumbersome than it is now, to the point where they might never get their products to market.
That is a good thing: better cyber security in such devices is long overdue, since vulnerabilities can put patients at risk of injury or, in extreme cases, death.
As Billy Rios, founder of WhiteScope, and Mike Ahmadi, now global director of IoT security at DigiCert and former global director of critical systems security at Synopsys, put it in a presentation several years ago, testing is crucial for a number of reasons.
So what are the testing “recommendations” (mandates) of the new standards, and how can developers comply with them?
The FDA doesn’t specify a tool that developers must use—it simply sets forth the different methods of testing it expects. What follows is some analysis and recommendations from experts on how developers can apply the FDA/UL “guidance” to build security into their products and have a better chance to qualify more quickly for premarket certification.
Known vulnerability testing focuses on platform and dependency weaknesses. It involves testing software for vulnerabilities that have already been discovered in products, third-party libraries, and open source components and that are cataloged in the National Vulnerability Database (NVD). According to NIST, the National Institute of Standards and Technology, the NVD augments the CVE list with additional analysis, a database, and a fine-grained search engine.
That is just a start, however, according to Chandu Ketkar, principal consultant with Synopsys.
He added that there are many open source tools and commercial tools that can scan for vulnerabilities in the underlying dependencies in the various databases. “When available, it is important to use a scanner that is platform specific,” he said.
The bottom line is that while it may be impossible to make a product immune to any future zero-day attacks, it should at a minimum be free of known vulnerabilities.
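As an illustration of the idea, here is a minimal sketch of checking a software bill of materials (SBOM) against a table of known-vulnerable component versions. The component names, versions, and CVE identifiers are hypothetical placeholders invented for this example; real scanners query feeds such as the NVD rather than a hard-coded table.

```python
# Hypothetical known-vulnerability feed: component -> {bad_version: CVE id}.
# All entries below are illustrative, not real advisories.
KNOWN_VULNS = {
    "openssl": {"1.0.1": "CVE-XXXX-0001"},
    "zlib": {"1.2.8": "CVE-XXXX-0002"},
}

def scan_sbom(sbom):
    """Return (component, version, cve) for every known-vulnerable entry."""
    findings = []
    for component, version in sbom:
        cve = KNOWN_VULNS.get(component, {}).get(version)
        if cve:
            findings.append((component, version, cve))
    return findings

if __name__ == "__main__":
    sbom = [("openssl", "1.0.1"), ("zlib", "1.2.11"), ("libxml2", "2.9.4")]
    for component, version, cve in scan_sbom(sbom):
        print(f"{component} {version}: flagged by {cve}")
```

The point of the sketch is the workflow, not the data: the same lookup works whether the feed is a local table or an NVD query, which is why keeping the SBOM machine-readable matters.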
Malware testing focuses primarily on two things: finding out if the library or the executable a developer is about to deploy contains some malware, and making sure that the system/server on which the software is being deployed does not contain malware.
“Most basic malware scanning tools use a signature-based approach to identify and remove malware from your system,” Ketkar said.
More advanced tools will have a more sophisticated, behavior-based approach. “These tools watch processes for telltale signs of malware and compare to a list of known malicious behaviors,” he said.
There are many open source and commercial malware removal tools that do those kinds of testing.
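The signature-based approach Ketkar describes can be sketched in a few lines: hash a blob of data and compare the digest against a set of known-bad signatures. The “malicious” sample and its signature below are stand-ins invented for the example; real scanners rely on curated signature feeds.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the blob; the 'signature' in this simple scheme."""
    return hashlib.sha256(data).hexdigest()

# Placeholder payload standing in for a known malware sample.
MALICIOUS_SAMPLE = b"this-stands-in-for-a-malicious-payload"
SIGNATURES = {sha256_of(MALICIOUS_SAMPLE)}

def is_flagged(data: bytes) -> bool:
    """True if the blob's digest matches a known-bad signature."""
    return sha256_of(data) in SIGNATURES

if __name__ == "__main__":
    print(is_flagged(MALICIOUS_SAMPLE))      # matches the signature
    print(is_flagged(b"benign firmware"))    # does not
```

Exact-hash matching is also why signature-based tools miss novel or repacked malware: change one byte and the digest no longer matches, which is what motivates the behavior-based tools Ketkar mentions.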
Malformed input testing, a kind of automated testing also called “fuzzing,” is often the first form of evaluation an attacker uses against a target. Fuzzing sends randomized inputs to programs to find test cases that cause anomalous behavior.
There are many types of fuzzing, but the FDA/UL standards focus on two. The more basic form is mutational fuzzing. Mutational fuzzers use a valid sample input as a seed and alter it randomly to see how a target reacts.
The more sophisticated form is generational fuzzing. Rather than mutating existing input, generational fuzzers use a state engine to generate input from scratch.
Ketkar stated that “these fuzzers are used to test custom protocols” present on target devices.
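A mutational fuzzer of the kind described above can be sketched simply: start from a valid seed input, flip random bytes, and record any mutants that crash the target. The `parse_record` target below is a toy parser written for this example, not a real device protocol.

```python
import random

def parse_record(data: bytes) -> int:
    """Toy target: expects b'LEN:payload' and returns the payload length."""
    length, _, payload = data.partition(b":")
    n = int(length)            # raises ValueError on non-numeric garbage
    return len(payload[:n])

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip one randomly chosen byte of the seed to a random value."""
    buf = bytearray(seed)
    i = rng.randrange(len(buf))
    buf[i] = rng.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, rounds: int = 200, rng_seed: int = 0):
    """Run mutants against the target, collecting inputs that raise."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(rounds):
        mutant = mutate(seed, rng)
        try:
            parse_record(mutant)
        except Exception as exc:
            crashes.append((mutant, type(exc).__name__))
    return crashes

if __name__ == "__main__":
    found = fuzz(b"5:hello")
    print(f"{len(found)} crashing inputs out of 200 mutants")
```

A generational fuzzer replaces `mutate` with a state engine that builds inputs from a grammar of the protocol, which is what makes it effective against the custom protocols Ketkar mentions.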
As most security practitioners know, a penetration test, or “pen” test, is a simulated cyber attack against your product, system, or network to check for vulnerabilities. If a good person can hack your stuff, then obviously the bad people can too.
The most basic version is fully automated, but it is not as rigorous—and therefore not as effective—as the advanced versions.
At the advanced level, Ketkar said, “We are talking about proxy tools—such as Burp—that allow penetration testers to perform manual testing,” which can be much more effective.
Software weakness analysis looks to remove the most common forms of weaknesses by eliminating common categories of defects. Chris Clark, business development manager at Synopsys, said these defect categories are cataloged in the CWE Top 25, the CWE/SANS On the Cusp list, and the OWASP Top 10.
There is a wide range of methods for finding and reducing these defects. But Ketkar said that while some automated tools for software weakness analysis are in the works, it is currently done mostly by hand, by experts.
Static code analysis is a kind of automated review performed by tools that scan code without executing the program.
There are multiple tools available that do static analysis. The most basic simply search the source code for a certain pattern, such as an empty catch block or a function whose return value is not captured by the caller, and report that. The more advanced tools use sophisticated techniques such as taint, dataflow, and control flow analysis to find more complex defects.
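The basic pattern-matching style of static analysis can be illustrated with Python's standard `ast` module: walk the parse tree without running the code and flag empty exception handlers, the Python analogue of the empty catch block mentioned above. The `find_empty_handlers` helper is invented for this sketch.

```python
import ast

def find_empty_handlers(source: str):
    """Return line numbers of `except` blocks whose body is only `pass`."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler):
            if all(isinstance(stmt, ast.Pass) for stmt in node.body):
                findings.append(node.lineno)
    return findings

if __name__ == "__main__":
    sample = (
        "try:\n"
        "    risky()\n"
        "except ValueError:\n"
        "    pass\n"
    )
    print(find_empty_handlers(sample))   # prints [3]
```

The advanced tools described above go further than this single-pattern check by tracking how tainted data flows between statements, but the basic shape, analyzing a representation of the code rather than running it, is the same.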
Binary code analysis scans compiler-generated machine code. It is useful when the source code isn’t available and all that is accessible are vendor libraries and executables.
But, Ketkar said, many experts strongly believe that even if analysts have access to the source code, binary analysis also provides highly useful results. “For example, seemingly simple code in a programming language such as Java creates copies of objects in memory that could be harvested by attackers,” he said.
Rios and Ahmadi said deep binary analysis can help catch back doors, design flaws, implementation issues, and configuration issues.
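One elementary binary-analysis step, extracting printable strings from raw machine code in the manner of the Unix `strings` tool, can already surface hard-coded credentials or debug back doors when no source is available. The “binary” below is a fabricated byte string for illustration.

```python
import re

def extract_strings(blob: bytes, min_len: int = 6):
    """Return runs of printable ASCII characters of at least min_len bytes."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, blob)]

if __name__ == "__main__":
    # Fake binary: opaque bytes with an embedded hard-coded password.
    blob = b"\x7fELF\x02\x01\x00\x00admin_password=hunter2\x00\x90\x90"
    print(extract_strings(blob))   # prints ['admin_password=hunter2']
```

Deep binary analysis of the kind Rios and Ahmadi describe goes much further, disassembling and modeling the machine code itself, but even this trivial pass shows why compiled libraries are not opaque to attackers.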
Bytecode analysis scans bytecode: intermediate object code that is executed by a program, usually called a virtual machine, rather than directly by the hardware processor.
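As a concrete illustration, Python's standard `dis` module disassembles the bytecode its own virtual machine executes. The sketch below lists a function's instructions without running it and flags loads of risky built-ins such as `eval`; the `suspicious` helper and its tiny deny-list are invented for this example.

```python
import dis

def suspicious(code):
    """Return names loaded via LOAD_GLOBAL/LOAD_NAME that look dangerous."""
    risky = {"eval", "exec"}
    return [
        ins.argval
        for ins in dis.get_instructions(code)
        if ins.opname in ("LOAD_GLOBAL", "LOAD_NAME") and ins.argval in risky
    ]

def handler(payload):
    return eval(payload)   # deliberately unsafe, for the demo

if __name__ == "__main__":
    print(suspicious(handler))   # prints ['eval']
```

Commercial bytecode analyzers for Java or .NET apply the same principle at scale, which is how they can audit a vendor-supplied .jar or .dll without any source code.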
The list of testing methods expected by the FDA/UL may appear to be significant and challenging. But experts say that given the threat landscape, security testing should be standard operating procedure in any software development process, especially where medical devices are involved. And organizations should keep in mind that some tools are more effective than others and automation is key.
“Companies should consider what they can do to tie these certification requirements into their production pipeline so the certification process is part of daily operations,” Clark said.
“This will require considerable up-front effort but will lead to much more cyber resilient solutions in a relatively short time.”
Taylor Armerding is an award-winning journalist who left the declining field of mainstream newspapers in 2011 to write in the explosively expanding field of information security. He has previously written for CSO Online and the Sophos blog Naked Security. When he’s not writing he hikes, bikes, golfs, and plays bluegrass music.