So what are the testing “recommendations” (mandates) of the new standards, and how can developers comply with them?
The FDA doesn’t specify a tool that developers must use; it simply sets forth the testing methods it expects. What follows is analysis and recommendations from experts on how developers can apply the FDA/UL “guidance” to build security into their products and improve their chances of qualifying quickly for premarket certification.
Known vulnerability testing
Known vulnerability testing focuses on platform and dependency weaknesses. It involves checking software for vulnerabilities that have already been discovered in products, third-party libraries, and open source libraries, and that are cataloged in the National Vulnerability Database (NVD). According to NIST, the National Institute of Standards and Technology, which maintains it, the NVD augments publicly reported vulnerability data with additional analysis, a database, and a fine-grained search engine.
That is just a start, however, according to Chandu Ketkar, principal consultant with Synopsys.
He added that there are many open source and commercial tools that can scan a product’s underlying dependencies for vulnerabilities listed in those databases. “When available, it is important to use a scanner that is platform specific,” he said.
The bottom line is that while it may be impossible to make a product immune to any future zero-day attacks, it should at a minimum be free of known vulnerabilities.
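The core mechanic of a known vulnerability scan is simple to illustrate: compare each dependency’s pinned version against advisories for that package. The sketch below is a toy in-memory version; the advisory records and CVE identifiers are invented for illustration, and a real scanner would pull entries from the NVD or platform-specific advisory feeds.

```python
# Toy known-vulnerability scanner: match pinned dependency versions
# against a list of advisories. KNOWN_VULNS is fabricated sample data.
from dataclasses import dataclass

@dataclass(frozen=True)
class Advisory:
    package: str
    vulnerable_below: tuple  # versions lower than this are affected
    cve_id: str              # hypothetical identifiers

KNOWN_VULNS = [
    Advisory("examplelib", (2, 4, 1), "CVE-0000-0001"),
    Advisory("cryptokit", (1, 0, 9), "CVE-0000-0002"),
]

def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def scan(dependencies: dict) -> list:
    """Return CVE IDs for any dependency pinned to a vulnerable version."""
    findings = []
    for adv in KNOWN_VULNS:
        version = dependencies.get(adv.package)
        if version and parse_version(version) < adv.vulnerable_below:
            findings.append(adv.cve_id)
    return findings
```

For example, `scan({"examplelib": "2.3.0", "cryptokit": "1.1.0"})` flags only `examplelib`, since the pinned `cryptokit` version is above the vulnerable range.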
Malware testing
Malware testing focuses primarily on two things: finding out whether the library or executable a developer is about to deploy contains malware, and making sure that the system or server on which the software is being deployed is itself free of malware.
“Most basic malware scanning tools use a signature-based approach to identify and remove malware from your system,” Ketkar said.
More advanced tools will have a more sophisticated, behavior-based approach. “These tools watch processes for telltale signs of malware and compare to a list of known malicious behaviors,” he said.
There are many open source and commercial malware removal tools that do those kinds of testing.
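At its simplest, the signature-based approach Ketkar describes amounts to hashing content and comparing it against a denylist of known-bad digests. This minimal sketch shows the idea; the signature database here is fabricated for illustration, and behavior-based tools go well beyond this.

```python
# Minimal sketch of signature-based malware detection: compute the
# SHA-256 digest of content and check it against known-bad signatures.
import hashlib

# Hypothetical signature database: digests of known malicious files.
MALWARE_SIGNATURES = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def is_known_malware(data: bytes) -> bool:
    """Flag content whose SHA-256 digest matches a known signature."""
    return hashlib.sha256(data).hexdigest() in MALWARE_SIGNATURES
```

The obvious limitation, and the reason behavior-based tools exist, is that any change to the file, even a single byte, produces a different digest and evades the signature.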
Malformed input testing
Malformed input testing, a kind of automated testing also called “fuzzing,” is often the first form of evaluation an attacker uses against a target. Fuzzing sends randomized inputs to programs to find test cases that cause anomalous behavior.
There are many types of fuzzing, but the FDA/UL standards focus on two. The more basic form is mutational fuzzing. Mutational fuzzers use a valid sample input as a seed and alter it randomly to see how a target reacts.
The more sophisticated form is generational fuzzing. Rather than mutating existing input, generational fuzzers use a state engine to generate input from scratch.
Ketkar stated that “these fuzzers are used to test custom protocols” present on target devices.
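A mutational fuzzer of the kind described above can be sketched in a few lines: take a valid seed input, flip random bytes, and record any input that makes the target misbehave. The `parse_record` function below is a deliberately fragile stand-in target invented for this example, not a real protocol parser.

```python
# Toy mutational fuzzer: mutate a valid seed and collect inputs that
# crash the target. parse_record is a fabricated, intentionally
# fragile parser used only as a demonstration target.
import random

def parse_record(data: bytes) -> str:
    # "Protocol": a one-byte length header, a colon, then an ASCII body.
    header, _, body = data.partition(b":")
    return body[: header[0]].decode("ascii")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    buf = bytearray(seed)
    pos = rng.randrange(len(buf))
    buf[pos] = rng.randrange(256)  # flip one random byte
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 500, rng_seed: int = 0) -> list:
    """Return the mutated inputs that raised an unexpected exception."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            parse_record(case)
        except Exception:
            crashes.append(case)
    return crashes
```

Even this naive loop quickly turns up inputs the parser mishandles (non-ASCII bytes, a corrupted header); generational fuzzers find deeper bugs by generating structurally valid protocol messages instead of random byte flips.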
Structured penetration testing
As most security practitioners know, a penetration test, or “pen test,” is a simulated cyber attack against your product, system, or network to check for vulnerabilities. If an ethical hacker can break into your systems, malicious attackers can too.
The most basic version is fully automated, but it is not as rigorous—and therefore not as effective—as the advanced versions.
At the advanced level, Ketkar said, “We are talking about proxy tools—such as Burp—that allow penetration testers to perform manual testing,” which can be much more effective.
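To make the automated end of the spectrum concrete, here is the kind of simple check an automated scan might run: flagging HTTP responses that lack common security headers. This is a sketch only; a real penetration test, especially the manual proxy-driven kind Ketkar describes, goes far beyond header checks.

```python
# Illustrative automated pen-test check: report HTTP security headers
# missing from a response. Header list is a common baseline, not a
# complete or authoritative set.
REQUIRED_HEADERS = (
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
)

def missing_security_headers(response_headers: dict) -> list:
    """Return required security headers absent from the response."""
    present = {name.lower() for name in response_headers}
    return [h for h in REQUIRED_HEADERS if h.lower() not in present]
```

A scanner would run checks like this against every endpoint it discovers; a human tester uses such findings as starting points for deeper manual probing.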
Software weakness analysis
Software weakness analysis aims to eliminate the most common categories of defects. Chris Clark, business development manager at Synopsys, said these categories are cataloged in the CWE Top 25, the CWE/SANS “On the Cusp” list, and the OWASP Top 10.
There is a wide range of methods for finding and reducing these defects. But Ketkar said that while some tools for software weakness analysis are in the works, the analysis is still done mostly by experts.
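One concrete instance of a CWE Top 25 weakness makes the category idea tangible: SQL injection (CWE-89). The vulnerable function below splices user input directly into a query string; the fixed version uses a parameterized query so the driver treats input as data, not SQL. The table and data are invented for the demonstration.

```python
# CWE-89 (SQL injection) demonstrated with sqlite3: a vulnerable
# string-formatted query versus a parameterized one.
import sqlite3

def make_demo_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany(
        "INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")]
    )
    return conn

def lookup_vulnerable(conn, username):
    # CWE-89: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def lookup_fixed(conn, username):
    # Parameterized query: input can never change the query structure.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Passing the classic payload `x' OR '1'='1` to the vulnerable version returns every row in the table; the fixed version returns nothing, because the payload is matched literally as a name.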
Static source code analysis
Static code analysis is a kind of debugging performed by tools that scan source code without executing the program.
There are multiple tools available that do static analysis. The most basic simply search the source code for a certain pattern, such as an empty catch block or a function whose return value is not captured by the caller, and report that. The more advanced tools use sophisticated techniques such as taint, dataflow, and control flow analysis to find more complex defects.
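The basic pattern-search approach is easy to sketch. Using Python’s standard `ast` module, the checker below walks a syntax tree, without running the code, and flags empty exception handlers, the Python analogue of the empty catch block mentioned above. Real tools layer taint, dataflow, and control flow analysis on top of this kind of traversal.

```python
# Minimal pattern-based static analyzer: parse source into a syntax
# tree and flag except blocks whose body is only `pass`.
import ast

def find_empty_handlers(source: str) -> list:
    """Return line numbers of exception handlers that silently swallow errors."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler):
            if all(isinstance(stmt, ast.Pass) for stmt in node.body):
                findings.append(node.lineno)
    return findings
```

Run against a snippet like `try: ... except ValueError: pass`, it reports the line of the offending `except` clause, which is exactly the kind of report a basic static analyzer produces.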
Static binary and bytecode analysis
Binary code analysis scans compiler-generated machine code. It is useful when the source code isn’t available and all that is accessible are vendor libraries and executables.
But, Ketkar said, many experts strongly believe that even if analysts have access to the source code, binary analysis also provides highly useful results. “For example, seemingly simple code in a programming language such as Java creates copies of objects in memory that could be harvested by attackers,” he said.
Rios and Ahmadi said deep binary analysis can help catch back doors, design flaws, implementation issues, and configuration issues.
Bytecode analysis scans bytecode: object code that is executed by a program, usually a virtual machine, rather than directly by the hardware processor.
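Python’s own bytecode offers a handy illustration of inspecting compiled code without running it. The sketch below uses the standard `dis` module to scan a function’s instructions for references to a denylisted global such as `eval`; the `risky` and `safe` functions are contrived examples for the demonstration.

```python
# Minimal bytecode-level check: disassemble a function (without
# executing it) and look for loads of a denylisted global name.
import dis

def calls_name(func, banned: str) -> bool:
    """Return True if the function's bytecode references the given name."""
    return any(
        ins.opname in ("LOAD_GLOBAL", "LOAD_NAME") and ins.argval == banned
        for ins in dis.get_instructions(func)
    )

def risky(expr):
    return eval(expr)  # intentionally suspicious for the demo

def safe(x):
    return x + 1
```

The same principle, matching patterns in compiled instructions rather than source text, is what lets binary and bytecode analyzers examine vendor libraries and executables when no source is available.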