Posted by Synopsys Editorial Team on December 10, 2015
We used to joke that the only thing the Java Intelligent Network Infrastructure (JINI) specification was good for was running Java on your toaster (a hat tip to John Romkey’s Internet Toaster, no doubt). We’d all get a good laugh at poor Bill Joy’s expense when the subject of writing autonomous coffee makers would come up at work. I mean, why would anyone want a $20,000 Internet Refrigerator?
Since then, the world has changed a lot. We’ve gone from jokes about autonomous household appliances to wearable devices that share every aspect of our lives with our network of friends and “friends,” cars that make intelligent decisions in microseconds to avoid an accident, and connected medical devices that literally hold the power of life and death. Some experts estimate that by 2020 we could have as many as 75 billion connected devices. Think about that number for a minute. Even if our population reached 10 billion by 2020, that would mean an average of more than seven connected devices for every human on the planet.
It may not be Skynet from “The Terminator,” but every day we hand over control of our lives to our devices. And somewhere along the way, we forgot to ask ourselves: who is ensuring that the “Internet of Things” is secure?
The reality is that the push to meet consumer demand—and beat competitors to the market before consumers find The Next Big Thing—means that quality and security are most often an afterthought. Before we know it, we’ve got hundreds of thousands of patients whose lives depend on Internet-connected insulin pumps, and only now are we asking, “What if a bad guy gains access to the device?”
How do we ensure that devices are undergoing the right amount of scrutiny before being released, and how do we implement a set of security standards for the “internet of all the things” without impeding progress? First, we need to understand the risks.
In the information security industry, we call this threat modeling: a practice by which we identify and quantify the risks to a system, and then determine how to address them.
This initial step ensures that we’ve identified our weakest areas, and it also serves as a road map for developing a set of security standards. During this phase, stakeholders have the opportunity to ask questions like “What happens if a bad guy does gain access to our device?” Transparency into this process helps keep the dedicated community engaged with technology development and shows accountability on the part of the product company.
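To make the practice concrete, here is a minimal sketch of what a threat model can look like in code. It borrows the STRIDE categories (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege) and a simple likelihood-times-impact score for ranking; the insulin-pump threats listed are illustrative, not drawn from any real device.

```python
from dataclasses import dataclass, field

# STRIDE: a common taxonomy for categorizing threats.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

@dataclass
class Threat:
    category: str        # one of the STRIDE categories
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (catastrophic)

    @property
    def risk(self) -> int:
        # A simple likelihood-times-impact score for ranking.
        return self.likelihood * self.impact

@dataclass
class ThreatModel:
    asset: str
    threats: list = field(default_factory=list)

    def add(self, threat: Threat) -> None:
        self.threats.append(threat)

    def prioritized(self) -> list:
        # Highest-risk threats first: the road map for mitigation.
        return sorted(self.threats, key=lambda t: t.risk, reverse=True)

# Hypothetical threats for an internet-connected insulin pump.
model = ThreatModel("insulin pump")
model.add(Threat("Tampering", "Unsigned firmware accepted over USB", 3, 5))
model.add(Threat("Spoofing", "Paired controller is not authenticated", 2, 5))
model.add(Threat("Denial of service", "Radio jamming blocks dosing commands", 2, 4))

for t in model.prioritized():
    print(f"{t.risk:>2}  {t.category}: {t.description}")
```

The scoring scheme is deliberately crude; the point is that writing threats down and ranking them forces the “what if a bad guy gains access?” question to be asked before release, not after.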
Devices, like modern applications, rely heavily on developers’ ability to leverage existing technology, components, and frameworks. Companies regularly contract out development of components that they don’t have the in-house expertise to create and maintain. This means that individual parts of their product may all be developed by different people, none of whom interact with each other. They’re siloed.
As a result, when something goes wrong with a device, it’s rarely a single component at fault. More often there is a cascade effect: one component interacts with another in an unexpected way, which leads to an unexpected interaction with another component, and ultimately to a flaw.
To avoid such problems, we must build a canon of knowledge about every component within a system and how those components interact with each other.
All of these third-party components, custom-developed software, and integrations with third-party services are an integral part of device functionality. It is imperative that we establish a culture of accountability for everyone involved in production.
We must first establish a set of standards that vendors must adhere to. Standards don’t solve the problem, but they build a foundation on which programs can be built to ensure that every component is held to a specific level of quality. Standards also help ensure that the integrators of those components are provided the best support possible, ultimately giving consumers assurance that due diligence has been done before the technology is in their hands.
To succeed, it is critical that standards do not become just a box that needs to be checked to release something to market. Standards like PCI DSS have great intentions for protecting consumers, but because they have become a roadblock to development, developers have found innovative ways to shortcut (or bypass altogether) the requirements in the interest of a market push. This, of course, completely undermines the purpose of having standards at all.
As consumers, we’ve accepted that any technology we purchase will inevitably have a flaw at some point — whether it’s a firmware update or a component failure that requires the device to be “patched.” As producers, the industry is beginning to accept that it is no longer a question of “if” but of “when” a flaw will be exploited. Vendors should have a plan in place for how they will address flaws in their systems, and that plan should be transparent and available for community review.
In today’s world this is a tough nut to crack. We’ve connected all our devices but haven’t put much thought into how to quickly push a fix out to potentially millions of devices. This is a design consideration that needs to be fleshed out long before we put devices on the shelf, and definitely long before someone figures out how to push malicious firmware to the vehicle control systems of potentially millions of cars while they’re driving down the road.
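One non-negotiable piece of that design: a device should refuse any update it cannot verify came from its vendor. Here is a minimal sketch of that check. A real device would use an asymmetric signature (the vendor signs with a private key; the device holds only the public key); an HMAC with a shared key stands in here, and the key and firmware bytes are invented, so the example runs with the Python standard library alone.

```python
import hashlib
import hmac

# Hypothetical device key, provisioned at manufacture. In a real
# deployment this would be a public key, not a shared secret.
DEVICE_KEY = b"provisioned-at-manufacture"

def sign_image(image: bytes) -> bytes:
    """Vendor side: produce an authentication tag for a firmware image."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, tag: bytes) -> bool:
    """Device side: flash the image only if the tag verifies."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    # Constant-time comparison, to avoid leaking tag bytes via timing.
    if not hmac.compare_digest(expected, tag):
        return False  # reject tampered or unsigned firmware
    # ...flash the verified image here...
    return True

firmware = b"\x7fELF...v2.0"          # stand-in for a real image
good_tag = sign_image(firmware)

assert apply_update(firmware, good_tag)                    # genuine update
assert not apply_update(firmware + b"backdoor", good_tag)  # tampered image
```

The verification itself is a few lines; the hard part the paragraph above describes — distributing keys, revoking them, and pushing fixes to millions of fielded devices — is exactly what has to be designed in before the product ships.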
The reality of the problem that is “the internet of things” is that there is no simple solution, no quick fix. To truly solve the problem, the industry itself must adapt and adopt new best practices and policies. It isn’t just the “IoT” industry that needs to implement change, however. The information security industry as a whole needs to adapt and change its approach. We can no longer afford to take a reactive stance to securing our software and devices; the time to become proactive is now.