OTA updates are complex; they don't come as a single, simple solution wrapped in a neat box, ready to deploy. Because of this, it's important to focus on the underlying systemic infrastructure for delivering your software, along with the processes and checks that verify everything works as it should under all conditions. An effective OTA program requires a great number of activities to be in place, and they tend to fall into three categories:
- Robust public key infrastructure (PKI)
- Well-defined software verification and validation practices
- Effective vehicle software deployment, storage, and activation
Robust Public Key Infrastructure
While key management at scale has been around for a while (think credit card processing in retail), it is still relatively new in automotive. Properly managing PKI for vehicles involves a wide range of technologies and participants, including OEMs, their suppliers, and the key-management organizations whose services make up OEM PKI solutions.
Bad actors can spoof or even steal authentication keys to deliver contaminated software or block access to services during a breach. Once a compromise occurs, managing it is no easy task: you may need to revoke and reissue not a hundred keys but tens to hundreds of thousands of keys across the supply chain. This is just one reason securing PKI is a multi-disciplinary, multi-organizational undertaking that requires full participation from every party.
Creating a simulation environment for testing and evaluating these processes is critical to success, because an attack is a matter of when, not if. A PKI program should test a wide range of fault conditions and corner cases an attacker could exploit. It should also follow ISO/SAE 21434 practices so that the process is part of the development program and evolves with software features and capabilities.
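The revocation problem described above can be sketched in miniature. This is an illustrative model only, not real cryptography: the registry API, supplier names, and key-ID scheme are all invented for the example. The point is the scale of a bulk revoke-and-reissue after one supplier's keys are compromised.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class KeyRegistry:
    """Toy fleet key registry: key_id -> (supplier, status)."""
    keys: dict = field(default_factory=dict)

    def issue(self, supplier: str, serial: int) -> str:
        # Derive a stable, unique key ID (stand-in for a real certificate).
        key_id = hashlib.sha256(f"{supplier}:{serial}".encode()).hexdigest()[:16]
        self.keys[key_id] = (supplier, "valid")
        return key_id

    def revoke_supplier(self, supplier: str) -> int:
        """Bulk-revoke every key issued to one compromised supplier."""
        revoked = 0
        for key_id, (owner, status) in self.keys.items():
            if owner == supplier and status == "valid":
                self.keys[key_id] = (owner, "revoked")
                revoked += 1
        return revoked

    def is_trusted(self, key_id: str) -> bool:
        return self.keys.get(key_id, (None, "unknown"))[1] == "valid"

registry = KeyRegistry()
fleet_keys = [registry.issue("tier1-telematics", n) for n in range(100_000)]
other_key = registry.issue("tier1-braking", 1)

count = registry.revoke_supplier("tier1-telematics")
print(count)                              # 100000
print(registry.is_trusted(fleet_keys[0]))  # False
print(registry.is_trusted(other_key))      # True
```

Even this toy version shows why the task is multi-organizational: every revoked key must be reissued by the right supplier and redistributed to the right ECUs before trust is restored.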
Well-Defined Software Verification and Validation Practices
Many aspects of software development factor into the verification and validation process. One rapidly growing area is hardware virtualization for software testing. With software changing quickly and the number of participating developers growing, it is impossible to provision enough physical test beds to test software effectively early in the development cycle. Virtualized hardware provides simulated targets that can be integrated into the CI/CD pipeline and made available to a much broader development audience. The same applies to post-production development: physical hardware is easier to obtain there, but the sheer volume of hardware required, space constraints, and scarcity of expertise quickly make it impractical.
Leveraging virtualization technologies like Synopsys Virtualizer™ and other simulation tools enables developers to deliver unit- and system-level tests quickly in a protected environment. This adds a level of rigor to the software test cycle that traditional testing environments cannot match, reducing the potential for defects or unintended interactions between test software and deployed software.
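To make the idea concrete, here is a deliberately tiny stand-in for a virtualized ECU, assuming nothing about any vendor tool's API. The protocol, register addresses, and test are all invented; the point is that a simulated target lets a unit test run in a CI pipeline with no bench hardware at all.

```python
class VirtualEcu:
    """Minimal simulated ECU: a register file behind a read/write protocol."""

    def __init__(self):
        # 0x10 is a made-up address for a firmware-version register.
        self.registers = {0x10: 0}

    def handle(self, request: tuple) -> tuple:
        op, addr, *payload = request
        if op == "read":
            return ("ok", self.registers.get(addr, 0))
        if op == "write":
            self.registers[addr] = payload[0]
            return ("ok", payload[0])
        return ("err", "unsupported")

def test_firmware_version_register():
    # The kind of early-cycle check a developer can run locally or in CI.
    ecu = VirtualEcu()
    assert ecu.handle(("write", 0x10, 42)) == ("ok", 42)
    assert ecu.handle(("read", 0x10)) == ("ok", 42)
    assert ecu.handle(("erase", 0x10)) == ("err", "unsupported")

test_firmware_version_register()
print("virtual ECU tests passed")
```

A real virtual platform models timing, buses, and peripherals with far more fidelity, but the workflow is the same: the simulated target replaces the bench, so every developer and every pipeline run gets "hardware" access.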
Effective Vehicle Software Deployment, Storage, and Activation
Even with the most capable and extensive testing, there will be issues. Causes can include poor post-deployment instructions, unintended interactions, or after-market adaptations; the scenarios are endless. What matters is having safeguards in place and adaptable practices for addressing a wide range of issues and conditions.
Establishing policies for verification and validation of updates is key. Many organizations have such policies in place, but do they account for partially failed deployments? Do they think outside the box? The important takeaway is that while simple checks, such as code signing, are essential, seemingly simple checks, such as catching a deployment to the wrong platform, are much harder to get right.
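The two checks just mentioned can be sketched as a pre-install gate an update agent might run. This is a hypothetical sketch: the manifest fields, platform IDs, and the idea of returning a failure list are illustrative, and the digest comparison stands in for real signature verification.

```python
import hashlib

def validate_update(manifest: dict, payload: bytes, vehicle_platform: str) -> list:
    """Return a list of check failures; an empty list means the update may proceed."""
    failures = []
    # Integrity check: payload digest must match what the (signed) manifest claims.
    if hashlib.sha256(payload).hexdigest() != manifest["sha256"]:
        failures.append("digest mismatch")
    # Wrong-platform guard: conceptually simple, but only as good as the
    # platform identifiers both sides agree on.
    if manifest["target_platform"] != vehicle_platform:
        failures.append("wrong target platform")
    return failures

payload = b"firmware-image-bytes"
manifest = {
    "sha256": hashlib.sha256(payload).hexdigest(),
    "target_platform": "gateway-ecu-v2",   # invented platform ID
}

print(validate_update(manifest, payload, "gateway-ecu-v2"))  # []
print(validate_update(manifest, payload, "gateway-ecu-v1"))  # ['wrong target platform']
```

The hard part in practice is not the comparison itself but maintaining trustworthy platform identifiers across model years, hardware revisions, and partially deployed fleets.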
Once an update has been pushed to a vehicle, storage and activation must work flawlessly. Storing update images requires architectural decisions, such as how much in-vehicle storage and flash memory to provision. How much is enough? Will it last the vehicle's lifespan? Will it survive the number of reads and writes the chosen memory type will experience? These decisions affect the volume, cadence, and size of software updates. Too little memory, and a copy of the current firmware can't be kept for rollback; too much, and manufacturing costs climb.
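One common way to guarantee that a rollback copy always exists, at the cost of roughly doubling flash, is A/B (dual-slot) storage: write the new image to the inactive slot, verify it, and only then switch. The sketch below is a minimal model under that assumption; the slot names and the pluggable `verify` step are illustrative.

```python
class AbUpdater:
    """Toy A/B update store: the active slot is never overwritten in place."""

    def __init__(self, current_firmware: bytes):
        self.slots = {"A": current_firmware, "B": b""}
        self.active = "A"

    def apply_update(self, image: bytes, verify) -> bool:
        inactive = "B" if self.active == "A" else "A"
        self.slots[inactive] = image      # write only to the spare slot
        if verify(image):
            self.active = inactive        # switch slots only after verification
            return True
        return False                      # old firmware remains active and intact

updater = AbUpdater(b"v1")
is_valid = lambda img: img.startswith(b"v")  # stand-in for real image verification

ok = updater.apply_update(b"v2", verify=is_valid)
print(ok, updater.active)                 # True B

# A corrupt image never becomes active; the previous firmware survives.
bad = updater.apply_update(b"corrupt", verify=is_valid)
print(bad, updater.active)                # False B
```

The trade-off in the text falls out directly: a single-slot design halves the storage bill but leaves no intact copy to roll back to if activation fails mid-write.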