Automotive compute architectures are undergoing a fundamental transition. What began as collections of discrete ECUs evolved into domain controllers and then centralized compute. That trajectory is now reaching its limits as vehicles become AI-defined systems, where perception, reasoning, planning, and human–machine interaction are driven by increasingly large and heterogeneous models.
Several forces are converging: exploding bandwidth from growing sensor counts and resolutions; increasing model diversity spanning convolutional neural networks (CNNs), transformers, graph neural networks (GNNs), reinforcement learning, and multimodal AI models; hard real-time control constraints; and strict power and cost budgets. Monolithic SoCs struggle to scale across these competing dimensions, driving the shift toward multi-die design.
As automotive platforms evolve toward AI-defined vehicles, Synopsys and SiMa.ai are collaborating to address a core challenge: how to scale machine-learning (ML) performance using multi-die designs while preserving the safety, determinism, and longevity required for automotive deployment. This work focuses on safety-aware ML acceleration as a building block within next-generation centralized compute platforms.
Multi-die design enables heterogeneous scaling of AI accelerators, CPUs, safety controllers, memory, and I/O, while improving yield and architectural flexibility. Scalable AI allows automakers to deploy limited AI capability in entry-level vehicles and expanded AI in high-performance premium vehicles. However, automotive multi-die designs differ from data-center multi-die designs because of functional safety, deterministic execution, and long product lifecycles. A key architectural insight is functional decoupling: AI compute is adaptive, while safety and control must remain deterministic and certifiable.
Effective automotive AI platforms separate the AI cognition layer from the deterministic control and safety domains, with digital-twin and validation layers providing continuous monitoring and explainability. Hardware realization of this concept relies on dedicated dies (also called chiplets) with clearly scoped responsibilities, connected through safety-aware interconnects that carry both data and supervision signals.
The machine learning accelerator (MLA) chiplet, shown below, illustrates how an automotive ML accelerator can be architected for safety-critical deployment. At its core is a workload-optimized ML inference engine supporting perception, sensor fusion, scene understanding, behavior prediction, trajectory evaluation, and multimodal interaction workloads. These include CNN- and transformer-based perception, GNN-based behavior modeling, reinforcement learning-assisted planning, and multimodal inference combining vision and language inputs.
Rather than embedding safety logic within the ML datapath, the MLA chiplet implements parallel supervision. Safety logic monitors ML execution for faults or anomalous behavior, enforces runtime constraints on AI outputs, and signals fault conditions to external safety controllers. This separation supports reuse of certified safety components across evolving AI models and accelerator generations.
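The parallel-supervision pattern can be sketched in a few lines. This is an illustrative Python model, not SiMa.ai's implementation: the `SafetyMonitor` class, `TrajectoryOutput` fields, and the numeric limits are all assumptions chosen to show the shape of the idea, with the monitor running beside the ML datapath, checking output envelopes, and latching a fault for an external safety controller.

```python
from dataclasses import dataclass

@dataclass
class TrajectoryOutput:
    """Hypothetical ML planner output observed by the monitor."""
    lateral_accel_mps2: float   # requested lateral acceleration
    speed_mps: float            # requested speed

class SafetyMonitor:
    """Enforces runtime constraints on AI outputs without touching the ML datapath."""

    def __init__(self, max_lat_accel: float = 3.0, max_speed: float = 38.0):
        self.max_lat_accel = max_lat_accel  # illustrative certified limits
        self.max_speed = max_speed
        self.fault = False                  # latched fault flag

    def check(self, out: TrajectoryOutput) -> bool:
        ok = (abs(out.lateral_accel_mps2) <= self.max_lat_accel
              and 0.0 <= out.speed_mps <= self.max_speed)
        if not ok:
            # Latch the fault; in hardware this would be signalled to the
            # external safety controller rather than stored locally.
            self.fault = True
        return ok

monitor = SafetyMonitor()
monitor.check(TrajectoryOutput(1.2, 25.0))   # within envelope -> True
monitor.check(TrajectoryOutput(5.0, 25.0))   # violates constraint -> False, fault latched
```

Because the monitor only observes outputs and limits, it can stay certified while the ML models behind it are retrained or replaced.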
Dedicated security and isolation mechanisms protect ML models, parameters, and data, while enforcing trust boundaries between AI workloads and safety-critical control paths. This is essential for platforms supporting software-updatable AI models over long automotive lifecycles.
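One concrete consequence of software-updatable models is that an update must be authenticated before it crosses the trust boundary into the accelerator. The sketch below uses HMAC-SHA-256 purely as a stand-in for whatever signing and attestation scheme a real platform would use; the key, function names, and payloads are illustrative assumptions.

```python
import hashlib
import hmac

# Placeholder for a per-device provisioned secret; a real platform would
# keep this in a hardware root of trust, not in software.
DEVICE_KEY = b"provisioned-per-device-secret"

def sign_model(model_blob: bytes) -> bytes:
    """Produce an authentication tag over the model image."""
    return hmac.new(DEVICE_KEY, model_blob, hashlib.sha256).digest()

def verify_before_load(model_blob: bytes, tag: bytes) -> bool:
    """Gate at the trust boundary: reject tampered or unsigned models."""
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_model(model_blob), tag)

blob = b"model-weights-v2"
tag = sign_model(blob)
verify_before_load(blob, tag)           # authentic update -> True
verify_before_load(blob + b"x", tag)    # tampered update -> False
```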
The MLA chiplet is designed as a composable building block within a centralized multi-die AI compute module. In a typical system, MLA chiplets with high-performance external LPDDR memory provide inference acceleration, host dies manage execution and scheduling, safety island dies enforce system-level safety policies, and I/O dies handle high-bandwidth sensor input. Interfaces support both high-throughput data movement and low-latency safety signaling, enabling scalable AI performance without compromising determinism.
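The dual-natured interface can be modeled as a single link carrying two traffic classes, where supervision messages always preempt bulk data. This is a minimal behavioral sketch under assumed names and priorities, not a description of any real die-to-die protocol.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

PRIO_SUPERVISION = 0   # low-latency fault/heartbeat signals (assumed class)
PRIO_DATA = 1          # high-throughput sensor/feature traffic (assumed class)

@dataclass(order=True)
class Message:
    priority: int
    seq: int                              # preserves FIFO order within a class
    payload: str = field(compare=False)

class Interconnect:
    """Toy model of a safety-aware link: supervision is dequeued before data."""

    def __init__(self):
        self._queue = []
        self._seq = count()

    def send(self, priority: int, payload: str) -> None:
        heapq.heappush(self._queue, Message(priority, next(self._seq), payload))

    def recv(self) -> str:
        return heapq.heappop(self._queue).payload

link = Interconnect()
link.send(PRIO_DATA, "camera frame 1042")
link.send(PRIO_SUPERVISION, "MLA0: watchdog heartbeat")
link.recv()   # supervision message arrives first despite being queued second
```

Keeping supervision traffic on a strictly higher-priority class is one way to preserve bounded fault-signaling latency even when the data channel is saturated by sensor streams.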
Multi-die designs are becoming foundational to AI-defined vehicles. By combining workload-optimized ML acceleration with deterministic safety supervision, strong security isolation, and system-level composability, the Synopsys and SiMa.ai approach demonstrates how AI capability can scale with automotive functional safety, reliability, and longevity.