Smartphones that become smarter with each generation. AI algorithms and big data teaming up to generate insights, driving further progress on major challenges from vaccine discovery to climate change. Advanced robotics that can produce goods or perform surgery. The level of intelligence in our devices and systems is growing rapidly, and with it comes far greater demand for more functionality, higher bandwidth, better performance, and lower power—often within the same or smaller footprints.
Moore’s law has been a stalwart since its inception, delivering a doubling of computing performance every couple of years along with overall power reductions. In our current data-driven era, however, performance needs to scale at a much faster rate just to keep pace. Processing, memory, bandwidth—they’re all hitting hard walls with monolithic SoCs. In addition, we’re fast approaching the reticle limits of manufacturing, at which point density scaling will slow substantially even as costs increase.
This is where multi-die systems can shine, providing a new avenue to spark continued innovation.
Multi-die systems—integrated heterogeneous chiplets—can boast trillions of transistors, providing the flexibility to assign dies for particular functions to the process technologies best suited to their unique requirements as well as to overall system performance and cost targets.
Design starts for multi-die systems are anticipated to grow significantly over the next several years, starting in 2023. The biggest adopters of this architecture in the immediate future will likely be those in the high-performance computing (HPC) and hyperscale data center spaces, given their compute-intensive workloads. Chip designers in the mobile sector also have multi-die designs in the works, taking advantage of the PPA benefits for their space-constrained devices. Among the many flavors of multi-die, some mobile manufacturers are tapping into advanced packaging to increase chip density. Automotive chip designers are adopting multi-die architectures as well (the Tesla D1 for AI model training is one example), and we’re seeing increasing interest from more chipmakers in the sector. It’s no wonder—by using different dies for different specialized functions, automotive subsystems can be better positioned to meet overall PPA and cost requirements.
The reality is, multi-die systems are being rolled out across all application segments, given their cost, functional integration, and scaling advantages. So, the question is not whether different sectors will move this way, but when. All signs point to 2023 as the start of a trajectory toward mass adoption.