How can the electronics industry continue as Moore’s law slows, system complexity increases, and the number of transistors balloons to trillions?
Multi-die systems have emerged as a way to go beyond Moore's law and address the challenges of systemic complexity. They enable accelerated, cost-effective scaling of system functionality, reduced risk and time to market, lower system power alongside increased throughput, and rapid creation of new product variants. For applications like high-performance computing (HPC), highly automated vehicles, mobile, and hyperscale data centers, multi-die systems are becoming the system architecture of choice.
Multi-die systems are an elegant solution, to be sure, but they are not without challenges, in areas including software development and modeling, power and thermal management, hierarchical test and repair, die-to-die connectivity, system yield, and more. How do you ensure that your multi-die system will perform as intended? How do you do it all efficiently and swiftly? And from design exploration all the way to in-field monitoring, which steps in between are most important to consider from an overall system standpoint?
In short, designing multi-die systems is quite different from designing monolithic systems-on-chip (SoCs). Every familiar step, including partitioning, implementation, verification, signoff, and test, must be performed from a system perspective, moving from one die to multiple dies. What works for monolithic SoCs may not be adequate for these more complex systems. Read on for a deeper understanding of multi-die systems: their market drivers; how key steps including architecture exploration, software development, system validation, design implementation, and manufacturing and reliability can be adapted for the system; and opportunities for continued semiconductor innovation.