So why did we choose DRCs for our test case? DRCs ensure that designs operate correctly and can be manufactured in the foundry. Running them on traditional on-premise compute resources can take precious time, especially as designs get larger and more complex.
As today's design sizes grow, the number of process rules has increased. In fact, process rules in many of today's designs can number in the thousands, and the increased design complexity can result in hundreds of steps. For multi-die systems that have billions of transistors, a DRC or layout-versus-schematic (LVS) job can run for multiple days and utilize hundreds of CPU cores.
The increased compute power needed within shrinking time-to-market (TTM) windows creates physical verification challenges. This is especially true as process nodes advance from 7nm, to 5nm, to 3nm, and beyond. For instance, at 3nm a runset can contain over 15,000 complex rules and require 10x this number of DRC computational operations to execute the rules. As a result, full-chip DRC sign-off can consume tens of thousands of CPU hours for a single iteration. While physical verification has always been compute intensive, the size and complexity of today's designs take this challenge to an entirely new level.
Serial dependencies in DRC and LVS jobs mean that purchasing more compute power does not necessarily equate to faster run times. Even when IC validation demands computational scale, some of that compute power sits idle during the serial stages. If you don't find a way to optimize your computational resources for this kind of scenario, it will impact your bottom line: you will be paying for those unused resources.
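The effect of those serial stages can be sketched with Amdahl's law, which bounds the speedup when part of a job cannot be parallelized. The serial fraction and core counts below are illustrative assumptions, not measurements from any real DRC flow:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Upper bound on speedup when serial_fraction of the job runs serially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Illustrative: if 10% of a flow is serial, going from 100 to 1,000 cores
# barely helps -- the speedup is capped below 10x no matter how many
# cores you buy, which is why idle capacity ends up on your bill.
for cores in (100, 500, 1000):
    print(f"{cores:5d} cores -> {amdahl_speedup(0.10, cores):.1f}x speedup")
```

This is why elasticity matters more than raw core count: paying for a fixed pool sized for the parallel peaks means paying for idle cores during the serial troughs.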
Using cloud computing for your IC verification can help you eliminate this waste. With cloud verification, you can scale from hundreds of on-premise CPU cores up to thousands of CPU cores in the cloud, and back down again. This elasticity gives you flexibility, agility, and scale, using only the compute resources you need, when you need them. The DRCs inside your runset can be distributed across multiple cores to run in parallel, optimizing compute resources and saving you time and money.
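The fan-out of independent rule checks across workers can be sketched as below. This is a minimal illustration, not a real DRC engine: the rule names and the `check_rule` stub are hypothetical placeholders, and a real flow would evaluate each rule against the layout database.

```python
from concurrent.futures import ProcessPoolExecutor

def check_rule(rule_name: str) -> tuple[str, int]:
    # Placeholder: a real DRC engine would evaluate this rule against
    # the layout database; here we just return a dummy violation count.
    return rule_name, 0

def run_drc(rules: list[str], max_workers: int) -> dict[str, int]:
    # Independent rules fan out across worker processes; with elastic
    # cloud capacity, max_workers can scale with the size of the runset.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(check_rule, rules))

if __name__ == "__main__":
    results = run_drc([f"RULE_{i}" for i in range(100)], max_workers=4)
    print(len(results), "rules checked")
```

The key property is that each rule check is independent, so the same runset finishes faster simply by raising `max_workers` when more cores are available.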