Ansys CTO Prith Banerjee got the discussion going by describing the real-world benefits of scaling up in the cloud to meet the compute-intensive demands of chip design workloads. Ansys, said Banerjee, leverages high-performance computing (HPC) to run incredibly complex simulations that enable semiconductor companies to accurately analyze electrical and thermal interactions for billions of instances on monolithic SoCs and multi-die systems.
As Banerjee pointed out, chip design simulation workloads typically require massive amounts of parallel processing power provided by GPU clusters and large shared memory pools. Accessing this infrastructure in the cloud—and on demand—allows customers to pay only for the resources they need on a per-project basis. Moreover, cloud vendors continuously upgrade their hardware with the latest GPUs and CPUs, purpose-built AI accelerators, and the newest memory technologies.
Dermot O’Driscoll, vice president of product solutions at Arm, expressed similar sentiments, noting that designing and verifying new chips requires a huge amount of compute power. The cloud, said O’Driscoll, offers Arm a viable way to obtain this compute capacity while reducing its global data center footprint. According to O’Driscoll, over 50% of Arm’s EDA workloads now run in the cloud, allowing engineers to scale up and efficiently execute hundreds of thousands of concurrent jobs.
However, O’Driscoll emphasized that more cycles aren’t always better, even if they are readily available in the cloud. He sees AI-driven EDA tools enabling semiconductor companies to further streamline chip design workflows by intelligently analyzing and managing processes in real time. O’Driscoll added that EDA vendors are already doing a “fantastic job” of optimizing cloud-based chip design with sophisticated ML models that help semiconductor companies hit power, performance, and area (PPA) targets faster.