Because EDA workloads are CPU-, memory-, and storage-intensive, and their demand is unpredictable, chip companies should seriously consider the public cloud, which offers greater flexibility and scalability than fixed on-premises capacity.
It is essential to make the transition to the cloud thoughtfully; a strategic approach is the key to unlocking the cloud's value. Review each current deployment to fully understand its compute and memory footprint, cost basis, and dependencies, and identify which workloads will actually benefit from the public cloud.
Do not default to standard cloud instances for EDA workloads simply because they are easily accessible. To achieve optimal performance, each workload must be matched to an appropriate instance type. One of the key metrics for choosing an instance is the memory-per-core ratio the workload needs: front-end simulation and library characterization tools typically require less memory per core, while back-end applications, such as static timing analysis or physical verification, require instances with much more memory per core.
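The matching step above can be sketched as a simple filter over an instance catalog: keep only the instance types whose memory-per-core ratio satisfies the workload, then take the tightest fit so you are not paying for unused memory. The instance names and specifications below are hypothetical placeholders, not a real provider's catalog.

```python
# Illustrative sketch: choose a cloud instance by memory-per-core ratio.
# Instance names and specs are hypothetical, not a real catalog.

INSTANCES = [
    {"name": "compute-opt-16", "vcpus": 16, "memory_gib": 32},   # 2 GiB/core
    {"name": "general-16",     "vcpus": 16, "memory_gib": 64},   # 4 GiB/core
    {"name": "memory-opt-16",  "vcpus": 16, "memory_gib": 128},  # 8 GiB/core
    {"name": "highmem-16",     "vcpus": 16, "memory_gib": 256},  # 16 GiB/core
]

def pick_instance(required_gib_per_core: float) -> dict:
    """Return the tightest-fit instance whose memory-per-core ratio
    meets the workload's requirement."""
    candidates = [i for i in INSTANCES
                  if i["memory_gib"] / i["vcpus"] >= required_gib_per_core]
    if not candidates:
        raise ValueError("no instance satisfies the memory-per-core requirement")
    return min(candidates, key=lambda i: i["memory_gib"] / i["vcpus"])

# Front-end simulation: modest memory per core
print(pick_instance(2)["name"])    # compute-opt-16
# Static timing analysis: large memory per core
print(pick_instance(12)["name"])   # highmem-16
```

In practice the catalog would come from the provider's API and the required ratio from profiling the workload on-premises, but the tightest-fit selection logic is the same.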
In addition, consider the combined cost of the infrastructure and the EDA licenses required to run the workloads, not the instance price alone.
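To make the combined-cost point concrete, the sketch below compares the total cost of one job on two configurations, where the license charge accrues for the full runtime alongside the infrastructure charge. All rates and runtimes are hypothetical placeholders, not real prices.

```python
# Illustrative sketch: total job cost = runtime x (infrastructure rate
# + EDA license rate). All numbers are hypothetical placeholders.

def total_job_cost(runtime_hours: float, infra_rate: float,
                   license_rate: float) -> float:
    """Combined hourly-metered cost of cloud instances and EDA licenses."""
    return runtime_hours * (infra_rate + license_rate)

# A slower, cheaper instance vs. a faster, pricier one: when the license
# rate dwarfs the instance rate, shorter runtime can win overall.
slow = total_job_cost(runtime_hours=10, infra_rate=2.0, license_rate=50.0)
fast = total_job_cost(runtime_hours=6, infra_rate=5.0, license_rate=50.0)
print(slow, fast)  # 520.0 330.0
```

The design point this illustrates is that infrastructure and license spend must be evaluated together: a configuration that looks more expensive per hour can still lower the total bill for a job.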