While all the above paints a daunting picture, I'm here to tell you that designing chips in the cloud reliably, predictably, and cost-effectively is indeed possible. In a prior life, I ran marketing and IT for an ASIC and IP company. How those two responsibilities found me is a story for another day.
Our engineering infrastructure ran in a privately hosted data center. Compute requirements for parts of a typical complex ASIC design flow could balloon by 5–10X. A fixed compute footprint didn't serve us well in this scenario. So, we embarked on an ambitious project to move our entire FinFET-class design flow to Google Cloud Platform. These were early days for cloud migration, and Google was interested in a test case for chip design on its cloud. Good news for us.
One at a time, we tamed challenges like the NFS file system problem, the tiered storage problem (solid-state disks are very expensive; use them wisely), the massive data transmission problem, and the interactive latency problem. That's just a short list of what we faced. In the end, we were successful. On one sunny Friday afternoon, we transferred *all* engineering workloads from our private data center to two Google Cloud instances, in Iowa and Singapore. Our private data center went dark.
The best part was that no one noticed. Without missing a beat, ASIC and IP design work continued worldwide. The lack of impact was the best reward we could hope for. We began this effort before there were complete cloud offerings from EDA companies and others. Today, the world is a different place: support for chip design in the cloud is better developed, and improving all the time. Indeed, the outlook for chip design in the cloud is clear skies with a warming trend. There will be a lot more to say about this trend going forward.
In the meantime, if you want to learn the whole story of my adventure in cloud computing, you can find it here.