The success of the cloud model in other industries has not been lost on the traditionally conservative semiconductor sector. Seeing Fortune 500 companies and renowned research institutions entrust their most important sales, HR, finance, engineering, and other critical operational information to the cloud is convincing chip design decision makers that it can work for them, too. In fact, the granular telemetry, monitoring, and control that cloud providers offer over their resources often surpasses the security mechanisms that corporations have deployed in their on-prem data centers. So, while security remains a concern that no one wants to diminish, it's becoming less of a top-tier issue.
Use cases requiring HPC in scientific research, medical research, finance, and energy prove that when flexible access to the most powerful resources is needed, it isn't always necessary, or even optimal, to have all the horsepower on-prem. Many of the most compute-intensive applications can run just as efficiently in safe and secure data centers, easing the administrative overhead and giving design teams greater flexibility. And our own customers have seen notable success with cloud-optimized tools for digital implementation, library characterization, sign-off, custom layout, and physical verification. This experience has shown us that the flexibility and elasticity of cloud-based computing work on many levels.
A convergence of key occurrences is driving adoption of the cloud for chip design:
- Systemic complexity of hyper-convergent integration along with scale complexity of Moore’s law demand hyper-convergent design flows, which, in turn, require exponentially more compute and EDA resources
- Cloud service providers have scaled HPC-optimized infrastructure availability, affordability, and capacity to handle these workloads
- AI, which is being used in more design flows and design tools, has a natural multiplicative effect on the first two factors
There’s been a significant shift in semiconductor companies’ confidence and trust in a shared or managed model for computing and managing the resources needed. Improvements in security posture, identity management, and other infrastructure enhancements have motivated engineering teams and executives to cross the chasm and adopt a more flexible and cost-effective approach to supporting their engineering efforts.
And this being the semiconductor industry, a primary driver is scale and performance. With chips and systems getting larger and more complex, the need for additional computing resources is almost insatiable. Setting up and managing farm after farm of servers in-house is impractical, if not impossible, for some, especially when the need for such resources is cyclical, as it is in most chip design and verification processes. Tapping into extra horsepower only when needed gives additional flexibility and nimbleness to even the largest companies, and offers a more economically efficient approach for fast-track start-ups.
The cloud gives companies of any size, running any application, the flexibility to scale design and verification capabilities as needed in a secure environment. In the case of chip design, teams get access to the most advanced compute and storage resources, reduce (or even eliminate) their own system maintenance costs, and enjoy more flexible use-based models that support the burst usage periods common in some phases of the chip design process. Synopsys contributes a robust suite of security tools to mitigate risk in cloud-based environments.