High-Performance Computing Architecture for Cloud Deployment

Sridhar Panchapakesan

Sep 04, 2022 / 4 min read


High-performance computing (HPC) allows you to process data and perform complex calculations quickly. This article discusses what to consider when building your high-performance computing architecture for cloud deployment.

Traditionally, the capacity of your on-premises infrastructure limited HPC systems. Today, the cloud allows you to extend local capacity with resources in the cloud.

HPC workloads are well-suited for pay-as-you-go cloud infrastructure because they fluctuate and burst. Fine-tuned cloud resources and cloud-native architecture can speed up the turnaround of HPC workloads.

On-premises HPC systems sized for peak capacity are fully utilized only when demand actually reaches that peak; the rest of the time they sit idle and cost your organization money. A cloud-based HPC infrastructure can mitigate the risk of missing significant opportunities on time-sensitive jobs, and it can also eliminate the cost of unused infrastructure.
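To make the utilization argument concrete, here is a back-of-the-envelope comparison in Python. Every figure in it (node counts, prices, utilization) is an illustrative assumption, not real vendor pricing; the point is only that a peak-sized cluster is paid for around the clock, while a pay-as-you-go cluster is paid for only when used.

# Hypothetical cost comparison: peak-sized on-premises cluster vs. pay-as-you-go cloud.
# All figures are illustrative assumptions, not real pricing.
PEAK_NODES = 1000                      # cluster sized for peak demand
ON_PREM_COST_PER_NODE_YEAR = 5000.0    # assumed annualized cost per on-prem node (USD)
CLOUD_COST_PER_NODE_HOUR = 1.0         # assumed on-demand price per node-hour (USD)
AVG_UTILIZATION = 0.30                 # assumed average utilization of the peak-sized cluster
HOURS_PER_YEAR = 8760

on_prem_cost = PEAK_NODES * ON_PREM_COST_PER_NODE_YEAR
cloud_cost = PEAK_NODES * HOURS_PER_YEAR * AVG_UTILIZATION * CLOUD_COST_PER_NODE_HOUR

print(f"On-premises (sized for peak): ${on_prem_cost:,.0f}/year")
print(f"Cloud (pay for node-hours used): ${cloud_cost:,.0f}/year")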

Design Principles for High-Performance Computing Architecture

Below is a set of general design principles for HPC architecture in the cloud:

 

Dynamic Architecture

Your architecture should be dynamic in that it can grow and shrink to match your demands for HPC capacity over time. When designing your architecture, consider the natural cycles of your HPC activity. 

In the design phase, for example, you might see an increase in demand followed by a decrease as the project moves to another stage. Many HPC projects are bursty and well-suited to cloud paradigms. With the elasticity and pay-as-you-go model the cloud provides, you don't have to choose between oversubscribed and idle systems.
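As a sketch of what growing and shrinking with demand can look like, the Python loop below resizes a cluster based on its job-queue depth. The get_queue_depth and set_cluster_size hooks are hypothetical placeholders for your scheduler and cloud provider APIs, not any specific vendor's interface.

# Minimal autoscaling sketch: size the cluster from the pending-job count.
# get_queue_depth() and set_cluster_size() are hypothetical hooks you would
# implement against your scheduler and cloud provider.
import time

MIN_NODES = 0          # scale to zero when idle (pay-as-you-go)
MAX_NODES = 500        # upper cap for cost control
JOBS_PER_NODE = 4      # assumed jobs each node runs concurrently

def desired_size(queue_depth: int) -> int:
    """Translate pending-job count into a target node count."""
    target = -(-queue_depth // JOBS_PER_NODE)   # ceiling division
    return max(MIN_NODES, min(MAX_NODES, target))

def autoscale(get_queue_depth, set_cluster_size, interval_s: int = 60) -> None:
    """Periodically resize the cluster to match demand."""
    while True:
        set_cluster_size(desired_size(get_queue_depth()))
        time.sleep(interval_s)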

 

Data Visibility

You need a clear picture of your data—data origin, size, velocity, and updates—before you design your HPC architecture. The goal is to optimize performance and cost holistically. Consider using your cloud vendor's data services, such as data visualization, to extract the most value from your data.

 

Collaboration

Many HPC projects are collaborative, sometimes spanning multiple countries. Besides collaboration on a specific project, methods and results are often shared with the larger HPC and scientific communities. Choosing in advance which tools, scripts, and data to share and with whom is essential. During the design process, you should consider project delivery methods.


Cloud-Native Design

When you migrate workloads to the cloud, replicating your on-premises environment is usually unnecessary and rarely optimal. With cloud services, HPC workloads can run in new ways using cloud-native design patterns and solutions. Automating HPC cluster deployment allows you to quickly tear down one compute cluster and launch another.
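One way to read "automating HPC cluster deployment" is to treat clusters as disposable: stand one up for a batch of work, then delete it when the work finishes. Here is a minimal Python sketch of that lifecycle; provision_cluster and delete_cluster are hypothetical stand-ins for your infrastructure-as-code tooling or cloud SDK.

# Disposable-cluster sketch: create, use, and always delete a cluster.
# provision_cluster() and delete_cluster() are hypothetical stand-ins for your
# infrastructure-as-code tooling or cloud SDK calls.
from contextlib import contextmanager

@contextmanager
def hpc_cluster(provision_cluster, delete_cluster, node_count: int):
    cluster = provision_cluster(nodes=node_count)   # e.g. apply a template, call an SDK
    try:
        yield cluster
    finally:
        delete_cluster(cluster)                     # tear down even if jobs fail

# Usage: nothing stays running (or billed) once the block exits.
# with hpc_cluster(provision_cluster, delete_cluster, node_count=64) as cluster:
#     run_jobs(cluster, job_list)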

 

Workload Testing

Testing your workload in the cloud is the best way to know how it will perform. HPC applications are often complex, and their memory, CPU, and network usage patterns can't be characterized with a simple test.

In addition, your infrastructure requirements depend on the algorithms you use as well as the size and complexity of your models. As a result, generic benchmarks can't reliably predict HPC performance. Because you pay only for what you use in the cloud, you can build a realistic proof of concept at a reasonable cost. One advantage of a cloud-based platform is that you can run a full-scale test before you migrate.
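A simple way to structure such a proof of concept is to run one representative job at several cluster sizes and record the wall-clock time, which shows how well the workload really scales. In the Python sketch below, run_workload is a hypothetical hook that launches your actual application at a given node count and blocks until it finishes.

# Scaling-test sketch: time a representative workload at several cluster sizes.
# run_workload(nodes) is a hypothetical hook into your real application.
import time

def scaling_test(run_workload, node_counts=(8, 16, 32, 64)):
    results = {}
    for nodes in node_counts:
        start = time.perf_counter()
        run_workload(nodes)
        results[nodes] = time.perf_counter() - start
    baseline = results[node_counts[0]] * node_counts[0]   # node-seconds of the smallest run
    for nodes, seconds in results.items():
        efficiency = baseline / (seconds * nodes)          # parallel efficiency vs. smallest run
        print(f"{nodes:>4} nodes: {seconds:8.1f} s  (efficiency {efficiency:.0%})")
    return results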

 

Cost vs. Time

With high-performance computing architecture, you can analyze performance in terms of both time and cost. Workloads that aren't time-sensitive should be optimized for cost; the least expensive way to run them is with spot or preemptible instances. Conversely, for time-critical workloads, performance should take precedence over cost optimization, so pick the instance type, procurement model, and cluster size that deliver the quickest execution time. When comparing cloud platforms, also consider non-compute factors such as provisioning time, data staging, and time spent in queues.
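That decision can be framed as a small optimization: estimate the runtime and cost of each candidate configuration, then choose by cost for non-urgent work and by time for deadline-driven work. The candidates and hourly prices in the Python sketch below are purely illustrative assumptions.

# Cost-vs-time sketch: pick a configuration by cost or by turnaround time.
# The candidates and hourly prices are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    est_runtime_h: float     # estimated wall-clock hours for the whole job
    nodes: int
    price_per_node_h: float  # assumed on-demand or spot price (USD)

    @property
    def cost(self) -> float:
        return self.est_runtime_h * self.nodes * self.price_per_node_h

candidates = [
    Candidate("spot-small",      est_runtime_h=20.0, nodes=16,  price_per_node_h=0.30),
    Candidate("on-demand-large", est_runtime_h=6.0,  nodes=64,  price_per_node_h=1.00),
    Candidate("on-demand-xl",    est_runtime_h=3.5,  nodes=128, price_per_node_h=1.00),
]

cheapest = min(candidates, key=lambda c: c.cost)            # non-time-critical work
fastest  = min(candidates, key=lambda c: c.est_runtime_h)   # deadline-driven work
print(f"Cheapest: {cheapest.name} (${cheapest.cost:,.0f})")
print(f"Fastest:  {fastest.name} ({fastest.est_runtime_h} h, ${fastest.cost:,.0f})")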

High-Performance Computing Architecture and EDA Workloads

Often, electronic design automation (EDA) workloads require HPC capabilities, like a compute cluster, a job scheduler, and a high-performance shared file system. The cloud has virtually unlimited CPU and GPU resources, so chip designers can run many EDA jobs in parallel. This helps designers receive results faster and deliver added business value.
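As an illustration of that fan-out pattern, the Python sketch below submits many independent jobs concurrently and collects results as they complete. submit_job is a hypothetical blocking wrapper around your scheduler's submission interface, not a specific EDA tool's API.

# Fan-out sketch: launch many independent jobs in parallel and gather results.
# submit_job(job) is a hypothetical blocking wrapper around your scheduler
# (it submits one job and waits for completion).
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_in_parallel(submit_job, jobs, max_in_flight: int = 100):
    results = {}
    with ThreadPoolExecutor(max_workers=max_in_flight) as pool:
        futures = {pool.submit(submit_job, job): job for job in jobs}
        for future in as_completed(futures):
            results[futures[future]] = future.result()
    return results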

Synopsys Cloud combines the availability of HPC in the cloud with unlimited access to EDA software licenses on-demand. We have partnered with the top cloud providers to optimize infrastructure configurations, removing the guesswork so EDA can rapidly deploy in the cloud. 

Synopsys, EDA, and the Cloud

Synopsys is the industry’s largest provider of electronic design automation (EDA) technology used in the design and verification of semiconductor devices, or chips. With Synopsys Cloud, we’re taking EDA to new heights, combining the availability of advanced compute and storage infrastructure with unlimited access to EDA software licenses on-demand so you can focus on what you do best – designing chips, faster. Delivering cloud-native EDA tools and pre-optimized hardware platforms, an extremely flexible business model, and a modern customer experience, Synopsys has reimagined the future of chip design on the cloud, without disrupting proven workflows.

 

Take a Test Drive!

Synopsys technology drives innovations that change how people work and play using high-performance silicon chips. Let Synopsys power your innovation journey with cloud-based EDA tools. Sign up to try Synopsys Cloud for free!


About The Author

Sridhar Panchapakesan is the Senior Director, Cloud Engagements at Synopsys, responsible for enabling customers to successfully adopt cloud solutions for their EDA workflows. He drives cloud-centric initiatives, marketing, and collaboration efforts with foundry partners, cloud vendors, and strategic customers at Synopsys. He has 25+ years of experience in the EDA industry and is especially skilled in managing and driving business-critical engagements at top-tier customers. He holds an MBA from the Haas School of Business, UC Berkeley, and an MSEE from the University of Houston.
