HPC Application Development and Its Role in AI-Driven Chip Design

Sridhar Panchapakesan

Oct 31, 2022 / 6 min read


High performance computing (HPC) application development is growing rapidly, enabling solutions that process data at incredible speeds. The ability to harness this computing power has opened a world of new possibilities for chip development and other technology, research, and development initiatives. The recent industry-wide adoption of artificial intelligence (AI) and machine learning (ML) has made these demands even more urgent, fueling new high performance computing applications across industries. To meet these challenges, chip designers must address critical HPC design issues with smarter, more autonomous design tools powered by AI.

Here, we will discuss how organizations can address chip design challenges and leverage AI and machine learning for cloud computing advantages with high performance computing.

Challenges of Chip Design Solved Through HPC

Semiconductor development becomes more challenging by the year. As competition increases, transistors become smaller and each new process technology packs more onto each die. Consequently, chip design companies must innovate to remain competitive.

In chip development, advanced technology node issues, heat dissipation, floorplanning, place and route, logic synthesis, and functional verification have pushed EDA tools to their limits. In modern SoCs, organizations must analyze hundreds of clock and power domains to accommodate IP blocks from a variety of sources.

Designing innovative chips requires handling these challenges while also operating under time-to-market constraints. Scaling aggressively to maintain performance and cost leadership, creating new architectural innovations, and working with increasingly faster interfaces are all essential practices for keeping pace with market requirements. 

Cloud-based applications can provide the high bandwidth, density, and multiport memories needed to accommodate high performance computing, and enable the simulation, machine learning, and big data analysis capabilities embedded in EDA applications.

High Performance Computing Applications Used in Chip Design

High performance computing enables effective scaling for teams focused on performance, capacity, and power efficiency in their chip designs. To accomplish these designs, new nodes and innovative design techniques are employed. EDA tools utilize HPC to handle increasingly complex designs and provide enhanced capacity and faster runtimes.

With high performance computing, designers can employ simulation and advanced verification methods to bridge the technology-to-design and design-to-silicon gaps, ensuring that once a device is on the market it performs as predicted during development.

Memory development turnaround time (TAT) is another essential consideration. Adopting design technology co-optimization (DTCO) lets designers co-optimize new technologies and architectures before finalizing and “hardening” choices. HPC can surface potential reliability issues early in the design phase, accelerating verification.

Cloud-Based and Scalable Infrastructure for HPC

The shift toward cloud-based HPC has introduced new advantages in flexibility and speed. By integrating cloud infrastructure with high performance computing, organizations can scale their resources up or down depending on project needs, without the physical constraints of traditional data centers. This scalability allows chip designers to run large, compute-intensive jobs in parallel, optimize their resource usage, and reduce overall runtime.

Cloud HPC enables AI-driven orchestration of jobs, such as autoscaling simulations and workload-specific machine deployments. This allows EDA workflows to become more efficient, improving time to market while reducing waste. As companies increasingly adopt AI in high performance computing environments, they gain a strategic advantage by enabling faster iterations, broader design space exploration, and more efficient use of compute.
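As a rough illustration of the autoscaling idea, the sketch below sizes a worker pool from the depth of a simulation job queue. The thresholds and the `workers_needed` function are illustrative assumptions, not any specific cloud provider's or EDA vendor's API.

```python
# Hypothetical sketch of queue-depth-based autoscaling for simulation jobs.
# The policy (jobs per worker, min/max pool size) is an illustrative
# assumption, not a real scheduler's interface.

def workers_needed(queued_jobs: int, jobs_per_worker: int = 4,
                   min_workers: int = 1, max_workers: int = 100) -> int:
    """Scale the worker pool with the simulation queue depth."""
    # Ceiling division: enough workers to cover the current backlog.
    needed = -(-queued_jobs // jobs_per_worker)
    # Clamp to the pool limits so scaling stays bounded and cost-predictable.
    return max(min_workers, min(max_workers, needed))
```

A real orchestrator would also consider job priorities, spot-instance pricing, and license availability, but the core loop is the same: measure demand, then scale the pool within bounds.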

Chip Design Processes in the Cloud

A one-year-old server running simulation and verification cannot compete with the current generation of compute. This, coupled with the fact that businesses must “guess” their required memory, accelerators, and other hardware months in advance for on-premises deployments, makes the cloud a more appealing option.

During the design cycle, each task may require different compute capabilities, storage access, and memory. As the environment is built and progresses, additional compute resources may be needed. Instant availability of expanded memory capacity for timing analysis is only possible through the cloud.

For static timing analysis and physical verification tasks, access to many-core machines with high memory per core and high throughput is advantageous. With the AI features EDA vendors now offer, multiple place and route runs can be performed in parallel for power, performance, and area (PPA) optimization; however, this is only practical on parallel high-performance compute. Furthermore, the servers best suited for this task differ from those best suited for verification runs. Matching HPC resources to specific tasks allows for enhanced optimization.
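The parallel place-and-route sweep described above can be sketched as a fan-out over parameter sets, keeping the best-scoring result. Here `run_pnr` is a hypothetical stand-in for invoking a real P&R tool and parsing its report, and the scoring function is a toy weighted sum, not a real PPA metric.

```python
# Illustrative sketch: fan out place-and-route runs over several parameter
# sets, then keep the best result. In a real flow each worker would launch
# the P&R tool as a subprocess (I/O-bound), so threads suffice here.
from concurrent.futures import ThreadPoolExecutor

def run_pnr(params):
    # Placeholder model: a real flow would run the tool and parse its report.
    utilization, effort = params
    return {"params": params,
            "power": 1.0 / effort,          # toy numbers, not real results
            "slack": utilization * effort,
            "area": utilization * 2.0}

def best_ppa(param_sets, max_workers=8):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(run_pnr, param_sets))
    # Toy objective: maximize slack while penalizing power and area.
    return max(results, key=lambda r: r["slack"] - r["power"] - 0.1 * r["area"])
```

The point is structural: each candidate run is independent, so the sweep scales with however many machines the cloud makes available.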

Ultimately, access to more compute resources allows project design and verification processes to be completed faster. The cloud essentially removes compute limitations and optimizes performance.

How AI and Machine Learning Transform HPC Design Workflows

As the volume and complexity of chip design workloads increase, AI and machine learning are becoming integral to modern HPC design environments. AI-assisted tools can now handle simulation and layout optimization, dynamically adjusting parameters to speed up convergence and reduce manual tuning.

Machine learning models are being used to detect anomalies earlier in the verification process and predict power and thermal performance based on prior designs. Reinforcement learning is also showing promise in optimizing floorplanning and verification flows by learning from successful design paths. These capabilities are complemented by smarter synthesis and placement tools that adapt based on runtime data, helping teams better manage the increasing number of constraints in chip development.
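As a minimal sketch of the anomaly-detection idea, the snippet below flags regression metrics that deviate sharply from a baseline learned from prior runs. Production flows use far richer ML models; this simple z-score check, and the metric names in it, are assumptions chosen only to illustrate the concept.

```python
# Hedged sketch: flag anomalous verification metrics with a z-score test
# against historical runs. Real flows use richer ML models; this only
# illustrates learning a baseline from prior designs.
from statistics import mean, stdev

def find_anomalies(history, current, threshold=3.0):
    """Return names of metrics in `current` that deviate more than
    `threshold` standard deviations from the historical baseline."""
    flagged = []
    for name, value in current.items():
        past = history[name]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            flagged.append(name)
    return flagged
```

Catching such outliers early, before a full debug cycle, is where the verification-time savings come from.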

By embedding these capabilities directly into HPC environments, teams can improve throughput, reduce iterations, and accelerate development timelines.

Saving Costs With Cloud Computing

Examining cost savings also reveals significant advantages for chip design. The cost of taping out a chip includes engineering personnel, licenses, and compute. Rather than comparing the price of running an on-premises server with the same period of on-demand cloud pricing, it is crucial to look at the larger picture. By utilizing the cloud, businesses get their chips out the door sooner. They can also save on rent, electricity, physical security, hardware maintenance, and unused compute capacity. Engineers can experiment with a variety of compute types in minutes without ordering specific hardware and waiting for installation. All of this accelerates innovation and positions companies to achieve first-to-market goals.

With high performance computing applications that can be adopted incrementally, organizations do not need to start from scratch. Instead, companies can adopt a hybrid cloud model and use their local resources while expanding into the cloud as needed.
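A hybrid model like this amounts to a simple dispatch policy: fill on-premises capacity first, then burst the overflow to the cloud. The sketch below is a hypothetical illustration; the two submit functions stand in for a real scheduler's API.

```python
# Illustrative hybrid-cloud dispatch: use local slots first, burst the rest
# to the cloud. submit_local and submit_cloud are hypothetical stand-ins
# for a real scheduler's submission calls.

def dispatch(jobs, local_slots, submit_local, submit_cloud):
    """Place each job locally while slots remain, then burst to the cloud."""
    placements = []
    for i, job in enumerate(jobs):
        if i < local_slots:
            placements.append(submit_local(job))
        else:
            placements.append(submit_cloud(job))
    return placements
```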

Synopsys, EDA, and the Cloud

Synopsys is the industry’s largest provider of electronic design automation (EDA) technology used in the design and verification of semiconductor devices, or chips. With Synopsys Cloud, we’re taking EDA to new heights, combining the availability of advanced compute and storage infrastructure with unlimited access to EDA software licenses on-demand so you can focus on what you do best – designing chips, faster. Synopsys is leading the future of chip design in the cloud by delivering cloud-native EDA tools that are optimized for AI and machine learning in HPC environments. With pre-optimized hardware platforms, an extremely flexible business model, and a modern customer experience, Synopsys has reimagined the future of chip design on the cloud, without disrupting proven workflows. As AI continues transforming high performance computing, Synopsys is keeping pace by enabling seamless AI and HPC integration.

Real-World Impact of High Performance Computing Applications

Synopsys continues to lead in the application of high performance computing across a wide range of design use cases. Through close collaboration with customers, Synopsys has helped accelerate design schedules by integrating AI-driven workflows into HPC environments. These efforts reflect the growing demand for scalable, cloud-based solutions that can adapt to project complexity without sacrificing efficiency or predictability.

By combining HPC with cloud elasticity and machine intelligence, Synopsys enables chip designers to push the boundaries of innovation while meeting aggressive performance, power, and area targets.

Take a Test Drive!

Synopsys technology drives innovations that change how people work and play using high-performance silicon chips. Let Synopsys power your innovation journey with cloud-based EDA tools. Sign up to try Synopsys Cloud for free!


About The Author

Sridhar Panchapakesan is the Senior Director, Cloud Engagements at Synopsys, responsible for enabling customers to successfully adopt cloud solutions for their EDA workflows. He drives cloud-centric initiatives, marketing, and collaboration efforts with foundry partners, cloud vendors, and strategic customers at Synopsys. He has 25+ years’ experience in the EDA industry and is especially skilled in managing and driving business-critical engagements at top-tier customers. He has an MBA from the Haas School of Business, UC Berkeley, and an MSEE from the University of Houston.
