No More Pizza! Bringing the Power of HPC to Solve the Age-old Question: “What’s for Dinner?”

Ruben Molina, Marketing Director

The other night my wife and I were trying to pick a place we could both agree on for dinner. If you’ve ever been in this situation, you know it can be a difficult problem to solve. I decided to short-circuit the usual torture by asking our virtual assistant for a solution. “Hey [Virtual Assistant], where’s a good place to eat?” Thus ensued 15 minutes of intermittent wrong answers, miscommunication, and eventual cursing at our VA. Ultimately, I headed for the fridge, wondering when HPC in my virtual assistant would rescue me from eating last night’s leftovers.

You’ve probably heard of HPC, or high-performance computing, if you’ve been following the latest frontiers in supercomputing. But for those who have had their heads in the sand, HPC refers to achieving performance beyond standard single-machine computing through the aggregation of large amounts of compute resources (“insideHPC,” n.d.). In other words, HPC means a lot of parallel processing across multiple compute cores to solve problems faster.
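To make the idea concrete, here’s a toy sketch (plain Python with the standard multiprocessing module; real HPC codes use frameworks like MPI across thousands of nodes) of the same job run serially and then split across worker processes:

```python
# Toy illustration: splitting an embarrassingly parallel job across cores.
# Real HPC codes scale across whole clusters; this just shows the idea.
import time
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    N, workers = 200_000, 4
    chunks = [(i * N // workers, (i + 1) * N // workers) for i in range(workers)]

    start = time.perf_counter()
    serial = count_primes((0, N))
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(workers) as pool:
        parallel = sum(pool.map(count_primes, chunks))
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t_serial:.2f}s, {workers} workers: {t_parallel:.2f}s")
```

On a four-core machine, the parallel version finishes in roughly a quarter of the time. The supercomputers described next apply the same principle across more than a million cores.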

Hey Sierra! Find a restaurant for me and the wife.

HPC has traditionally been associated with supercomputers, the second fastest of which resides at the Lawrence Livermore National Laboratory, about an hour from Silicon Valley. With an estimated 1,572,480 processing cores, this machine, called “Sierra,” can perform almost 100 petaFLOPS, or nearly 100 quadrillion floating-point operations per second. You need a computer that powerful if you are trying to solve some really hard problems, like simulating the nuclear decay and potential instability of our aging nuclear weapons arsenal. Less scary but seemingly just as difficult tasks include weather prediction, DNA sequencing, and molecular modeling. Solving these problems does come at a pretty big power cost: 7.4 megawatts, to be exact, or the equivalent of powering 7,000 to 8,000 homes.
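A quick back-of-the-envelope check on those figures, using only the numbers above plus one assumption (an average household draw of roughly 1 kW, which is what the 7,000-to-8,000-homes comparison implies):

```python
# Back-of-the-envelope check on Sierra's numbers, using figures from the text.
flops = 100e15          # ~100 petaFLOPS (floating-point operations per second)
power_watts = 7.4e6     # 7.4 megawatts
avg_home_watts = 1_000  # assumed ~1 kW average household draw

print(f"{flops / power_watts / 1e9:.1f} GFLOPS per watt")            # ~13.5
print(f"{power_watts / avg_home_watts:,.0f} homes' worth of power")  # ~7,400
```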

Today, HPC is not just for solving the unsolvable problems of the world; it’s now being used for non-research applications, such as rendering the latest CGI effects in movies and shows like Star Trek: Discovery or Star Wars: The Rise of Skywalker. It can be used to teach your car to drive itself, to unlock your phone by recognizing your face, or to help your virtual assistant recognize when you are talking to it and answer your questions.

For many of these applications, the latency of communicating with a remote compute farm puts a premium on localized processing, which means putting HPC in your phone, your virtual assistant, your laptop, or your car.

When you think about the idea of putting a scaled-down Sierra supercomputer on a chip, you start to imagine how HPC can solve your everyday problems. However, scaling that many processing elements on a single chip poses multiple design challenges. First and foremost is power consumption: for any HPC application, you need the lowest power possible, because megawatts are costly even for a government lab, and even if you reduced Sierra’s power by several orders of magnitude, your device would probably burst into flames. Second, the amount of communication between processing elements requires higher placement densities to shorten interconnects and reduce latency. And third, HPC processing hardware is pushing clock frequencies beyond 3 GHz, which requires the latest process technologies and tools to handle new rules for mask design, lithography, and fabrication.
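To see why power tops the list, recall the standard first-order model for CMOS dynamic power, P ≈ α·C·V²·f, where α is the switching activity, C the switched capacitance, V the supply voltage, and f the clock frequency. A short sketch with illustrative, made-up parameter values:

```python
# First-order CMOS dynamic power model: P = alpha * C * V^2 * f.
# All parameter values below are illustrative, not from any real chip.
def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    """Switching power in watts: activity factor * capacitance * V^2 * f."""
    return alpha * c_farads * v_volts ** 2 * f_hertz

base = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=0.8, f_hertz=2e9)
fast = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=1.0, f_hertz=3e9)
print(f"2 GHz @ 0.8 V: {base:.2f} W, 3 GHz @ 1.0 V: {fast:.2f} W "
      f"({fast / base:.2f}x)")
```

Because pushing past 3 GHz usually also means raising the supply voltage, power climbs much faster than frequency does, which is why it dominates the list of challenges.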

Fusion Compiler: helping you help me ... eat better food.

Tight coupling of synthesis and place-and-route is essential to making the right architectural tradeoffs to meet HPC’s requirements for higher frequencies, increased density, and lower power consumption. In addition, common analysis engines for extraction, timing, and power are needed to ensure that feedback-based design modifications converge quickly and efficiently to an optimal design.
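Here is a conceptual sketch of why that matters (every name and number below is hypothetical; this is not the Fusion Compiler API or any real tool’s architecture): if synthesis and place-and-route each score the design with their own timing model, a change that looks good to one stage can look bad to the next, and the flow thrashes. With one shared engine, every stage optimizes against the same numbers:

```python
# Conceptual sketch of feedback-driven convergence with a shared timing
# engine. All names and numbers are hypothetical -- this is not how any
# real EDA tool, Fusion Compiler included, is actually structured.
def shared_timing_engine(design):
    """One estimator consulted by every stage: worst slack in ns (toy model)."""
    return design["target_slack"] - design["wire_delay"] - design["gate_delay"]

def synthesis_step(design):
    design["gate_delay"] *= 0.95   # e.g., remap cells to faster drive strengths
    return design

def place_and_route_step(design):
    design["wire_delay"] *= 0.93   # e.g., pull timing-critical cells closer
    return design

design = {"target_slack": 1.0, "wire_delay": 0.9, "gate_delay": 0.5}
for iteration in range(20):
    slack = shared_timing_engine(design)
    if slack >= 0:                 # both stages agree: timing is met
        print(f"converged after {iteration} iterations, slack = {slack:.3f} ns")
        break
    # Each stage optimizes against the *same* numbers, so improvements
    # accumulate instead of fighting each other.
    design = synthesis_step(design)
    design = place_and_route_step(design)
```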

Synopsys Fusion Compiler is the culmination of years of innovation and brings a holistic approach to design, from RTL to GDSII. Built on a first-of-its-kind single foundational data model and shared optimization and signoff-quality analysis engines, Fusion Compiler is the first true “singular system” spanning RTL synthesis to final implementation, and it was written to address the next generation of HPC design challenges.

What also sets Fusion Compiler apart is its prodigious use of machine learning (ML): to optimize placement for better density and utilization, to estimate routing more accurately for faster design convergence, and to predict post-route timing for earlier, higher-quality optimization.
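To make the last of those concrete, here is a deliberately simplified sketch (the features, data, and model are hypothetical toys; production tools learn far richer models from their own placement and routing databases): predicting post-route timing from pre-route features is, at its heart, a regression problem.

```python
# Simplified sketch of ML-based post-route timing prediction: fit a linear
# model mapping pre-route features to measured post-route delay. Features,
# data, and model here are hypothetical toys, not any real tool's approach.
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical pre-route features per net: estimated wirelength (um),
# fanout, and a congestion score for the net's region.
X = np.column_stack([
    rng.uniform(10, 500, n),     # estimated wirelength
    rng.integers(1, 16, n),      # fanout
    rng.uniform(0.0, 1.0, n),    # local congestion
])
# Synthetic "post-route delay" the model should recover (plus noise).
y = 0.002 * X[:, 0] + 0.01 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.02, n)

A = np.column_stack([X, np.ones(n)])            # add an intercept term
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # ordinary least squares

new_net = np.array([250.0, 4, 0.7, 1.0])        # wirelength, fanout, congestion
print(f"predicted post-route delay: {new_net @ coef:.3f} ns")
```

With a prediction like this available at placement time, the optimizer can address likely post-route violations before routing ever runs, which is where the earlier, higher-quality optimization comes from.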

So, while Synopsys Fusion Compiler is already successfully helping architects and designers bring practical high-performance computing to the masses, it can’t come soon enough for me. I’m still eating yesterday’s frozen pizza.