DSO.ai Team Interview

Designer Digest with DSO.ai R&D: How are AI and machine learning changing the way we think about chip design?

Ever since its announcement in March 2020, DSO.ai has been making headlines across the semiconductor industry, most recently when it won the ASPENCORE World Electronics Achievement Award for Most Innovative Product of 2020. Designer Digest caught up with some of the innovators at Synopsys’ Machine Learning Center of Excellence (ML CoE) to discuss trends in the exciting space of AI-driven chip design.

Thomas Andersen, Ph.D. 

Head, Machine Learning Center of Excellence

Q: Thomas, you are heading the team that brought DSO.ai to market. What is DSO.ai and why does Synopsys claim it is a chip design technology breakthrough?

Design Space Optimization (DSO) is a novel approach to searching large design spaces, enabled by recent advances in machine learning. DSO.ai was inspired by AlphaGo, the computer program that, in 2016, taught itself how to play the game of Go and went on to defeat human experts. Chip design likewise presents a very large space of potential solutions, one that is trillions of times larger than the game of Go's. Searching this vast space is a labor-intensive effort, typically requiring many weeks of experimentation and often guided by past experience and tribal knowledge. DSO.ai introduces a new, generative optimization paradigm that uses reinforcement-learning (RL) technology to autonomously search design spaces for optimal solutions.
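
To make the idea concrete, here is a minimal, purely illustrative sketch of reinforcement-style search over flow settings. Everything in it (the parameter grid, the run_flow stand-in, the epsilon-greedy update) is a hypothetical simplification for the reader, not the DSO.ai implementation.

# Minimal sketch of RL-style design space search. All names (run_flow,
# the parameter grid, the reward definition) are hypothetical, invented
# for illustration; they are not the actual DSO.ai algorithm.
import itertools
import random

# A toy "design space": discrete choices a flow might expose.
PARAM_GRID = {
    "target_utilization": [0.60, 0.70, 0.80],
    "clock_uncertainty_ps": [20, 40, 60],
    "effort": ["medium", "high"],
}
CONFIGS = [dict(zip(PARAM_GRID, v)) for v in itertools.product(*PARAM_GRID.values())]

def run_flow(cfg):
    """Stand-in for a place-and-route run; returns a noisy PPA-like reward."""
    base = cfg["target_utilization"] - 0.002 * cfg["clock_uncertainty_ps"]
    base += 0.05 if cfg["effort"] == "high" else 0.0
    return base + random.gauss(0, 0.02)

# Epsilon-greedy search: mostly exploit the best-known config, sometimes explore.
estimates, counts = [0.0] * len(CONFIGS), [0] * len(CONFIGS)
for step in range(50):
    if random.random() < 0.2:
        i = random.randrange(len(CONFIGS))
    else:
        i = max(range(len(CONFIGS)), key=lambda j: estimates[j])
    reward = run_flow(CONFIGS[i])
    counts[i] += 1
    estimates[i] += (reward - estimates[i]) / counts[i]   # running mean of reward

best = max(range(len(CONFIGS)), key=lambda j: estimates[j])
print("best configuration found:", CONFIGS[best])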

Stelios Diamantidis 

Director of Product, Artificial Intelligence

Q: Stelios, you have brought a number of innovative EDA products to market. What makes DSO.ai special from the point of view of semiconductor companies?

I believe that the advent of AI and big data is giving the entire EDA industry a new dimension. For the first time we are letting data generate the algorithms, rather than the other way around. This new paradigm brings a lot of autonomy into the design exploration process. It essentially pairs human expertise with AI: DSO.ai massively scales the exploration of choices in chip design workflows while automating the very high volume of less consequential decisions. AI-grade productivity means entire teams can now operate at expert levels, able to take on more projects, push the limits of process scaling, and compress schedules to meet tight market windows.

Joe Walston, Ph.D.

Director, Product Engineering

Q: Joe, you have a wealth of experience working with high-performance designs such as Arm® CPU cores. Is DSO.ai a “big green button” for chip design?

[Laughs out loud] Not at all! The true power remains in the hands of the designer. Instead of manually exploring possible input choices in a very limited way, DSO.ai automates the search process, while the user determines which spaces to focus on. This shifts the way we think about chip design in a fundamental way. Tomorrow's designers will be able to drive the design process at much higher levels of abstraction and with higher throughput, with AI at their side. The role of designers will shift from orchestrating and running experiments to guiding the AI on which design spaces to focus on and, ultimately, which goals to achieve, based on their experience. This frees designers to spend more time analyzing specific problems and making better tradeoffs with regard to the results they are trying to achieve.

Mat Philip

Principal Product Engineer

Q: Let’s go to some of the technologists behind DSO.ai. Mat, you have worked on many complex chips in your career. What are some typical DSO.ai applications?

DSO.ai can be used to optimize the input parameters and choices of chip design workflows to the exact needs of a given project. The first and most obvious application of this capability is optimizing the design steps and underlying tool settings themselves. However, this is just the beginning. Engineers can use DSO.ai to search many other inputs to the design process. For example, DSO.ai can fine-tune which library cells will give the best frequency or the lowest power; take an existing floorplan and try to shrink the die size; determine which operating voltage will produce the best power vs. performance tradeoff; explore the effect of custom clock structures or power distribution networks; and much more.
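
As an illustration of the kind of tradeoff analysis such searches feed, the sketch below extracts a power-vs-frequency Pareto frontier from a handful of completed trials. The trial data and field names are invented for the example; real numbers would come from actual flow runs.

# Hypothetical sketch: finding the power-vs-frequency Pareto frontier
# among completed exploration trials. The trial data is made up for
# illustration and does not come from any real design.
trials = [
    {"vdd": 0.75, "lib": "LVT-heavy", "freq_ghz": 2.1, "power_mw": 310},
    {"vdd": 0.75, "lib": "HVT-heavy", "freq_ghz": 1.8, "power_mw": 240},
    {"vdd": 0.85, "lib": "LVT-heavy", "freq_ghz": 2.4, "power_mw": 420},
    {"vdd": 0.85, "lib": "HVT-heavy", "freq_ghz": 2.0, "power_mw": 320},
]

def pareto_front(points):
    """Keep trials that no other trial beats on both frequency and power."""
    front = []
    for p in points:
        dominated = any(
            q["freq_ghz"] >= p["freq_ghz"] and q["power_mw"] <= p["power_mw"]
            and (q["freq_ghz"] > p["freq_ghz"] or q["power_mw"] < p["power_mw"])
            for q in points
        )
        if not dominated:
            front.append(p)
    return sorted(front, key=lambda t: t["freq_ghz"])

for p in pareto_front(trials):
    print(f'{p["lib"]:>10} @ {p["vdd"]}V -> {p["freq_ghz"]} GHz, {p["power_mw"]} mW')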

Benoit Claudel, Ph.D.

Principal R&D Engineer

Q: Benoit, we’ve been hearing from AI leaders about the vast compute resources that are needed to run AI models. Does DSO.ai require a dedicated data center (and power plant)?

Well, that could certainly be a way to leverage big data, but we didn't feel it would be the right approach for EDA problems. Instead, we chose to design our algorithms so they can inherently leverage more compute while remaining effective in existing EDA compute environments. Think of it as a process of iterative refinement. Early in the design evolution there are more choices to explore, so DSO.ai can efficiently leverage more compute. Later on, design spaces become more constrained and compute bandwidth can be reduced. Another important aspect is learning: DSO.ai incrementally learns from previous versions of a design. For example, when a new netlist comes in, DSO.ai does not start from scratch but uses its learning engine to infer next steps based on previous design drops.
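
A hypothetical sketch of those two ideas, shrinking parallel compute as the search narrows and warm-starting a new design drop from earlier learning, might look like the following. The function and file names are invented for illustration only and do not describe the DSO.ai internals.

# Illustrative sketch only: evaluate_batch, search, and the JSON file
# are hypothetical stand-ins, not DSO.ai APIs.
import json
import random

def evaluate_batch(candidates):
    """Stand-in for launching parallel flow runs; returns a score per candidate."""
    return {c: random.random() for c in candidates}

def search(seed_scores=None, rounds=4, initial_parallelism=16):
    scores = dict(seed_scores or {})            # warm start from a previous drop
    parallelism = initial_parallelism
    for r in range(rounds):
        candidates = [f"cfg_{r}_{i}" for i in range(parallelism)]
        scores.update(evaluate_batch(candidates))
        parallelism = max(2, parallelism // 2)  # fewer runs as the space narrows
    return scores

# First design drop: explore broadly, then persist what was learned.
learned = search()
with open("prev_drop_learning.json", "w") as f:
    json.dump(learned, f)

# Next netlist drop: resume from the saved learning instead of starting cold.
with open("prev_drop_learning.json") as f:
    learned_v2 = search(seed_scores=json.load(f))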

Pranay Prakash 

Director of R&D

Q: Pranay, I saved everyone’s favorite question for last. Is AI going to replace human designers?

AI is new, and this conversation comes up in many industries. In EDA, it reminds us of similar discussions from the very early days of Design Compiler and how it moved the circuit design paradigm forward through the introduction of RTL synthesis. Today we seem to be at a similar crossroads where, despite Moore's Law slowing down, a technological leap is unlocking tremendous design-side innovation. Our aspiration is that, just as Design Compiler did in the late '80s, AI will propel our customers into the next 30 years of semiconductor innovation.