Introducing Next-Generation Verdi Platform for AI-Driven Debug and Verification Management

Robert Ruiz

Sep 07, 2023 / 4 min read

Verification engineers spend roughly one-third of their time debugging their chip designs, about as much time as they spend increasing verification coverage to detect those bugs in the first place. Clearly this is a huge lift, one that becomes even more burdensome as chips continue to grow larger and more complex. And if you’re working on a multi-die design? Well, all the time and effort needed to provide assurance that the chip will function as intended goes up dramatically. To address these growing debug challenges and increase overall productivity across verification flows, we’ve tapped a combination of innovative, hyper-convergent technologies to create the next-generation Synopsys Verdi debug and verification management platform.

The key challenge is that many debug solutions require a manual process to get to the root of failures. Automation, of course, eases the burden, even more so when it is enhanced by AI. Indeed, AI is improving productivity across the chip development spectrum, including design, verification, and testing. Better silicon quality of results is also evident as intelligent capabilities are integrated into electronic design automation flows.

A painstaking process with unpredictable results, debug is an ideal candidate for an AI-driven uplift. The next-generation Synopsys Verdi platform is enhanced with AI-driven debug capabilities as well as comprehensive verification management functions, making the solution ideal for collaboration among geographically distributed project teams. Read on to learn how companies like MediaTek are improving debug productivity by as much as 10x using the newest features in the Verdi platform.


What’s Driving Chip Debug Complexity?

From billion-gate SoCs to domain-specific architectures and multi-die systems, today’s chips are growing in size and complexity to meet the aggressive performance demands of applications such as AI, high-performance computing, and automated vehicles. While the debug process is affected by chip size, functionality and the target end application also have an impact. For example, even a small design might have thousands of different, parallel, high-level transactions to simulate a specific condition. Isolating bugs in this scenario calls for examining parallel events to find out which branch is problematic. On the end-application side, the signals of interest need to be localized structurally. As a result, the debug solution should understand the function across the design’s abstraction levels, such as RTL and gates, to correctly assess any issues and, where appropriate, also provide a simultaneous view of hardware and software interaction for a more efficient debug experience.

Considering the volume of signals to be analyzed, running thousands upon thousands of simulations and tests to verify the design is not an implausible scenario. Such cases generate massive amounts of data for analysis, typically involving a manual categorization and sorting process to determine whether a problem stems from the design itself or from the testbench. To run these simulations and generate much-needed reports, many teams rely on internally developed scripts that are often difficult to scale or reuse.
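To make the categorization step above concrete, here is a minimal sketch, in plain Python, of how regression failures can be binned automatically by normalizing their log messages into stable signatures. The test names, log text, and masking rules are invented for illustration; this is the general technique, not the Verdi platform's actual algorithm.

```python
import re
from collections import defaultdict

def signature(log_text: str) -> str:
    """Reduce a failure message to a stable signature by masking
    run-specific details such as hex values and timestamps."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", log_text)
    sig = re.sub(r"\d+", "<N>", sig)
    return sig.strip()

def bin_failures(logs: dict) -> dict:
    """Group failing tests whose logs share a signature, so one
    engineer can triage a whole bucket instead of each test."""
    buckets = defaultdict(list)
    for test, log in logs.items():
        buckets[signature(log)].append(test)
    return dict(buckets)

# Hypothetical failing-test logs from one regression run.
logs = {
    "test_dma_burst":  "UVM_ERROR @ 1200ns: scoreboard mismatch, got 0x3f",
    "test_dma_single": "UVM_ERROR @ 3400ns: scoreboard mismatch, got 0x7a",
    "test_irq_mask":   "UVM_FATAL @ 90ns: null handle in driver",
}
buckets = bin_failures(logs)
# The two DMA failures collapse into one bucket; the IRQ failure stands alone.
```

Even this toy version shows why automation pays off: two superficially different failures are recognized as one underlying problem, cutting the triage list in half.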

There are additional manual steps to find root causes of failure: looking at failures in the categorized reports and log files to determine where to start, examining waveforms and tracing back through the circuit, and sometimes relying on the expertise of a team member who can serve as a “bug killer.” Unfortunately, in the face of a looming semiconductor engineering talent shortage, and at a time when geographically distributed verification teams are the norm, none of these approaches is particularly scalable or conducive to high productivity. Through techniques such as batch-mode linting, teams can prevent some bugs in the first place. But it is nearly impossible to develop a design that’s bug-free from the start, so the best teams can hope for is to make the debug process more efficient.

A Smarter, More Productive Way to Debug Chips

Featuring a refreshed graphical user interface (GUI), AI-driven debug, verification management, and an integrated design environment, the next-generation Verdi platform is ideal for distributed project teams. The platform:

  • Looks at results of simulation runs and extracts information on which tests are passing and which are failing. With its binning techniques and intelligence, the platform can identify whether the bug was in the design or in the testbench.
  • Looks at different code changes and provides an assessment, since bugs typically appear in waves or cycles triggered by certain use cases. Examining changes through this decision-tree feature provides insight that helps correlate errors and pave the way to root-cause analysis.
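The change-correlation idea in the second bullet can be sketched in a few lines: rank recently changed files by how often they appear in failing-test logs. This is a deliberately crude illustration with invented file and test names, simple substring matching in place of real analysis, and no claim to reflect the Verdi platform's implementation.

```python
from collections import Counter

def rank_suspect_changes(failing_logs, changed_files):
    """Rank recently changed files by how often they appear in
    failing-test logs: a crude proxy for change/failure correlation."""
    hits = Counter()
    for log in failing_logs.values():
        for path in changed_files:
            if path in log:
                hits[path] += 1
    return hits.most_common()

# Hypothetical failing-test logs and recent check-ins.
failing_logs = {
    "test_axi_wr": "assertion failed in rtl/axi_arbiter.sv line 88",
    "test_axi_rd": "X propagation from rtl/axi_arbiter.sv",
    "test_reset":  "timeout waiting for tb/reset_seq.sv sequence",
}
changed_files = ["rtl/axi_arbiter.sv", "tb/reset_seq.sv", "rtl/fifo.sv"]

ranking = rank_suspect_changes(failing_logs, changed_files)
# rtl/axi_arbiter.sv tops the ranking with two correlated failures.
```

A production tool would of course weigh many more signals (waveforms, coverage deltas, structural connectivity), but the principle is the same: start the engineer at the change most correlated with the new failures.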

Knowing where an error occurred, the solution can go back in time and through the circuit to identify what caused the abnormal behavior. By examining why an error occurred, the Verdi platform can help enhance productivity (especially the time required to find a bug) and also quality of results (through better accuracy in bug detection).
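Going "back through the circuit" from a failing signal is, structurally, a backward traversal of the driver graph to collect the signal's fan-in cone. The sketch below shows that traversal on a toy netlist; the signal names are invented, and real tools additionally prune the cone using waveform data across time.

```python
from collections import deque

def fanin_cone(drivers, start_signal):
    """Collect every signal that can structurally influence
    start_signal by walking the driver graph backward (BFS)."""
    seen, queue = {start_signal}, deque([start_signal])
    while queue:
        sig = queue.popleft()
        for src in drivers.get(sig, []):
            if src not in seen:
                seen.add(src)
                queue.append(src)
    return seen

# Toy netlist: each signal maps to the list of signals driving it.
drivers = {
    "out":   ["mux_y"],
    "mux_y": ["a_reg", "b_reg", "sel"],
    "a_reg": ["a_in"],
    "b_reg": ["b_in"],
}
cone = fanin_cone(drivers, "out")
# Everything upstream of "out" is in the cone; unrelated logic is excluded.
```

Narrowing attention to this cone, and then to the slice of it active in the failing time window, is what turns days of manual waveform tracing into a bounded search.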

The next-generation Verdi platform includes the Synopsys Euclide integrated development environment, which identifies complex design and testbench entry mistakes in real time, and a verification regression management system built on Synopsys VC Execution Manager, which manages the planning and running of regression test execution, data collection, reporting, and tracking of the design verification process. The overall system makes results viewable either through the Verdi platform or a web browser over a secure network. Dynamically generated, extendable decision trees support the building and sharing of a knowledge base to facilitate higher debug productivity.

“The next-generation Synopsys Verdi platform with AI-driven regression debug automation significantly helps engineers reduce time spent on root-cause analysis of regression failure, from days to minutes,” said Chien Lin Huang, senior technical manager at MediaTek CTD.

“By incorporating additional innovative AI technology, we are evolving Synopsys Verdi to a unified debug and verification management system that is scalable for multi-site teams and will significantly reduce our customers’ debug turnaround time,” said Pallab Dasgupta, vice president of R&D in the EDA Group at Synopsys.


Chip design debugging is an activity that verification teams want to be efficient and quick. Automation takes some of the sting out of the process, saving time and effort, and integrating AI into automated debug tools raises that efficiency even further. The next-generation Verdi platform, with AI-driven debug and verification management capabilities, can help project teams achieve higher productivity and greater accuracy in bug detection and root-cause analysis. It’s another example of how the combination of AI and EDA is helping engineers in their quest for first-time-right silicon.
