It is widely accepted that functional verification is the most resource-intensive phase of the chip development process. As in many other phases of the flow, electronic design automation (EDA) vendors are increasingly using AI technologies to “shift left” verification, producing better results while consuming fewer resources. This requires both faster discovery of design bugs and accelerated convergence of the coverage metrics used to gauge verification progress.
Functional verification was once a highly manual process, with engineers hand-writing tests and manually checking results. The introduction of constrained-random stimulus generation and self-checking tests automated much of this process. Because engineers no longer wrote tests for specific design features, coverage metrics were added to track what the random stimulus actually exercised. If the code coverage and functional coverage points associated with a feature are exercised in simulation tests, the feature is considered verified.
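The interplay between constrained-random stimulus and functional coverage can be illustrated with a toy model. This is a minimal sketch of the concept, not any particular tool's implementation; the packet-length constraint and the three coverage bins are invented for illustration.

```python
import random

# Constrained-random stimulus: generate only legal packet lengths (1..64),
# weighting short packets more heavily, as a constraint solver might.
def random_packet_length():
    return random.choices(range(1, 65), weights=[4] * 16 + [1] * 48, k=1)[0]

# Functional coverage: the bins the verification plan cares about.
coverage_bins = {"short (1-16)": False, "medium (17-48)": False, "long (49-64)": False}

def sample_coverage(length):
    if length <= 16:
        coverage_bins["short (1-16)"] = True
    elif length <= 48:
        coverage_bins["medium (17-48)"] = True
    else:
        coverage_bins["long (49-64)"] = True

# Run random stimulus until every bin is hit or the budget is exhausted.
random.seed(0)
for _ in range(1000):
    sample_coverage(random_packet_length())
    if all(coverage_bins.values()):
        break

print(coverage_bins)
```

Once every bin reports as hit, the associated feature is considered verified, even though no test was hand-written to target it.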
Despite wide use of static and formal checking, simulation remains the dominant method for functional verification. The figure below shows how the simulation flow works on a typical chip project. The verification engineers write a test plan that enumerates the features in the design and describes the coverage associated with each. Simulation tests using constrained-random stimulus run with functional coverage enabled, usually along with some code coverage.
If there are any test failures, they must be debugged, usually by the verification engineers and designers working together. Once a problem has been tracked to its source and diagnosed, the design is fixed and the tests are re-run in simulation. Passing tests are collected in a regression suite that continues to run in simulation throughout the project. This catches any new bugs introduced by incorrect fixes or other changes to the design.
Although the above flow is easy to understand, it can be challenging in practice. Most of the verification time is spent in debug, trying to figure out why tests are failing. Sometimes it’s a design bug, and sometimes it’s an error in the testbench. Late-stage changes to the design, for example to improve logic synthesis results, can have ripple effects that cause many previously passing regression tests to start failing. Bug closure can be a long, painful, iterative process.
In parallel, the verification team is trying to achieve as close to 100% coverage as possible. Not achieving coverage goals may be due to insufficient testing, testbench errors, or design bugs that block some of the chip functionality. The verification team often tweaks constraints to try to “steer” the stimulus toward uncovered design features. This is another highly iterative task that frequently delays tape-out or forces the team to settle for coverage below the target.
Experienced verification engineers know that design change is one of the primary reasons for so many iterations of the simulation-debug loop. Since design changes are inevitable, the verification team must find a way to reduce their impact on project schedule and resources. Because bugs are frequently found in changed logic and its local neighborhood in the design, focusing verification on the areas of change finds bugs earlier and accelerates coverage closure.
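The core idea of change-based test selection can be sketched in a few lines. This is a simplified illustration of the general principle, not Synopsys's implementation; the module names, test names, and connectivity map are all invented for the example.

```python
# Which design modules each regression test exercises (illustrative data).
test_coverage = {
    "test_fifo_basic":   {"fifo"},
    "test_fifo_stress":  {"fifo", "arbiter"},
    "test_arb_fairness": {"arbiter"},
    "test_dma_burst":    {"dma", "arbiter"},
    "test_cpu_boot":     {"cpu", "dma"},
}

# Structural neighbors of each module: the "local neighborhood" where
# bugs caused by a change are also likely to surface.
neighbors = {
    "fifo":    {"arbiter"},
    "arbiter": {"fifo", "dma"},
    "dma":     {"arbiter", "cpu"},
    "cpu":     {"dma"},
}

def select_tests(changed_modules):
    """Return only the tests touching a changed module or one of its neighbors."""
    affected = set(changed_modules)
    for m in changed_modules:
        affected |= neighbors.get(m, set())
    return sorted(t for t, mods in test_coverage.items() if mods & affected)

# A change to the FIFO selects FIFO tests plus tests of its neighbor, the arbiter,
# while skipping the unrelated CPU boot test.
print(select_tests({"fifo"}))
```

In a real flow the coverage map and connectivity would come from design analysis rather than hand-written tables, but the selection principle is the same: run the tests whose footprint overlaps the change and its neighborhood.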
A novel technique introduced in Synopsys Verification Space Optimization AI (VSO.ai™) called “Change Based Verification” (CBV) improves verification confidence and quality, while closing coverage more quickly. Since regressions usually take multiple days to run, most teams have a smaller “smoke suite” of tests to run after design changes. VSO.ai CBV adds intelligent automation to this process by analyzing each design change and selecting only the tests most relevant to it.
Running an intelligently targeted set of tests takes less time and fewer resources than a full smoke suite run. VSO.ai CBV might reduce a typical smoke suite of 100 tests to 10. Since the simulation-debug loop runs thousands of times over the course of a chip project, the savings in time and resources are significant. Verification engineers, and even design engineers, can gain high confidence after design changes by automatically running an efficient, focused set of tests.
As each design change is analyzed and regressions are run, VSO.ai CBV gathers information related to test failures in the simulations and builds a machine learning (ML) database. Thus, the decisions made on which subset of tests to run for each subsequent change are informed by the history of verification for the chip design. It is even possible that a test not in the current smoke suite could be selected based on the project-wide knowledge in the database.
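The benefit of accumulating a failure-history database can be shown with a small sketch. This illustrates only the general idea of history-informed test ranking, not VSO.ai internals; the file names, test names, and history records are hypothetical.

```python
from collections import defaultdict

# Illustrative history: for each past change, which files changed and
# which tests subsequently failed.
history = [
    ({"fifo.sv"}, {"test_fifo_basic"}),
    ({"fifo.sv", "arbiter.sv"}, {"test_fifo_stress", "test_arb_fairness"}),
    ({"arbiter.sv"}, {"test_arb_fairness"}),
]

def rank_tests(changed_files):
    """Score each test by how often it failed after past changes to these files."""
    scores = defaultdict(int)
    for past_files, failed_tests in history:
        if past_files & changed_files:  # this past change touched the same files
            for t in failed_tests:
                scores[t] += 1
    # Highest historical failure correlation first; ties broken alphabetically.
    return sorted(scores, key=lambda t: (-scores[t], t))

# For a new arbiter change, the fairness test ranks first because it has
# failed after arbiter-related changes twice before.
print(rank_tests({"arbiter.sv"}))
```

Note that this ranking can surface a test with a strong failure history even if it is not in the current smoke suite, which is the project-wide-knowledge effect described above.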
VSO.ai CBV has proven effective on many real-world chip projects. Results have consistently shown a left shift in both failure detection and verification of design changes.
Engineers from Intel presented their experiences with VSO.ai CBV in the talk “Accelerating Change Verification Confidence with VSO.ai: Experiences from Intel Server SoC and Graphics IP Designs” on March 12th at SNUG Silicon Valley, part of the 2026 Synopsys Converge event. In addition, a white paper with the results from another leading-edge project is available from Synopsys.
Finding bugs and converging to coverage goals faster and with less effort have clear value to chip development teams. VSO.ai CBV uses AI and ML to provide these benefits, with more to come as this new technique evolves and adds more capabilities. Adopting it now produces better chips more quickly and establishes the foundation for future innovation.