With unrelenting increases in design complexity and shrinking time-to-market windows, the chip verification debug cycle continues to dominate, consuming more than 50% of the time spent in verification. At the same time, the number of chip design and verification engineers per project remains relatively constant. Artificial intelligence (AI)-driven debug automation is essential to deliver a much-needed productivity boost: it augments engineering resources and addresses the challenge of catching more bugs before tapeout and avoiding costly re-spins.
Looking closer at the RTL design and verification loop: after the latest RTL changes are checked in, the regression runs and comes back with a list of hundreds or possibly thousands of failing tests that need to be debugged. The debug environment needs to be predictive. It should understand the RTL being checked in and which engineering resource is checking in that code (the design team or the verification team), so it can automatically assess where failures are coming from. With most debug environments, however, this is not the case. Most debug solutions impose a manual process for debugging these failures, which can be daunting. The failures are manually categorized and sorted into “bins” based on the type of error reported. The bins are then manually triaged to determine whether the problem(s) reside in the design or the testbench. Finally, root-cause analysis (RCA) is performed to pinpoint the actual bug triggering each test failure. This manual process is iterative, consumes valuable project time, ties up expensive resources, and is error-prone.
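The first step of that manual flow, sorting failures into bins by error signature, is mechanical enough to illustrate in code. The sketch below assumes failures arrive as (test name, first error line) pairs; the pattern names and message signatures are illustrative assumptions, not taken from any particular simulator or debug tool.

```python
import re
from collections import defaultdict

# Hypothetical error signatures; a real flow would derive these from the
# simulator's and testbench's actual message formats.
BIN_PATTERNS = [
    ("UVM_ERROR", re.compile(r"UVM_ERROR")),
    ("ASSERTION", re.compile(r"Assertion .* failed")),
    ("TIMEOUT",   re.compile(r"timeout", re.IGNORECASE)),
]

def bin_failures(failures):
    """Group failing tests into bins keyed by the first matching error
    signature; failures matching no pattern land in 'UNCLASSIFIED'."""
    bins = defaultdict(list)
    for test_name, log_line in failures:
        for bin_name, pattern in BIN_PATTERNS:
            if pattern.search(log_line):
                bins[bin_name].append(test_name)
                break
        else:  # no pattern matched this failure
            bins["UNCLASSIFIED"].append(test_name)
    return dict(bins)

# Example regression results (fabricated for illustration).
failures = [
    ("test_axi_burst", "UVM_ERROR @ 1200ns: scoreboard mismatch"),
    ("test_reset_seq", "Assertion chk_rst failed at 40ns"),
    ("test_dma_long",  "Simulation timeout after 10ms"),
    ("test_irq_storm", "unexpected exit code 134"),
]
bins = bin_failures(failures)
```

Even this crude signature matching removes the rote sorting step; the harder parts an AI-driven flow targets are the triage (design vs. testbench) and RCA stages, which need knowledge of what was checked in and by whom.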