The semiconductor industry is at a pivotal moment. With $192.7 billion in venture capital flowing into AI in 2025, the demand for verification platforms that match rapid innovation cycles is surging. As AI, multi-die architectures, and edge computing drive exponential complexity, traditional verification methods struggle to keep pace with requirements spanning IP, sub-systems, chiplets, and multi-die verification for AI-driven workloads.
Hardware-assisted verification (HAV) is essential for ensuring functionality, power, and performance. Design teams must optimize total cost of ownership by investing in future-proof verification systems: systems that scale to emulate the largest AI chips and allow verification hardware to be reused across emulation and prototyping use cases.
While new verification hardware for emulation and prototyping is critical for boosting productivity, the associated workload application software is now even more important: it must enable continuous scaling, increased throughput, and expanded use cases on existing hardware, accelerating time-to-market.
Just as “software-defined” advances transformed smartphones, automotive, data centers, and networking & IoT, we have now entered the era of Software-Defined Hardware-Assisted Verification (HAV).
So why is that such a big deal?
To better understand today’s verification challenges, let’s look at Figure 1 below, which illustrates the compounding complexity increases across software, hardware, interfaces, and verification use cases.
Figure 1: Compounding Verification Challenges
Image credits: Synopsys; AI and Memory Wall, arXiv:2403.14123 (arxiv.org); Baya Systems, https://bit.ly/4hDXCe9; Visual Capitalist, http://bit.ly/46B99HW
Let’s dig a bit deeper into this.
AI applications are characterized by software programs (called workloads) and large language models (LLMs). Workload complexity is measured in lines of software. Although applications have become more specialized, contained to tasks such as AI training versus inference, or generating text versus images versus video, software complexity within each scope continues to grow: end users expect more from these AI applications, so they evolve rapidly to stay competitive. In addition, we face a wave of AI LLMs serving generative AI and inference needs from the data center to the edge, with models doubling in size every four months.
Figure 2: A tidal wave of AI training
Source: Visual Capitalist, “Eight Years of Consumer AI Deployment in One Giant Timeline”, http://bit.ly/46B99HW, 2025
This also means that to compete in today’s world of ever more complex and specialized applications, customized hardware is required to meet functionality, power, and performance targets. At the same time, that hardware must be scalable enough to support a variety of applications and future-proof enough to handle the AI algorithms of the next few years.
Let’s take NVIDIA as an example.
After the announcement of Blackwell, NVIDIA announced the Rubin AI Platform and the Rubin Ultra family to be available in 2026 and 2027, respectively. When the Rubin CPX architecture, purpose-built for massive-context inference applications, was announced in September 2025, NVIDIA also reported massive software-driven improvements for the existing hardware products, including 2x Blackwell performance since its launch, 4x performance improvement for Hopper during its lifetime so far, and 6x better throughput enabled by the Dynamo software.
For end users, this yielded, at the system level, a 2-4x speedup on Llama, up to a 6x improvement in first-token latency, and 3x higher token output.
So hardware and software grow more complex and more specialized, while software-defined systems ensure that software upgrades can be delivered throughout the hardware’s lifetime. Designing silicon for these software-defined systems is further complicated by the physical limits facing Moore’s Law; the way to scale to bigger designs has been multi-die systems. While creating each die resembles developing and verifying an SoC, assembling the chiplets creates its own unique challenges, and hardware is still doubling in size every 18 months thanks to More than Moore innovations. Add to that the fact that chiplets will increasingly come from different vendors, introducing ecosystem plays that require coordination on communication protocols and verification methodologies.
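To see how these trends compound, it helps to annualize the doubling periods cited in this article (models every four months, hardware every 18 months, interface bandwidth every two years). The short sketch below is purely illustrative; the function name and the printed comparison are our own, with the doubling periods taken from the figures quoted here.

```python
# Illustrative only: annual growth factor implied by a doubling period.
# Doubling periods are those quoted in this article; everything else is
# a hypothetical helper for the sake of the comparison.
def annual_growth(doubling_months: float) -> float:
    """Growth factor over 12 months, given a doubling period in months."""
    return 2 ** (12 / doubling_months)

trends = [
    ("LLM size (doubles every 4 months)", 4),
    ("hardware design size (doubles every 18 months)", 18),
    ("interface bandwidth (doubles every 24 months)", 24),
]

for name, months in trends:
    # e.g. a 4-month doubling period compounds to 2**3 = 8x per year
    print(f"{name}: {annual_growth(months):.2f}x per year")
```

The point of the comparison: model size grows 8x per year, roughly five times faster than the hardware and interface curves it must run on, which is exactly the gap that verification has to bridge pre-silicon.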
As a result, the rapid evolution of interface IP implementing these communication protocols has become a crucial contributor to verification complexity. To feed AI algorithms with more data, whether to provide more intelligent insights or to autonomously perform specific tasks, communication protocols evolve at an astounding pace, as illustrated in Figure 3, now doubling in bandwidth every two years.
Figure 3: PCIe and Ethernet Evolution
Furthermore, all this data has to be stored and read back at a very fast pace to reduce the lag between a request to an AI bot or agent and the actual action. The innovation in memory architectures, as illustrated in Figure 4, has been critical to push forward the ability to support the latest AI computing architectures.
Figure 4: HBM Innovations
Finally, use cases define the scope of what needs to be verified, and new use cases are emerging at a record pace under pressure to characterize, and then optimize, the end product with its real workloads running.
Not only is software complexity growing faster than ever before, driving new demands on multi-die system scaling and interface IP protocol innovation, but the pace of new silicon has also increased rapidly to stay competitive in this fast-evolving AI landscape. This is forcing developers to “shift left” their hardware/software verification and validation to meet time-to-market requirements and avoid silicon re-spins.
As a result, the scope of verification continues to grow dramatically to ensure that key metrics (functionality, power, performance, throughput, latency, security, safety, scalability, and more) meet product requirements. This has led to new verification use case requirements that pull most of these tasks into the pre-silicon phase.
The growing complexity of software, hardware, and interfaces, combined with expanding pre-silicon use cases, is driving demand for quadrillions of verification cycles. And not all these cycles are equal: RTL verification, software validation, and performance validation each have different requirements. Just as in the AI space, this means there is a need both for more specialized verification hardware, for example to reach the maximum capacity that must be supported, and for that hardware to be flexible and future-proof enough to handle a growing and changing set of verification requirements.
Just as software-driven updates continually enhance data centers and vehicles, we are now in the era of software-defined hardware-assisted verification, which must deliver ongoing improvements and flexibility.
At Synopsys, we’re committed to empowering innovation through continuous re-engineering. And we have engineered our hardware-assisted verification systems with the future in mind. Our software-defined HAV solutions enable engineers to scale verification across industries, meeting ever-expanding pre-silicon demands.
Software-defined HAV is transforming verification. Let’s make progress together - one software improvement at a time.