Verification News 

Increasing Verification Efficiency using Virtualization and Reuse of System-level models 

By Frank Schirrmeister, Director of Product Marketing, Systems-Level Solutions Group at Synopsys

Recent market research indicates that the development effort for software running on 90nm chip designs has already surpassed the hardware development effort. The projection for 2011 is that less than 40% of the chip development cost will be spent on hardware. Software now dominates project cycles and determines when a chip can get into volume production.

In addition, the industry is facing the challenge that power envelopes have effectively stopped the traditional scaling of processor performance. To meet stringent low energy consumption requirements, design teams are adding multiple processors to their designs, in turn increasing software development challenges because traditional, sequential software now needs to utilize multi-core architectures. As a result, the importance of software verification increases and the software itself takes on a new role as a component in the hardware verification process.

Using statistical project data, this article confirms common wisdoms regarding the importance of hardware verification and the growing importance of software development itself. It also examines how traditional hardware verification techniques can be incrementally augmented. First, virtualization of embedded hardware components like embedded processors and peripherals improves verification turnaround time. Second, embedded software itself becomes the reference for hardware verification, running as an executable on processor models. Third, hybrid offerings of virtual platforms and hardware prototypes for early software development and verification combine the advantages of both solutions and further increase verification efficiency. Finally, the reuse of system-level reference models and testbenches, originally used for algorithm optimization and validation, improves verification productivity for blocks in the digital signal processing space.

Common System Design and Verification Wisdoms
Some of the most common system design and verification related wisdoms are that (1) verification requires 70% of the effort, (2) software effort is becoming dominant, and (3) hardware and software need to move closer together, with a specific need for hardware-related software development to start as early as possible and to be as productive as possible.

While in theory the development of firmware and driver software can be done “blind” based on register specifications provided by the hardware teams, in reality the integration with the hardware is often a source of unforeseen surprises. In the best case, any integration issues can be corrected in software without having to re-spin the silicon, causing relatively minor but still important delays. The worst case scenario of a silicon re-spin can be catastrophic for a program. As a result, development teams try to start software development and software-driven verification as early as possible.
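As a rough illustration of this “blind” development style, the sketch below shows driver code written in plain C++ purely against a register specification. The two-register device and its behavior are entirely hypothetical; in a real project the `Device` stand-in would be the actual silicon or a model of it, which is exactly where the integration surprises appear.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical register map, as it might appear in a hardware spec.
constexpr uint32_t REG_CTRL   = 0x00;  // bit 0: enable
constexpr uint32_t REG_STATUS = 0x04;  // bit 0: ready

// Stand-in for the hardware: a simple register file keyed by offset.
struct Device {
    std::map<uint32_t, uint32_t> regs;
    void write(uint32_t off, uint32_t val) {
        regs[off] = val;
        // Modeled device behavior: enabling the block raises the ready flag.
        if (off == REG_CTRL && (val & 0x1)) regs[REG_STATUS] |= 0x1;
    }
    uint32_t read(uint32_t off) { return regs[off]; }
};

// Driver routine written "blind" against the register spec above.
bool enable_and_wait_ready(Device& dev) {
    dev.write(REG_CTRL, 0x1);
    return (dev.read(REG_STATUS) & 0x1) != 0;
}
```

If the silicon implements the ready flag differently than the spec the driver was written against, `enable_and_wait_ready` fails only at integration time, which is the risk early software-driven verification is meant to reduce.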

Today, three different basic techniques to execute software on a hardware representation have found adoption. First, in a derivative design, a portion of the software can be developed using the previous-generation chip. This approach often works best for the portions of the software higher up in a layered software-architecture, specifically the hardware independent application software. However, given that the register fields, IP and functionality are updated and enhanced from one chip generation to the next, this approach is difficult for lower-level portions of the software like drivers and middleware.

Second, later in the design flow, after the RTL is complete and has reached a stable state using functional verification techniques, FPGA prototypes can be used. They are pre-silicon, fully functional hardware representations of the SoC, board and I/O, implementing unmodified ASIC RTL code. Optimally implemented, they can run at almost real-time speed with external interfaces and stimulus connected and, in conjunction with RTL simulation, provide higher system visibility and control than the actual silicon prototype. The Synopsys CHIPit and HAPS offerings are good examples of hardware prototypes and offer significantly higher speed levels than traditional hardware/software co-verification, which combines RTL simulation with cycle-accurate processor models.

Third, virtual platforms offer a solution much earlier in the design cycle, as soon as the architecture of the design has been worked out. Virtual platforms are a pre-RTL, register-accurate and fully functional software model of the SoC, board, I/O and user interfaces. They execute unmodified production code and run close to real-time with external interfaces like USB available as "virtual I/O". Because they are fundamentally software, virtual platforms provide high system visibility and control, including multi-core debug. They can also serve as an elegant vehicle of collaboration between semiconductor vendors and system houses. Since the recent standardization of the OSCI TLM-2.0 transaction-level APIs, SystemC™ has become a suitable infrastructure for developing fast virtual platforms using interoperable transaction-level models, and is supported by a variety of commercial products including the Synopsys Innovator product line.
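To make the transaction-level idea concrete, here is a minimal plain-C++ sketch that mirrors the style of a TLM-2.0 blocking transport call. A real virtual platform would use SystemC's `tlm_generic_payload` and `b_transport`; the timer peripheral and its register map are invented for illustration.

```cpp
#include <cstdint>

// Stripped-down stand-in for a TLM-2.0 generic payload.
enum class Command { READ, WRITE };

struct Payload {
    Command  cmd;
    uint32_t address;
    uint32_t data;
    bool     ok = false;   // response status
};

// Register-accurate timer model with a hypothetical register map:
// 0x0 = LOAD (read/write), 0x4 = VALUE (read-only).
class TimerModel {
    uint32_t load_ = 0, value_ = 0;
public:
    void b_transport(Payload& p) {            // blocking transport call
        switch (p.address) {
        case 0x0:
            if (p.cmd == Command::WRITE) { load_ = p.data; value_ = p.data; }
            else p.data = load_;
            p.ok = true; break;
        case 0x4:
            if (p.cmd == Command::READ) { p.data = value_; p.ok = true; }
            break;
        default:
            p.ok = false;                     // address decode error
        }
    }
};
```

Because the model is register accurate, unmodified driver code that programs the LOAD register and polls VALUE behaves the same on the model as it would on the RTL, which is what makes the virtual platform useful for early software work.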

Depending on the required accuracy, speed and desired time of availability, different technologies offer the most appropriate solution for software development and software driven hardware verification. Often hybrid offerings allow designers to capitalize on the advantages of several offerings. For example, RTL simulation can be augmented with fast transaction-level models of processors and peripherals to increase simulation speed and verification coverage. Alternatively, virtual platforms and FPGA prototypes can offer better solutions to design problems in hybrid use modes combining software and hardware based execution than the individual offerings can by themselves.

Confirming the Common Verification Wisdoms Using Statistical Project Data
Table 1 outlines the key characteristics of 12 recent projects including complex hardware and software, which were analyzed for effort and elapsed time. Application domains spanned wireless, consumer, data communications, video processing and graphics. The process nodes ranged from 90nm to 65nm, with one project at 130nm. The average gate count was 12M gates, with an average of 14Mb of on-chip memory and, on average, 10% of the chip being analog. Reuse of hardware and software averaged 70% and 38%, respectively. The number of on-chip processors ranged from 1 to 4. On average, the software development effort already comprised 45% of the overall project development effort.

Table 1: Key characteristics of 12 projects (Source: IBS, Synopsys 2007)

In terms of elapsed development time, it took on average 62 weeks for the hardware development to get from requirements to GDSII layout representation. On the software side, porting of operating systems took an average of 20 weeks, utility development 32 weeks and application development 44 weeks.

The detailed project data in Figure 1 for those same projects shows that the single most dominant factor for the effort was application software development, averaging 30% of the project effort, followed by RTL verification combined with netlist development, averaging 21%. The next biggest efforts were hardware-dependent utility software development at 13% and qualification of hardware IP at 11%. In summary, this data confirms the trend towards software as well as the importance of verification.

Average elapsed time per development task as percentage of elapsed time from requirements to tape out

Effort in man weeks as percentage of the overall effort for hardware and software development
Figure 1: Key project data of 12 projects (Source: IBS, Synopsys 2007)

To summarize, the statistical data seems to confirm the common verification wisdoms:
  1. RTL verification consumes an average of 21% of the overall hardware/software effort, or 38% of the hardware effort itself. In addition, it consumes about 55% of the elapsed time from requirements to GDSII. Verification continues in different variations throughout the flow towards tape-out, and the large percentage of effort spent on IP qualification can be considered another form of verification as well. Hence, verification easily reaches the often mentioned 70% figure as a percentage of hardware development. It is certainly the most significant issue for the hardware portion of the development.
  2. Across these 12 projects the importance of software was significant, consuming between 28% and 62% of the overall effort, with an average of 45%.
  3. Relative to the time it took for the hardware alone to get from requirements to GDSII layout representation, developing the associated software took 32% for OS porting, 54% for utility software and 72% for application software. With an average of 62 weeks (more than a year) to get to GDSII, a fully serial development process in which software development starts when engineering samples are available would add another half to three quarters of a year to the project schedule. Unless developed in parallel, the software delays the ability to ship hardware in volume, delaying the time to ROI (return on investment).

In order to properly understand how software and hardware development can overlap using different technologies, it is important to understand the elapsed time for each of the development phases. Table 2 also shows the elapsed time for each of the phases as a percentage of the elapsed time from frozen requirements to tape-out (note that the percentages of the individual phases do not add up to 100% as they overlap).

It becomes clear that on average a stable specification – the prerequisite for virtual platforms – is available after 17% of the project time, while it takes almost 70% of the time from requirements to tape-out to arrive at stable RTL – the prerequisite for hardware prototypes. Virtual platforms and hardware prototypes are available at very different times in a project and are therefore applicable to very different development phases. In reality they are complementary rather than competitive solutions for early software development and verification.

Enhancing Verification Efficiency Using Virtualization
As the SoC design cycle progresses, a virtual platform that was made available early for software development can evolve to meet different needs. There are three main use models of "software-driven verification", which utilize the integration of virtual platforms with signal-level simulation at the RT-level:

  • When an RTL block becomes available, for example, it can replace its transaction-level model in the virtual platform. Software can then be verified on this version of the platform as a way to validate both hardware and software. Knowing that real system scenarios are used increases verification confidence. Furthermore, simulation remains fast, given that as much of the system as possible is still simulated at the transaction level.
  • The virtual platform can also provide a head start towards RTL verification testbench development and post-silicon validation tests by acting as a testbench component running actual system software. The virtual platform can be used to generate system stimuli to test RTL, and then verify that the virtual platform and RTL function in the same way. Users can efficiently develop "embedded directed software" tests on the TLM model, which can also be used for system integration testing. As a result, the productivity of verification test case development increases.
  • Additionally, as portions of the virtual platform are verified as equivalent to their corresponding RTL, the virtual platform can become a golden or reference executable specification. As a result, users gain a single golden testbench for the transaction level and the RT-level.
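The comparison behind these use models follows the classic scoreboard pattern: the same software-driven stimulus is applied to the golden reference and to the device under test, and every response is checked for equality. In the plain-C++ sketch below both "models" are trivial placeholder functions; in practice one side would be the TLM reference and the other the RTL simulation.

```cpp
#include <cstdint>
#include <vector>

// Placeholder golden reference (stands in for the TLM model).
uint32_t reference_model(uint32_t in) { return in ^ 0xFFFFFFFFu; }

// Placeholder DUT (stands in for the RTL simulation).
uint32_t dut(uint32_t in) { return ~in; }

// Scoreboard: drive identical stimulus into both sides and compare.
bool scoreboard(const std::vector<uint32_t>& stimulus) {
    for (uint32_t s : stimulus)
        if (reference_model(s) != dut(s))
            return false;   // mismatch: flag a verification failure
    return true;
}
```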

The transactor interface between virtual platforms using TLMs and traditional RTL can be written in SystemVerilog so that the bus functional model is synthesizable, enabling co-execution with hardware-based environments. Alternatively, the transactor can be written in SystemC, with the interface to RTL simulation at the signal level.
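Conceptually, a transactor expands each high-level transaction into a per-cycle pin sequence. The plain-C++ sketch below illustrates the idea with an invented single-beat write bus; a real transactor would drive SystemVerilog or SystemC signals each clock cycle instead of returning a vector.

```cpp
#include <cstdint>
#include <vector>

// Pin-level view of a trivial single-beat write bus (hypothetical).
struct BusPins {
    uint32_t addr  = 0;
    uint32_t wdata = 0;
    bool     valid = false;
    bool     write = false;
};

// Transactor: expand one high-level write transaction into the
// per-cycle pin values a bus functional model would drive onto RTL.
std::vector<BusPins> write_transactor(uint32_t addr, uint32_t data) {
    std::vector<BusPins> cycles;
    cycles.push_back(BusPins{addr, data, true, true});  // cycle 0: request
    cycles.push_back(BusPins{});                        // cycle 1: idle
    return cycles;
}
```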

Figure 2 and Figure 3 illustrate a USB OTG example in the Synopsys Innovator virtual platform development environment and a USB verification environment using TLM processor models and embedded software, respectively.

Figure 2: USB Example in Synopsys Innovator

Figure 3: USB Verification environment

Even when a virtual platform has not been available from the start of the project, virtualization of hardware components can be very important to incrementally increase verification efficiency starting from an RTL verification environment:

  • Replacing the RTL representation of on-chip processors in the system with virtual processor models at the transaction level can significantly increase simulation speed, which in turn shortens verification turnaround time. In concrete customer examples we have seen up to a 32x speed-up of simulation when replacing a single processor model. In the same examples the execution of the virtual platform itself was about 7000x faster than RTL, while still being functionally and register accurate to allow embedded software development.
  • Incorporating software drivers in functional RTL verification to execute real product test cases does not require a complex virtual platform. Only the appropriate sub-system needs to be modeled and connected to RTL simulation. This can be as easy as adding a transaction-level processor model from a library, connecting it via a simple bus model to the transaction-level model of the peripheral under verification, and connecting that to RTL (see Figure 2). The Synopsys DesignWare® System-Level Library, for example, contains over 100 transaction-level models of popular processor architectures like ARM®, MIPS® and PowerPC®, as well as models for most of the Synopsys DesignWare cores like USB, SuperSpeed USB, PCI, SATA, etc. For verification environments using these models, no new model development is needed.
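The "simple bus model" mentioned above amounts to little more than an address decoder that forwards a processor model's memory-mapped accesses to the peripheral under verification. A minimal plain-C++ sketch (with invented address ranges and handler signatures) might look like this:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <utility>

// Minimal bus model: decode an address, forward the write to the
// bound slave with a slave-local offset. Ranges are illustrative.
struct SimpleBus {
    // base address -> (size, write handler)
    std::map<uint32_t,
             std::pair<uint32_t, std::function<void(uint32_t, uint32_t)>>> slaves;

    void bind(uint32_t base, uint32_t size,
              std::function<void(uint32_t, uint32_t)> wr) {
        slaves[base] = {size, std::move(wr)};
    }

    bool write(uint32_t addr, uint32_t data) {
        for (auto& [base, s] : slaves)
            if (addr >= base && addr < base + s.first) {
                s.second(addr - base, data);   // forward with local offset
                return true;
            }
        return false;                          // address decode error
    }
};
```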

Hybrid Prototypes for Embedded Software Development and Verification
Hardware prototypes can be used to further increase verification efficiency by increasing the simulation speed and the execution speed of the embedded software in the system. Given that virtual platforms and hardware prototypes become available at fundamentally different stages of a project, hybrid solutions are a viable way to let developers capitalize on the advantages of both worlds. While virtual platforms are available very early in the design flow – often only weeks after the specification has stabilized – they typically do not represent the full implementation detail which FPGA prototypes can expose. In contrast, FPGA prototypes run the design at full accuracy and at fairly high speed levels, but become available later in the design flow, though still long before silicon returns from production.

It takes three technology components to enable hybrid solutions of virtual platforms and FPGA prototypes. Starting on the hardware side, physical interfaces must be provided to connect the actual hardware prototype to the workstation running the simulation. PCI Express is a common solution here and is used, for example, by the Synopsys HAPS and CHIPit solutions. Second, data must be transported using an agreed-upon protocol between the virtual platform running on the workstation and the implementation executing on the FPGA prototype. SCE-MI has become a standard in this domain. Finally, for conversion from the transaction-level model to the transport interface, transactors are necessary to translate high-level TLM interfaces to the pin level, allowing mixed abstraction-level simulation.
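The transport step boils down to packing a transaction into a flat message before it crosses the workstation-to-FPGA link, and unpacking it on the other side. The plain-C++ sketch below shows the idea; the byte layout is purely illustrative and is not the actual SCE-MI message format.

```cpp
#include <cstdint>
#include <vector>

// Illustrative transaction to be shipped across the link.
struct Txn { uint8_t cmd; uint32_t addr; uint32_t data; };

// Pack into a flat little-endian byte message (layout is an assumption).
std::vector<uint8_t> pack(const Txn& t) {
    std::vector<uint8_t> msg{t.cmd};
    for (int i = 0; i < 4; ++i) msg.push_back((t.addr >> (8 * i)) & 0xFF);
    for (int i = 0; i < 4; ++i) msg.push_back((t.data >> (8 * i)) & 0xFF);
    return msg;
}

// Unpack on the receiving side; must mirror pack() exactly.
Txn unpack(const std::vector<uint8_t>& m) {
    Txn t{m[0], 0, 0};
    for (int i = 0; i < 4; ++i) t.addr |= uint32_t(m[1 + i]) << (8 * i);
    for (int i = 0; i < 4; ++i) t.data |= uint32_t(m[5 + i]) << (8 * i);
    return t;
}
```

A round trip through `pack` and `unpack` must reproduce the original transaction; agreeing on this encoding on both sides of the link is exactly what a transport standard such as SCE-MI provides.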

Figure 4: Hybrid between Virtual Platform and FPGA Prototype

A conceptual diagram of a hybrid between a transaction-level model and a hardware prototype is illustrated in Figure 4. There are several hybrid use models combining the advantages of virtual platforms and FPGA prototypes:

  • RTL Reuse: Given the high IP re-use rates indicated above, RTL may exist from previous projects or may be acquired. While more and more IP users request high-level models as part of an IP purchase, such models may not always be available. Hybrids of virtual platform and FPGA prototype allow a virtual platform to re-use existing RTL and avoid the modeling effort for potentially complex IP blocks. Given that FPGA prototype execution is essentially cycle accurate, this also often increases overall fidelity.
  • Accelerated Software Execution: Because FPGA implementations are optimized for algorithm execution rather than processor implementation, software typically runs faster on workstations and virtual processor models than in FPGA prototypes. Hybrids of virtual platform and FPGA prototype with processor models on the workstation allow overall faster execution while maintaining the accuracy of accelerators and peripherals.
  • Virtual Platform as testbench for FPGA prototype: Given that verification often starts at the pre-RTL level for validation purposes, system-level development efforts can be re-used for the actual RTL verification. Hybrids of virtual platform and FPGA prototype with the virtual platform acting as testbench avoid duplicate effort and enhance model re-use.
  • Joint system environment connections: For popular interfaces like USB and SATA, virtual platforms already provide real-world and virtual I/O interfaces, for example connecting to physical USB devices. In addition, daughter cards in FPGA prototypes provide real-world I/O with interfaces to real-life streams like wireless physical interfaces. Hybrids of virtual platform and FPGA prototype with real-world I/O on both sides allow real-world stimulus to be used where it is most appropriate.
  • Virtual platform "Virtual ICE" connected to FPGA prototype: Re-use of the virtual development environment running in a virtual platform – including, for example, disks, USB virtualization and visualization – allows better access to FPGA prototypes and decreases set-up time. Hybrids of virtual platform and FPGA prototype with the virtual platform executing the development environment avoid additional development effort, allow the FPGA prototype to be kept remote, and increase familiarity for software developers, who often prefer to just see a keyboard and screen.

Figure 5: System Studio Model connected to RTL via VMM

Reusing System-level Models
Re-use of system-level models can increase verification efficiency even further, extending beyond embedded software and the virtualization of embedded hardware for early and fast execution of embedded software in verification. This is achieved by avoiding the re-coding of testbenches and reference models which have already been developed elsewhere.

A typical design flow for algorithmic blocks like modems, video and audio processing blocks and general communication systems starts with a floating-point representation of the design to optimize the algorithm itself. Through several steps of refinement these models are converted to fixed-point arithmetic, reflecting implementation effects, and are finally implemented in RTL. During this process, environments like Synopsys System Studio allow the efficient re-use of testbenches to ensure functional correctness of the model using simulation.

When it comes to verifying the RTL implementation of such a block, the system-level model can serve as a verification reference. It is exported as a plain C++ or SystemC model, which can be connected to RTL using the VMM verification environment. This allows the early work performed by the algorithm designer to be efficiently reused in hardware verification, and it eliminates the communication errors often introduced by manual verification plans by directly connecting an executable verification reference to the RTL simulation.

Figure 5 illustrates such an environment, in which a System Studio reference model, exported as a SystemC model, is connected using VMM to VCS RTL simulation as an executable verification reference.

With verification still being one of the key issues determining project effort and timelines, and with software's increasing influence on project success, smart verification that takes embedded software into account becomes more and more important.

Using virtualization of embedded hardware, verification efficiency can be improved both bottom-up, starting from RTL verification, and top-down, starting with virtual platforms originally intended for early pre-silicon software development. Bottom-up, efficiency is gained by augmenting traditional RTL simulation with virtualized transaction-level models of processors and peripherals, increasing simulation speed and directly executing executable reference models as part of the testbench. Top-down, efficiency is increased by re-using existing virtual platforms and their models, which provide a head start for verification scenario development by standing in for the RTL under verification until it is available, and which can then become a reference for the RTL verification that follows.

Hybrids of virtual platforms and FPGA prototypes as well as hybrids of RTL simulation and transaction level models allow developers to capitalize on the combined advantages of the individual solutions. The immediate effect on verification efficiency largely stems from faster execution of simulations, in turn enabling faster turnaround of verification.

Extending beyond embedded software, system-level testbenches and models of algorithmic blocks can be directly used as a verification reference after implementation. Verification efficiency is gained by avoiding the re-coding of models and testbenches which had already been made available as part of the algorithm development and optimization process.

With its unique portfolio of system-level and verification technologies – System Studio, Innovator, the DesignWare System-Level Library, VCS, HAPS, CHIPit and VMM – Synopsys provides all the key components to increase developers' verification efficiency.

Further Reading
For more information on virtual platforms and a specific case study check out our whitepapers: