Synopsys Insight Newsletter 


Issue 3, 2012

Technology Update:
The Benefits of Static Timing Analysis-Based Memory Characterization

Ken Hsieh, Synopsys product marketing manager for NanoTime, an advanced transistor-level static timing analysis solution, explains how extending the product to perform static timing analysis of memories enables design teams to improve turnaround time and verification coverage for designs with embedded memories.

Fast and accurate models at every stage of a design are essential if SoC designers are to succeed in designing chips with embedded memories. That is why embedded memory characterization is of increasing concern to design teams. The move to new process geometries exacerbates the challenge: the number of memory instances per chip rises significantly at advanced process nodes. To cover the full range of process, voltage, and temperature (PVT) corners and to account for sensitivity to process variation, designers must perform ever more memory characterization runs. On top of that, the data processing per characterization grows exponentially.

Modeling Memories: Two Approaches
There are two general approaches to creating memory models. The first is based on characterizing memory compiler-generated models (memory compilers are tools that quickly automate the creation of many different and unique memories); the other is instance-based memory characterization. In the compiler-based approach, the characterization process fits timing data to polynomial equations, with the design team deriving the coefficients from a small sample of memory instances. Although this approach lets design teams generate models very quickly, the resulting accuracy of the characterization is less than optimal.

To overcome these inaccuracies, memory designers perform characterizations specific to each memory instance over a range of PVT corners, but this takes time. Although there are several approaches to improving instance-based characterization throughput, all of them rely on SPICE or FastSPICE dynamic simulators and force a trade-off between performance and accuracy. Even then, they do not guarantee the full verification coverage needed to ensure silicon success.

Typically, design teams spend around 70% of the memory characterization engineering effort and time on timing analysis and model generation. With the traditional approach, given a specific scenario of input and clock transitions, designers use a dynamic simulator such as SPICE or FastSPICE to determine whether a particular sensitization leads to a timing violation in the circuit block. By simulating all possible sequences of transitions, a designer can determine whether the block under test can operate at the given clock frequency without timing violations. Unfortunately, the number of vectors required for such exhaustive validation is exponential in the number of inputs and state elements, so dynamic simulation is impractical for all but very small blocks.
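The exponential blow-up is easy to quantify. A minimal sketch (the function and numbers are illustrative; the true sequence count is even larger, since transitions between states multiply further):

```python
# Each input or state bit doubles the number of distinct conditions,
# so exhaustive stimulus grows as 2**(inputs + state bits). This is a
# lower bound: sequences of transitions multiply the count further.
def exhaustive_vectors(num_inputs: int, num_state_bits: int) -> int:
    return 2 ** (num_inputs + num_state_bits)

# A tiny block is tractable...
small = exhaustive_vectors(8, 4)        # 4,096 conditions

# ...but even a modest memory interface is not.
large = exhaustive_vectors(16, 16)      # over 4 billion conditions
```

This is why exhaustive dynamic simulation is confined to very small blocks, and why the article turns to static analysis next.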

Static Timing Analysis for Memories
Timing verification is the process of validating that a design meets its specifications by operating at a specific clock frequency without errors caused by a signal arriving too soon or too late. Transistor-level static timing technology has been available for well over a decade. Today, the technology has evolved and expanded to cover a selected range of memory blocks, such as single- and dual-port embedded SRAM, in addition to supporting the traditional “black-box” timing approach.

Unlike the dynamic simulation approach, static timing analysis (STA) tools remove the need for simulating the entire block under all possible scenarios. Instead, they use fast but accurate approaches to estimate the delay of sub-circuits within the block and use graph analysis techniques to quickly seek out the slowest and fastest paths in the block. The result is that an STA tool can typically find all timing violations in a block in a fraction of the time it would take a dynamic circuit simulator.
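The graph analysis the paragraph describes can be sketched as a single topological pass over a timing graph: arrival times are propagated along delay-annotated edges, and the slowest path falls out without enumerating any input vectors. The node names and delay values below are invented for illustration and do not represent any real memory netlist.

```python
from collections import defaultdict

# Toy timing graph: each edge carries a sub-circuit delay estimate (ns).
# Node names are illustrative stand-ins for memory path stages.
edges = {
    "clk":       [("wordline", 0.10)],
    "addr":      [("wordline", 0.12)],
    "wordline":  [("bitline", 0.25)],
    "bitline":   [("sense_amp", 0.15)],
    "sense_amp": [("dout", 0.05)],
}

def longest_arrival(edges):
    """Latest arrival time at every node via one topological pass."""
    indeg = defaultdict(int)
    nodes = set(edges)
    for u, outs in edges.items():
        for v, _ in outs:
            indeg[v] += 1
            nodes.add(v)
    ready = [n for n in nodes if indeg[n] == 0]   # primary inputs
    arrival = {n: 0.0 for n in nodes}
    while ready:
        u = ready.pop()
        for v, d in edges.get(u, []):
            # Relax: the slowest incoming path dominates.
            arrival[v] = max(arrival[v], arrival[u] + d)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return arrival

arr = longest_arrival(edges)   # arr["dout"] is the critical-path arrival
```

One linear-time pass replaces an exponential number of simulations; swapping `max` for `min` yields the fastest paths for hold analysis.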

STA-based timing analysis through control logic and memory core array
Figure 1: STA-based timing analysis through control logic and memory core array

The latest development in STA technology, Synopsys’ NanoTime, makes it possible to time not only the control logic (transistor-level digital circuits) but also paths through the memory core array (i.e., bit columns, word lines, column muxes, and sense amps), as illustrated in Figure 1. Using NanoTime for memory characterization helps to improve design turnaround time and verification coverage while maintaining accuracy to within ±5% of SPICE. NanoTime supports both memory compiler-based and instance-based memory characterization, and it does not require the netlist reduction techniques commonly practiced in the dynamic simulation approach. Another significant advantage of the STA approach with NanoTime is that the design team does not have to create vectors to perform the timing analysis. This alone saves tedious verification planning and processing time and reduces the potential for human error when generating stimulus for dynamic simulations.

Figure 2 illustrates a typical memory architecture design and characterization flow based on STA technology. In this flow, the design team uses NanoTime to identify timing violations in the memory designs and forwards the failing paths to SPICE/FastSPICE simulation to diagnose what went wrong.

Memory architecture design and characterization flow
Figure 2: Memory architecture design and characterization flow

Modeling teams also use SPICE and FastSPICE simulators to fine-tune the detailed characterization of selected critical timing paths for “golden” accuracy. The STA memory architecture design and characterization flow also allows users to quickly and accurately generate memory library models within ±5% of SPICE accuracy.

Similarly, for characterization of memory instances generated by a memory compiler, it is possible to establish the STA characterization flow to perform the tasks illustrated in Figure 3.

Instance-specific memory characterization flow for the IP users
Figure 3: Instance-specific memory characterization flow for the IP users

This flow is ideal for design teams that use memory IP because it allows them to carry out in-context, instance-specific memory characterization across various PVT corners without needing simulation vectors or pre-selected critical timing paths for analysis. The generated timing models can be in the form of CCS models or standard NLDM models, which implementation and analysis tools can then use for golden signoff.
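To make the NLDM model form above concrete, the sketch below shows how a downstream tool typically consumes such a model: delay is tabulated over input slew and output load, and intermediate points are bilinearly interpolated. The table values and axis points are invented for illustration; real Liberty tables carry vendor-characterized data.

```python
from bisect import bisect_right

# Invented NLDM-style table: delay[i][j] (ns) for slews[i] x loads[j].
slews = [0.01, 0.05, 0.10]        # input transition, ns
loads = [0.001, 0.005, 0.010]     # output capacitance, pF
delay = [
    [0.10, 0.14, 0.20],
    [0.12, 0.17, 0.24],
    [0.15, 0.21, 0.30],
]

def lookup(slew, load):
    """Bilinear interpolation over the 2-D delay table."""
    def seg(axis, x):
        # Pick the bounding segment and the fractional position within it.
        i = min(max(bisect_right(axis, x) - 1, 0), len(axis) - 2)
        t = (x - axis[i]) / (axis[i + 1] - axis[i])
        return i, t
    i, t = seg(slews, slew)
    j, u = seg(loads, load)
    lo = delay[i][j] * (1 - u) + delay[i][j + 1] * u
    hi = delay[i + 1][j] * (1 - u) + delay[i + 1][j + 1] * u
    return lo * (1 - t) + hi * t
```

CCS models replace each table entry with a current waveform rather than a single delay number, which is what buys the extra accuracy at advanced nodes.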

NanoTime also performs special SRAM setup/hold timing checks as part of the STA process to ensure accuracy. With traditional dynamic simulation, even simple setup and hold checks take a long time to complete, and designers cannot be confident that the simulation has exercised every check exhaustively. The STA approach provides that confidence, and it generates timing models quickly for full-chip SoC signoff with gate-level STA tools such as PrimeTime.
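As a reminder of what a setup/hold check computes, here is a minimal slack sketch. The function names and values are illustrative, not NanoTime's internals: a setup check requires data to arrive before the capture edge minus the setup requirement, and a hold check requires data to stay stable past the edge plus the hold requirement.

```python
# Slack conventions: positive slack = check passes, negative = violation.
def setup_slack(clock_period, data_arrival, setup_time, clock_skew=0.0):
    # Latest data arrival must beat the capture edge by the setup time.
    return (clock_period + clock_skew) - data_arrival - setup_time

def hold_slack(earliest_arrival, hold_time, clock_skew=0.0):
    # Earliest new data must not change before the hold window closes.
    return earliest_arrival - (hold_time + clock_skew)
```

An STA tool evaluates these inequalities for every bit cell and register from the propagated arrival times, which is why the exhaustive coverage comes essentially for free.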

Summary
The STA-based memory characterization flow using NanoTime has many benefits over traditional dynamic simulation approaches. It allows the user to perform memory characterization timing checks without needing simulation vectors, and with confidence that the verification coverage is complete. NanoTime’s standard features include automatic critical path identification, SPICE netlist extraction, SI crosstalk delay and noise analysis, and CCS timing and noise model generation for memories. It is designed to exploit memory core array regularity and abstraction to support large memories at the expected STA performance. Most importantly, the accuracy of the analysis is guaranteed to within ±5% of HSPICE.




About the Author
Ken Hsieh is a Product Marketing Manager for NanoTime and ESP-CV at Synopsys, Inc. He has 23 years of EDA experience, with the past 15 years focused on EDA marketing. Ken worked for EDA companies including Nassda, IKOS, Mentor Graphics, and Cadence in various application engineering and sales roles before joining Synopsys in 1998. Ken received his Bachelor of Science in Electrical Engineering from Texas A&M University and his Master of Engineering in Electrical Engineering from California State Polytechnic University, Pomona.

