Innovative Ideas for Predictable Success
      Volume 2, Issue 1



Technology Update Spotlight
A Practical Approach to Measuring IC Design Productivity
If a project goes beyond its predicted schedule, it directly impacts profitability. With this in mind, Synopsys' consulting and design services organization evaluated its own physical design processes across a broad spectrum of customer design projects to better understand how productivity can be measured and improved. Michael Solka, Director of Physical Design Methodology for Synopsys Professional Services, explains the key findings and how the resulting concepts can be applied within any design organization.

In order to analyze and improve productivity, it must first be measured. While this may seem like a straightforward requirement, on reflection there are a number of key issues and questions that must be addressed. What does “IC design productivity” actually mean? What exactly is meant by a productivity improvement of, say, 30 percent? Should productivity be measured in terms of lines of code, gate count, transistor count, or by the effort spent on a project?

Measuring Productivity
Gathering all the information required to measure productivity takes considerable effort. Realistically, within the context of a real project, can design teams afford this overhead? Even if someone is given the task of managing a productivity initiative, cooperation from the rest of the team is needed to gather metrics.

Understanding how productivity can be improved involves looking at the obstacles to productivity. A survey of a number of Synopsys customers revealed that timing, silicon closure and functional verification were consistently quoted as the top three project bottlenecks. Other critical problems related to the way that a project is run, including multi-site development issues, staffing, concurrent flow development and third party IP quality.

Design Characteristics and Resource Utilization
It is clear that any measurement of productivity on a chip design project should take all of the above issues into account. A simplistic complexity metric that relates just to transistors, gates, or lines of code is inadequate.

The chosen definition within Synopsys is a function of the quantity and quality of units produced, as well as the labor per unit of time. In an IC design context, this can be expressed as ƒ(design characteristics)/ƒ(resource utilization), where the design characteristics numerator takes both quality and quantity into account.

To determine efficiency of resource utilization, factors such as CPU execution time, CPU type, memory usage and EDA tool usage must be considered. People are obviously a key resource, so it’s important to look at the number of days that they work on a project, the types of task they are working on, and the standard milestones achieved.

There are many design characteristics that can be measured. Total negative slack, worst negative slack, area utilization, power dissipation, instance count, clock latency and clock skew all determine the ‘health’ and arguably the maturity of the design. Other characteristics determine the physical complexity of the design: size, frequency, process technology, application attributes and IP content.

‘Design health’ maps to the quality part of the definition and can be assessed to achieve predictability. Quantity, on the other hand, requires more thought: what is really needed is a metric that allows different projects to be compared. Deriving a quantitative value for relative complexity (a complexity factor) requires a number of key metrics to be combined and normalized in a way that produces a linear relationship between complexity and productivity.

The productivity formula becomes:

    productivity = (design health and chips of a given complexity) / (resource utilization)
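
As a minimal sketch of how this formula might be evaluated, the snippet below treats the numerator as the product of a health score and a complexity factor, and counts resource utilization in staff-days. The names, scales, and the multiplicative combination are illustrative assumptions, not the actual Synopsys model:

```python
# Illustrative evaluation of the productivity formula. The 0-to-1 health
# scale, the complexity factor, and staff-days as the resource unit are
# all assumptions made for this sketch.

def productivity(health_score: float, complexity_factor: float,
                 staff_days: float) -> float:
    """Chips of a given complexity, weighted by design health,
    per unit of resource expended."""
    return (health_score * complexity_factor) / staff_days

# Example: a design with complexity factor 3.2, at 90% health,
# completed in 400 staff-days.
print(productivity(health_score=0.9, complexity_factor=3.2, staff_days=400.0))
```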

Design Health
So how can each item in the formula be measured? As far as design health is concerned, there are physical-related factors, which include size, utilization, DRC status, number of nets, number of pins and number of instances. Timing-related factors include number of clocks, TNS, WNS, clock skew, clock latency and number of violations. Power-related factors include IR drop/rise and dissipation.
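
These factors lend themselves to a simple structured record. The grouping below mirrors the factors listed above; the field names and units are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class DesignHealth:
    """One snapshot of the design-health factors named in the article.
    Field names and units are illustrative assumptions."""
    # Physical-related factors
    size_mm2: float
    utilization_pct: float
    drc_violations: int
    num_nets: int
    num_pins: int
    num_instances: int
    # Timing-related factors
    num_clocks: int
    tns_ns: float            # total negative slack (TNS)
    wns_ns: float            # worst negative slack (WNS)
    clock_skew_ps: float
    clock_latency_ps: float
    timing_violations: int
    # Power-related factors
    ir_drop_mv: float
    power_dissipation_w: float
```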


Figure 1: Metrics Capture Environment

All of these are metrics that can be captured within the design environment. To keep the measurement overhead low, metrics capture capabilities are built into the Synopsys Pilot Design Environment at each point in the design process: synthesis, design for test (DFT), design planning, place and route, and chip finishing. The metrics are recorded in a database, to allow reporting within a project context. The metrics can also be aggregated within a global database, which enables us to examine trends across a number of projects. By automating the measurement within Pilot, the burden of manual data collection is removed for chip designers.
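
As a hedged sketch of what automated capture at each flow step could look like (the schema, step names, and capture hook are assumptions made here, not the Pilot implementation):

```python
import sqlite3
import time

# Flow steps at which metrics are captured, per the article.
FLOW_STEPS = ["synthesis", "dft", "design_planning", "place_route",
              "chip_finishing"]

db = sqlite3.connect("metrics.db")
db.execute("""CREATE TABLE IF NOT EXISTS metrics (
    project TEXT, step TEXT, name TEXT, value REAL, captured_at REAL)""")

def capture(project: str, step: str, metrics: dict) -> None:
    """Record one snapshot of design metrics for a flow step."""
    assert step in FLOW_STEPS, f"unknown flow step: {step}"
    now = time.time()
    db.executemany(
        "INSERT INTO metrics VALUES (?, ?, ?, ?, ?)",
        [(project, step, name, value, now) for name, value in metrics.items()])
    db.commit()

# Example: snapshot taken automatically after place and route.
capture("project_a", "place_route", {"tns_ns": -12.4, "wns_ns": -0.31})
```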

Example: Metrics for Design Health
By considering a metric such as total negative slack, a key parameter in gauging the health of a design, relative health can be compared across a number of projects. Metrics can be tracked that show how well each project is converging towards meeting the timing of the design (Figure 2). Project A started off with a very large negative slack, which decreases predictably towards the end of the project in time for tapeout. Project B is out of control; it is difficult to tell where things are going, so tapeout predictability is poor. Project C suffers from a lack of convergence; again, it is difficult to tell whether tapeout is possible. By contrast, project D has positive slack, and tapeout is not likely to be held up by timing.
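
The same comparison can be automated. The sketch below classifies a project's weekly TNS history (most recent value last) into the patterns described above; the rules and labels are assumptions chosen for the example:

```python
# Classify a weekly TNS trend (values in ns, most recent last) into the
# convergence patterns of Figure 2. The rules are illustrative assumptions.

def timing_status(tns_history: list) -> str:
    latest = tns_history[-1]
    if latest >= 0.0:
        return "timing met: tapeout unlikely to be held up by timing"
    if (len(tns_history) >= 3
            and tns_history[-1] > tns_history[-2] > tns_history[-3]):
        return "converging: negative slack shrinking steadily"
    return "at risk: no steady convergence, tapeout predictability is poor"

print(timing_status([-50.0, -35.0, -18.0, -6.0]))   # project A pattern
print(timing_status([-30.0, -45.0, -12.0, -38.0]))  # project B pattern
print(timing_status([-4.0, -1.5, 0.0]))             # project D pattern
```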


Figure 2: ASIC Total Negative Slack

By comparing the four projects in this way, the team working on project C can learn from project D and take corrective action at an earlier stage. Identifying and monitoring key trends over time enables these decisions to be made.

Resource Utilization
Measuring the number of staff-days per unit of time devoted to a project provides a snapshot of who is working and, importantly, which activities they are working on over the course of the project. In Figure 3 below, each unit of staff effort is color-coded by the part of the design process it belongs to: set-up, IP qualification, floor planning, physical synthesis, routing, analysis, physical verification and project management. The height of each bar represents the amount of effort expended during a particular time period (in this case, one week). Some important milestones have also been mapped onto the figure.


Figure 3: Resource Utilization

Again, monitoring resource utilization allows trends to be identified within projects. This information provides a closed-loop system to predict the needs of future projects – one that incorporates overhead tasks as well as tasks which are integral to the design process. The database shows where similar projects and tasks can benefit from the same resource utilization, and where changes are necessary.
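
A small sketch of how effort records could be aggregated by week and activity, as in Figure 3 (the record format and numbers are invented for illustration):

```python
from collections import defaultdict

# (week, activity, staff_days) records, e.g. exported from a timesheet tool.
# Activity names follow the article; the data is made up for the example.
records = [
    (1, "set-up", 4.0), (1, "floor planning", 6.0),
    (2, "physical synthesis", 9.0), (2, "analysis", 3.0),
    (3, "routing", 8.0), (3, "project management", 2.0),
]

effort = defaultdict(lambda: defaultdict(float))
for week, activity, days in records:
    effort[week][activity] += days

for week in sorted(effort):
    total = sum(effort[week].values())
    breakdown = ", ".join(f"{a}: {d:g}" for a, d in effort[week].items())
    print(f"week {week}: {total:g} staff-days ({breakdown})")
```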

An obstacle to measuring resource utilization is that it requires a lot of discipline. It is important to track and categorize effort in real time, because once the project is over it is very difficult to reconstruct how the time was spent. There are further obstacles: analysis of the data is an overhead in itself, and there may be no immediate payoff; resource utilization planning is a long-term investment.

Chips of a Given Complexity
ASIC size, operating frequency and process technology are simple to measure: one million gates is easier to design than five million, 100MHz is easier to achieve than 500MHz, and 90nm designs are more difficult than 130nm designs. These are relatively simple comparison points. However, other factors make it difficult to determine and compare the complexity of different projects. The number of sources of IP is a major determinant of the complexity of physical design projects. Details relating to the particular application of the design are also relevant: how congested it is, whether it is flip-chip or wirebond, whether it has extreme low power requirements, and so forth.

This complexity factor is used to compare, scope and evaluate projects. It has been tuned for our own productivity-focused objectives within our design services organization. One important requirement is that the complexity measurement should be lightweight and easy to use: it should be possible to determine the complexity factor for any given project within a 30-minute interview.
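
As an illustration of how such a lightweight factor might be composed, the sketch below combines the attributes discussed in this article (gate count, frequency, process node, IP sources, application attributes). The baselines and weights are invented for the example, not the tuned Synopsys values:

```python
# Illustrative complexity factor. Baselines (1M gates, 150MHz, 130nm) and
# all weights are assumptions; the tuned Synopsys values are not public.

def complexity_factor(gates_m: float, freq_mhz: float, process_nm: int,
                      ip_sources: int, flip_chip: bool,
                      low_power: bool) -> float:
    score = 1.0
    score *= gates_m / 1.0            # normalize to a 1M-gate baseline
    score *= freq_mhz / 150.0         # normalize to a 150MHz baseline
    score *= 130.0 / process_nm       # smaller nodes are harder
    score *= 1.0 + 0.1 * ip_sources   # each IP source adds integration effort
    if flip_chip:
        score *= 1.2                  # flip-chip packaging overhead
    if low_power:
        score *= 1.3                  # extreme low-power requirements
    return score

# Bottom-left of Figure 4: flat ~1M-gate, 130nm, <150MHz design.
print(complexity_factor(1.0, 150.0, 130, 2, False, False))
# Top-right of Figure 4: ~5M gates, 400MHz, many IP sources, special flows.
print(complexity_factor(5.0, 400.0, 130, 8, True, True))
```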

The complexity factor of a project can be plotted against total project effort to show the correlation between the two. In order to understand these trends across projects, it is necessary to determine a method for normalizing the project analysis.
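
For example, a linear fit across historical projects gives both a check on the normalization and a simple effort predictor; the data points below are invented for illustration:

```python
import numpy as np

# Invented (complexity factor, total staff-days) pairs from past projects.
complexity = np.array([1.0, 1.8, 2.5, 4.2, 6.0])
effort = np.array([120.0, 210.0, 300.0, 520.0, 700.0])

# Fit effort as a linear function of normalized complexity.
slope, intercept = np.polyfit(complexity, effort, 1)
print(f"~{slope:.0f} staff-days per unit of complexity (offset {intercept:.0f})")

# Scope a new project of complexity 3.0 from the trend line.
print(f"predicted effort: {slope * 3.0 + intercept:.0f} staff-days")
```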


Figure 4: Effort vs. Normalized Complexity

To illustrate the range of complexities, at the bottom-left of the scale (Figure 4) could be a flat design of approximately one million gates at 130nm, with a predominant operating frequency below 150MHz. At the top-right of the scale, the design is still 130nm, but is now perhaps five million gates with a predominant operating frequency of 400MHz; there are probably special I/O interfaces, a large number of clocks, and inevitably a lot of manual work and special flow requirements.

Using Complexity and Productivity Data
With access to complexity trends, resource analysis and health metrics, bottlenecks become apparent. Take the issue of multiple sources of IP: qualifying IP can sink a lot of project time. Putting QA procedures in place to analyze IP as it arrives enables problems to be discovered early, rather than in the last couple of days before tapeout. This positively impacts both productivity and predictability.

Improving resource utilization is another opportunity. The CPU and memory usage of each job is captured, which means that future jobs can be targeted at the right CPUs with the right amount of memory, ensuring an efficient mapping between the tool application and the available compute resources. Engineers' training requirements can also be identified more accurately when there is an accurate understanding of how long tasks take at different parts of the design process.
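
A sketch of how the captured usage data could drive that job-to-machine mapping, with host names, memory sizes, and job profiles invented for the example:

```python
# Map tool jobs to machines using peak memory observed on past projects.
# Host names, memory sizes, and job profiles are illustrative assumptions.

machines = {"host_a": 32, "host_b": 64, "host_c": 256}  # memory in GB

# Peak memory (GB) captured for each job type on previous runs.
job_profiles = {"synthesis": 24, "place_route": 110, "sta": 48}

def pick_machine(job: str) -> str:
    """Choose the smallest machine whose memory covers the job's peak usage."""
    need = job_profiles[job]
    fits = [(mem, host) for host, mem in machines.items() if mem >= need]
    if not fits:
        raise RuntimeError(f"no machine large enough for {job} ({need} GB)")
    return min(fits)[1]

for job in job_profiles:
    print(job, "->", pick_machine(job))
```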

Ultimately all of this leads to better overall project predictability. Understanding the critical design obstacles enables prediction of the holy grail of design metrics – when is the chip going to tapeout?

Summary
When Synopsys started looking closely at productivity, it was apparent that design productivity was influenced by a number of factors beyond the characteristics of the specific design itself. These include the impact of new tool releases, the use of faster computers, the skills and training of the design team, and project management practices.

Assessing the potential for productivity improvement is clearly possible, but understanding what to measure is critical. Measurement must be a lightweight, low-overhead process, because it has to be done in real time, capturing the characteristics of both the design and its resource utilization as the project progresses. For Synopsys, normalizing the analysis based on chips of a given complexity provides a basis for predictability and the ability to track improvement over time.


©2010 Synopsys, Inc. Synopsys and the Synopsys logo are registered trademarks of Synopsys, Inc. All other company and product names mentioned herein may be trademarks or registered trademarks of their respective owners and should be treated as such.



Michael Solka
Michael is Director of Physical Design Methodology for Synopsys Professional Services. He has over 20 years of experience in the semiconductor industry. He has worked in engineering, marketing and management roles at Motorola, Advanced Micro Devices, Ross Technology, and was president of The Silicon Group, a design services company in Austin, Texas. He graduated from The University of Texas with a bachelor’s degree in electrical engineering and from The Wharton School with a master’s degree in business administration.

WEB LINKS
- Synopsys Professional Services

Full Productivity White Paper
This white paper examines IC design productivity from the perspective of the design organization to clarify the key factors affecting design team productivity. Based on insights gained from hundreds of design projects conducted by Synopsys Professional Services and productivity measurements gathered on dozens of chip tapeouts, the productivity analysis presented here outlines a practical methodology for measuring, comparing, and improving IC design productivity.