Innovative Ideas for Predictable Success
      Volume 1, Issue 4



  Technology Update Spotlight
For SoCs, Size Matters
As 90 and 65nm processes move into the mainstream, it is becoming increasingly evident that one of the major drivers for adoption has been the volume and cost advantage offered by cramming more die onto a single wafer. But until now, there has been no way of assessing whether a design makes the most efficient possible use of the available silicon area, let alone of automating the process of reducing its area requirements. Bernadette Mortell, Marketing Manager for Galaxy Design Planning, Synopsys, explains how the new MinChip technology from Synopsys does just that.

The move to 90 and 65nm processes is now in full swing. The findings of Synopsys User Group (SNUG) surveys bear out figures from the wider industry suggesting that these two nodes now account for well over half of current-generation designs. For the next batch of customer projects, the figure looks likely to be nearer three-quarters.

Volume Benefits Drive 90 and 65nm Adoption
With the technologies established and accepted, it is already possible to take a step back and assess what has driven the move to these two geometries – a valuable exercise for a company such as Synopsys, wanting to know how its tools might be improved to help its customers achieve their goals. The answer in a nutshell is that, whilst performance and power consumption have been factors, the most common reason for 90 and 65nm adoption has been the volume benefit – the ability to create more die per wafer and hence cut costs.

In some ways, these survey results come as no surprise: immediate deployment in high-volume, cost-sensitive consumer applications has been another frequently observed feature of the use of these newer processes, and one that marks them out from previous nodes, which were more commonly used first on high-value, leading-edge products. Moreover, die size reduction can be used to mitigate some of the design-for-manufacture and design-for-yield issues common in today’s ICs: fitting more die on each wafer can reduce yield loss for a given defect rate per wafer. Finally, size reduction can also have a positive effect on power performance: in a smaller design, individual circuit elements are closer together, and physically closer to the power mesh, so unwanted voltage drops are reduced.
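
That yield point can be made concrete with the classic Poisson yield model, Y = e^(-D·A), in which yield falls exponentially with die area A for a given defect density D. The numbers below are hypothetical, chosen purely for illustration; this is a minimal sketch in Python, not anything drawn from Synopsys tools:

    import math

    def poisson_yield(defect_density_per_cm2, die_area_cm2):
        # Classic Poisson yield model: Y = exp(-D * A).
        return math.exp(-defect_density_per_cm2 * die_area_cm2)

    # Hypothetical defect density of 0.5 defects/cm^2, for illustration only.
    D0 = 0.5
    for side_mm in (7.0, 6.64):  # a 7 x 7mm die vs. one roughly 10% smaller in area
        area_cm2 = (side_mm / 10.0) ** 2
        print("%.2fmm die: predicted yield ~%.1f%%" % (side_mm, 100 * poisson_yield(D0, area_cm2)))

With these assumed numbers, the smaller die yields around 80 percent against roughly 78 percent for the larger one – a modest per-die gain that compounds with the extra die per wafer.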

Defining an Optimum Die Size
But until now, defining an optimum die size – and fitting a design within it – has been a manual process, and one which fits uneasily within the overall flow. The typical approach has been to take the design through floorplanning, trial routing, optimization and design closure, all the way to final routing. At that point, the design team may look at the results and assess whether they believe further size reductions are possible: the key measure is the level of overflow on the device’s global route. If a great deal of routing resource is still available – for instance, if the figures show 60-70 percent utilization – then it is very likely that improvements are possible.
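
As a rough sketch of that rule of thumb – the 60-70 percent utilization threshold is the only figure taken from the article; the function name and structure are illustrative assumptions:

    def shrink_worth_attempting(global_route_utilization):
        # Heuristic from the text: if the routed design consumes only
        # 60-70% of the available global-route resource, a smaller die
        # is likely achievable.
        return global_route_utilization <= 0.70

    # e.g. a design reporting 65% global-route utilization
    print(shrink_worth_attempting(0.65))  # True: a size reduction looks feasible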

Even given such a likelihood, many teams simply choose to tape out and produce a first pass of silicon at the initial die size, waiting until subsequent respins to achieve optimization. They reason that, deep into the project schedule, it is too big a decision to rebuild the design, a process that requires looping back through almost the entire flow. The risks are compounded by the absence of any certainty about what an attainable size might be. The team must come up with a “best guess” of the smallest size they think they can achieve; there is therefore a distinct possibility that any optimization they attempt will either go too far, or not far enough. This makes the size reduction process inherently iterative, and therefore time-consuming.

Setting Target Area
Nor does the traditional floorplanning process earlier in the flow offer a solution. At this stage the team typically comes up with a target area based on the package size that the chip must fit into, traded against previous experience of typical utilization levels for the particular manufacturing process that will be used. But in consumer markets, the tendency is to be conservative in setting the size: the overriding objective is to get to market in a timely fashion, and whilst a reduced die size is valuable, it cannot be achieved at the risk of making the chip unroutable and delaying the project.

Die size reduction has, therefore, remained a process that is ‘tacked on’ at the end of the flow as a ‘nice-to-do’. But despite the overhead and the risks, many design teams remain willing to take on the task, because a relatively small change can reap large rewards. A 7 x 7mm chip built on a 300mm wafer, for instance, produces 1,274 die: just a 10 percent reduction in die area increases that number to 1,415 die, a substantial gain.
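
The arithmetic behind those figures can be checked with a common gross-die-per-wafer approximation. This is a sketch only: the 4mm edge exclusion is our assumption, chosen because it roughly reproduces the article’s counts, and real numbers also depend on scribe-lane width and the stepper map:

    import math

    def gross_die_per_wafer(wafer_diameter_mm, die_w_mm, die_h_mm, edge_exclusion_mm=4.0):
        # Common approximation: DPW ~ pi*(d/2)^2/A - pi*d/sqrt(2*A),
        # where d is the usable wafer diameter and A is the die area.
        d = wafer_diameter_mm - 2 * edge_exclusion_mm
        a = die_w_mm * die_h_mm
        return round(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

    print(gross_die_per_wafer(300, 7.0, 7.0))   # 1274, matching the article's figure
    s = 7.0 * math.sqrt(0.9)                    # die side after a 10% area reduction
    print(gross_die_per_wafer(300, s, s))       # ~1421, close to the quoted 1,415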


Figure 1. Comparing Manual Die Size Reduction with a One-Pass MinChip Flow

The traditional process has therefore typically been to run scripts – customized for the specific design – that shrink the design after routing. After each script has been executed, an engineer needs to analyze and interpret the results to decide the next type of shrink script to run – or indeed whether the minimum size is being approached. This is not only time-consuming, but requires considerable skill and experience on the designer’s part.

MinChip Die Size Optimization
MinChip technology from Synopsys aims to automate the process of gaining those benefits, without the repeated iterations normally required. A new addition to the range of design planning and feasibility analysis features within the JupiterXT and IC Compiler tools, MinChip fits within the existing flow, working from an existing floorplan, and shrinking it whilst preserving features such as block shape, relative placements of macros and pin locations. The team creates a floorplan, performs a trial route and optimizes the design just as they would within the traditional flow. MinChip is then used before design closure and final routing (Figure 1).

By inserting a step at this point in the flow, MinChip is able to preserve not just the physical features of the design, but also the work that has already been done on timing closure and optimization. This approach also keeps the design team’s knowledge of those optimizations intact: they are already aware of issues such as the known congestion factors, and the routing requirements associated with implementing clocks and closing timing on the particular design in question.

“Under the hood”, MinChip runs a virtual placement and routing cycle, using engines based on the placer used for standard cell and macro placement in JupiterXT and IC Compiler. This is a very quick process, allowing many passes to be run. Built-in heuristics assess which die sizes are invalid, with techniques focused on speeding up the search and identifying the lowest possible bound for the die size as quickly as possible.
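
Synopsys does not publish MinChip’s internals, but the search described above can be sketched as a bisection over candidate die areas, under the assumption that routability is monotonic in area (if a given area routes, any larger area does too). The feasibility callback below stands in for the fast virtual place-and-route pass; everything else is an illustrative assumption:

    def min_feasible_area(area_lo, area_hi, is_routable, tol=0.01):
        # Bisect between a known-infeasible floor and a known-feasible area,
        # discarding invalid die sizes to converge on a lower bound quickly.
        if not is_routable(area_hi):
            raise ValueError("starting floorplan does not route")
        while (area_hi - area_lo) / area_hi > tol:
            mid = (area_lo + area_hi) / 2.0
            if is_routable(mid):
                area_hi = mid   # mid routes; try something smaller
            else:
                area_lo = mid   # mid fails; the answer lies above it
        return area_hi

    # Toy check: pretend anything at or above 40mm^2 routes.
    print(min_feasible_area(0.0, 49.0, lambda a: a >= 40.0))  # converges near 40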

Eliminating Manual Iterations
Although it adds a step to the design flow, MinChip executes die size reduction just once, and eliminates the iterative manual size reduction work at the end of the process. The cost is a run-time of less than nine hours for a one-million-instance design. Early customer experiences bear out the value of the new technology. For instance, one team working on a consumer automotive product found that a single pass of MinChip could replace two weeks – five iterations – of script-based manual size reduction work. They directly compared the results achieved on a real-world design with the two approaches and found that, whilst the time taken was substantially different, the quality of result was not.

Other customers have used MinChip to automatically verify that they are working to the smallest attainable die size, whilst still staying within the confines of the JupiterXT/IC Compiler design flow. They thus gain a low-overhead way of ensuring that they maximize their figures for gross die per wafer, power performance and yield enhancement.

Having a reliable indication of the smallest attainable die size has also highlighted the high variance in the quality of existing designs when measured by their area efficiency. As would be expected, mature designs that have already been iteratively optimized are substantially more silicon-efficient than newer designs in which little attention has been paid to die size. As a result, the actual area reductions achieved with MinChip on real designs have varied from 4 to 36 percent, with an average figure in the region of 9 percent.

Immediate Return on Investment
But the real benefits of MinChip are to be found in the increases in predictability and the reductions in engineering resource that it enables. With the inclusion of MinChip in the Synopsys flow, customers can be assured that all of their devices will tape out in the smallest possible silicon area, without the need to undertake an unpredictable iterative process at the end of the design flow. For high-volume consumer applications, this knowledge can rapidly produce a return on investment.


©2010 Synopsys, Inc. Synopsys and the Synopsys logo are registered trademarks of Synopsys, Inc. All other company and product names mentioned herein may be trademarks or registered trademarks of their respective owners and should be treated as such.



About Bernadette Mortell
Bernadette (Bernie) Mortell is general marketing manager for Design Planning and Chip Assembly products at Synopsys.

  WEB LINKS

-   JupiterXT

-   IC Compiler