Innovative Ideas for Predictable Success
      Volume 2, Issue 4



Technology Update Spotlight
Achieving Higher Yielding Silicon
As leading-edge semiconductor fabrication has moved to 65nm and 45nm, IC designers have become increasingly aware that their designs must fit the characteristics of the manufacturing process they are targeting. Dan White, Product Marketing Manager at Synopsys, describes three key yield issues and potential remedies for producing higher-yielding designs.

As leading-edge semiconductor fabrication has moved to 65nm and 45nm, IC designers have become increasingly aware that their designs must fit the characteristics of the manufacturing process they are targeting. Previous generations of designers could be fairly certain that, if they stuck to the design rules, their chip would work. This is no longer true at technology nodes below 130nm, and the concept of design for manufacture (or design for yield) has become firmly established.

A successful DFM strategy needs to mitigate three primary sources of yield loss: lithographic sensitivity; poor surface planarity; and the perennial problem of random particle defects. All of these are challenges that can be addressed. But to do so requires a high degree of information sharing between design and manufacturing, as well as the right design automation tools to analyze, prioritize and fix potential problems.

The essential approach is first to analyze the design to understand the effects of the three primary yield-affecting concerns. Analysis results can then be fed back into the design flow, allowing the design team to implement the changes necessary to enhance yield. The flow must help ensure that these fixes neither create new yield issues in a different domain nor adversely affect timing, power, signal integrity or other critical design metrics such as chip area. To achieve all this, it is certainly useful to have a flow that employs the same core technologies across the design and manufacturing disciplines.

Tackling Lithography Issues
One of the most challenging aspects of producing today’s leading-edge semiconductor devices is lithography. The equipment used today to print the fine features of a 65nm design simply does not deliver the necessary resolution. As a countermeasure, resolution enhancement techniques (RET) are widely used to improve design printability. Even with such techniques, however, design patterns commonly become distorted under some process conditions.

Put simply, the consequence is that the lithography process may not print the device features as expected. This in turn may change the electrical behavior of the circuit. Typical problems include pinching in metal or polysilicon lines, which in some cases can be sufficiently severe to cause a break in the line and hence an open circuit. In other cases, poor lithography can cause contacts or vias to become uncovered. Bridges between two layout structures can be equally devastating.


Figure 1: Layout and SEM view for the same structure: a pinch condition can be seen in an area where the design rules were met

More likely than a catastrophic defect, however, is the possibility that changes in the printed features will cause variations in circuit performance. Parameters such as transistor leakage, switching delay, metal capacitance and timing are adversely affected by variations in printed silicon. Such degradation in circuit performance and power consumption can lead to parametric yield loss.

Lithography Correction
The first DFM step in correcting for lithography-related losses is to understand where the potential trouble spots lie. This is done by applying a process simulation to a DRC-clean design layout and scoring the results. Such a simulation aims to match the golden mask synthesis flow, reproducing each step in the RET process and producing a result that accurately reflects the actual production process. The simulation output is a report of design hotspots, with an option to review the locations in a layout editor. Utmost accuracy is required to ensure that the tool finds all of the potential yield-affecting features while avoiding false positives.
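To make the scoring step more tangible, the sketch below flags locations where a simulated printed width drops well below the drawn width (a pinch candidate) or where printed spacing collapses (a bridge candidate), and ranks them by severity. The Segment structure, thresholds and function names are illustrative assumptions only; a production analysis tool works on full simulated contours across multiple process corners rather than per-segment width numbers.

```python
# Hypothetical sketch of hotspot scoring; not an actual Synopsys flow.
# Thresholds and data structures are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class Segment:
    x: float             # location of the measurement site (um)
    y: float
    drawn_width: float   # width in the layout (nm)
    sim_width: float     # width predicted by litho simulation (nm)
    sim_space: float     # simulated spacing to the nearest neighbor (nm)

PINCH_RATIO = 0.7        # flag if printed width < 70% of drawn width (assumed)
BRIDGE_SPACE = 10.0      # flag if printed spacing drops below 10 nm (assumed)

def score_hotspots(segments):
    """Return (severity, kind, segment) records, worst offenders first."""
    hotspots = []
    for seg in segments:
        if seg.sim_width < PINCH_RATIO * seg.drawn_width:
            hotspots.append((1.0 - seg.sim_width / seg.drawn_width, "pinch", seg))
        if seg.sim_space < BRIDGE_SPACE:
            hotspots.append(((BRIDGE_SPACE - seg.sim_space) / BRIDGE_SPACE, "bridge", seg))
    return sorted(hotspots, key=lambda h: h[0], reverse=True)
```

Ranking the hotspots by severity lets the design team, or the router described below, prioritize the most serious sites first.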

A typical simulation may identify dozens of hotspots. Correction guidance, in the form of documented remedies for common hotspot types or annotations viewable in the layout editor, can help the designer understand the recommended fixes for identified hotspots. A better approach is to automate interconnect corrections within a place-and-route tool. In this flow, the analysis tool works in the background and feeds information on hotspots, together with guidance on how to fix them, back to the router. Corrections are applied automatically by the router within the normal place-and-route flow, greatly simplifying the hotspot correction process.

Wafer Planarization
While lithography problems have become more common at 90nm and below, wafer planarization first became a major issue at the 130nm process node, when many semiconductor companies switched from aluminum to copper interconnects. This entailed a complete change in the process used for metallization. The aluminum process involves depositing the metal and then etching it to create the interconnect lines. A further deposition of interlayer dielectric (ILD), followed by planarization, completes the process.

Copper, in contrast, is applied by first depositing the ILD, then etching the interconnect pattern into it and electroplating copper into the resulting trenches. The excess copper is removed in a chemical-mechanical polishing (CMP) step.

This process sequence for copper can result in variations in wafer surface height in areas where the metal density is not uniform. The CMP step itself can produce dishing in the dielectric, accentuating the variations in surface height. There is also a danger of erosion in wide metal lines, which causes too much copper to be removed. The results are electrical variation and lithographic depth-of-focus problems, both caused by the surface height variations.

As with lithographic effects, such uncertainties may produce catastrophic functional failures or parametric yield loss. Increased interconnect resistance may lead to timing variability that lowers the attainable operating frequency, or to internal timing violations that cause functional failure.

The best cure for planarization problems is to build corrective action into the design flow itself. Once again this requires an automated analysis. The planarization analysis tool subdivides the design into regions, from which it extracts parameters such as overall metal density and perimeter. This data is then used to create a model for the surface height of each region. The tool produces a wafer surface profile that identifies areas of excess surface height variation, and a heat map to show the design team where problem areas exist.
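Conceptually, that analysis can be pictured as in the sketch below: the layout is tiled into regions, metal density and perimeter are accumulated per tile, a simple model turns those parameters into an estimated surface height, and tiles that deviate too far from the chip mean are flagged. The tile size, height model and coefficients are placeholders chosen for illustration; real CMP models are calibrated to the foundry process and are far more detailed.

```python
import math

# Illustrative tile-based planarity analysis; the linear height model and its
# coefficients are placeholders, not a real CMP model.

def tile_metrics(rectangles, chip_w, chip_h, tile=50.0):
    """Split the chip into tile x tile regions (um) and accumulate metal area
    and perimeter per region. rectangles: (x0, y0, x1, y1) metal shapes in um.
    For simplicity each rectangle is credited to the tile of its lower-left corner."""
    nx, ny = math.ceil(chip_w / tile), math.ceil(chip_h / tile)
    area = [[0.0] * nx for _ in range(ny)]
    perim = [[0.0] * nx for _ in range(ny)]
    for x0, y0, x1, y1 in rectangles:
        i, j = min(int(x0 // tile), nx - 1), min(int(y0 // tile), ny - 1)
        area[j][i] += (x1 - x0) * (y1 - y0)
        perim[j][i] += 2 * ((x1 - x0) + (y1 - y0))
    density = [[a / (tile * tile) for a in row] for row in area]
    return density, perim

def modeled_height(density, perim, k_density=100.0, k_perim=0.01):
    """Toy surface-height model: height grows with local density and perimeter."""
    return [[k_density * d + k_perim * p for d, p in zip(drow, prow)]
            for drow, prow in zip(density, perim)]

def planarity_hotspots(heights, limit=20.0):
    """Report tiles whose modeled height deviates from the chip mean by more than 'limit'."""
    flat = [h for row in heights for h in row]
    mean = sum(flat) / len(flat)
    return [(j, i) for j, row in enumerate(heights)
            for i, h in enumerate(row) if abs(h - mean) > limit]
```

The per-tile height values are exactly the data needed to build the heat map mentioned above.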

Planarity Correction
Correcting for planarity issues is usually achieved by inserting dummy metal fill to produce a more uniform metal distribution. A design rule checking (DRC) tool is used, driven by a set of CMP design rules that constrain the fill algorithm. Simple fill algorithms may apply the extra metal in a single pass, while more complex strategies make use of a library of fill patterns, and perform several passes using successively smaller fill shapes to achieve the desired result.
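A multi-pass strategy of the kind described above might look roughly like the sketch below, where progressively smaller fill shapes are inserted until a region reaches its density target. The shape sizes, density target and slot counts are invented for illustration and do not represent any foundry's CMP rules.

```python
# Illustrative multi-pass dummy-fill loop. Real fill engines place shapes from a
# foundry-qualified pattern library and obey the full DRC spacing rules.

FILL_SIZES = [5.0, 2.0, 1.0, 0.5]   # fill square edge lengths in um (assumed)
TARGET_DENSITY = 0.45               # desired metal density per tile (assumed)

def fill_tile(tile_area, metal_area, free_slots):
    """Add dummy squares of decreasing size until the tile reaches the density
    target or runs out of legal locations.
    free_slots: dict mapping fill size -> number of legal placement sites."""
    added = []
    for size in FILL_SIZES:                     # coarse shapes first, then refine
        while metal_area / tile_area < TARGET_DENSITY and free_slots.get(size, 0) > 0:
            metal_area += size * size
            free_slots[size] -= 1
            added.append(size)
    return metal_area / tile_area, added

# Example: a 50 x 50 um tile starting at 30% density with a few legal fill sites.
final_density, shapes = fill_tile(2500.0, 750.0, {5.0: 10, 2.0: 40, 1.0: 100, 0.5: 400})
```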

Although this technique can be effective at achieving good planarity, it does not dynamically account for the impact of the extra metal’s capacitance on circuit timing. Once again, the answer is to make the correction process part of the normal routing flow. The router performs timing-driven, model-based metal fill, with the analysis tool working in the background to generate the information the router needs to create an optimal fill strategy. The analysis tool can also be used to validate the design for planarity and remains in place to flag potential issues to the design team.

Random Particle Defects
Random particle defects are perhaps the best understood of all semiconductor manufacturing problems, having driven the need for cleaner facilities with increasingly stringent particle defect requirements over many process generations. In fact, for most of the lifetime of the semiconductor industry, particle defects have represented the primary source of yield loss.

Particle defects produce yield loss when a particle contaminates the wafer surface, either bridging two structures to create a short circuit or severing a physical net to create an open circuit. The yield impact of these defects depends on two factors. The first is the defect density (DD): the number of defects of a given size per unit area. This is essentially determined by the “cleanliness” of the process; as such, it is beyond the control of the designer and can only be addressed by improvements at the manufacturing facility.

Where the designer can have an impact is on the critical area (CA) of the design. In yield analysis, CA is simply a design-specific quantity that links DD to its statistical impact on yield. But CA also has a more practical interpretation: any design contains areas where a particle may land without adverse impact on the function of the chip, and other areas where the occurrence of a particle will cause a short- or open-circuit defect.

CA can be viewed as the summation, over the range of particle sizes, of the design areas that can be affected by particles of each size. By reducing the areas of the design that are vulnerable to failure caused by random particles, the designer can contribute to increased yield.
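A commonly used formulation, given here as general background rather than a description of any specific tool, makes that summation explicit. If A(r) is the area of the design sensitive to a defect of size r, f(r) is the normalized defect size distribution and D0 is the total defect density, then

\[
A_{\mathrm{crit}} \;=\; \int_{0}^{\infty} A(r)\, f(r)\, dr,
\qquad
Y_{\mathrm{random}} \;\approx\; e^{-D_0 \, A_{\mathrm{crit}}}
\]

where the second expression is the familiar Poisson yield model. The distribution f(r) is often approximated as falling off roughly as 1/r^3 above the minimum feature size, which is why sensitivity to the smallest defects dominates the result.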

Predicting Yield Loss due to Random Particle Defects
Enabling yield improvements requires an analysis of the design’s CA characteristics, which can then be used, along with known DD data, to predict yield loss. CA can be reduced by widening interconnect lines to make open circuits less likely and by increasing track spacing to reduce the possibility of shorts. Double vias can also be incorporated to introduce interconnect redundancy.
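As a toy illustration of how extracted CA values combine with DD data, the sketch below evaluates a simple Poisson yield model summed over layers and fault types. The layer names and numbers are invented; real signoff uses foundry-supplied defect statistics and far more detailed models.

```python
import math

# Hypothetical per-layer critical areas (cm^2) and defect densities (defects/cm^2).
# All values are invented for illustration.
layers = {
    "metal1": {"ca_short": 0.020, "ca_open": 0.015, "dd": 0.10},
    "metal2": {"ca_short": 0.012, "ca_open": 0.010, "dd": 0.08},
    "via1":   {"ca_short": 0.000, "ca_open": 0.006, "dd": 0.05},
}

def random_defect_yield(layers):
    """Poisson model: Y = exp(-sum(CA_i * DD_i)) over all layers and fault types."""
    lam = sum((v["ca_short"] + v["ca_open"]) * v["dd"] for v in layers.values())
    return math.exp(-lam)

print(f"Predicted random-defect-limited yield: {random_defect_yield(layers):.4f}")
```

Because the exponent is a sum over layers, reducing the critical area on any single layer (by spreading, widening or via doubling) directly improves the predicted yield.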

The primary trade-off in applying wire spreading and wire widening to a design is a potential increase in total chip area. Increased litho sensitivity due to additional jogs in the routing lines, as well as the impact on timing, should also be considered. Once again, it can be difficult to make the appropriate improvements manually, and designers need to rely on yield-aware routing tools that can balance the impact of wire spreading, wire widening and via doubling on timing, power consumption and area.


Figure 2: Layout view illustrating critical area before (left) and after (right) wire spreading and widening. There is a significant decrease in critical area on the right.

Design for yield is now an established part of the workflow for chip designers working at leading-edge technology nodes. The complexity of the problems and the non-intuitive nature of their solutions, however, mean that engineering teams must lean heavily on their design automation environment to achieve good results. By deploying a well-integrated design flow with strong information-sharing capabilities, and in particular routing technology that is DFM-aware, designers can maximize yield while continuing to produce more complex, feature-rich designs.


©2010 Synopsys, Inc. Synopsys and the Synopsys logo are registered trademarks of Synopsys, Inc. All other company and product names mentioned herein may be trademarks or registered trademarks of their respective owners and should be treated as such.




Dan White
Dan White is a Product Marketing Manager in the Physical Design and Verification group at Synopsys. Dan has more than 15 years of experience in various aspects of yield management, yield improvement and test.

  WEB LINKS

-   Synopsys Design For Manufacturing Product Family