Compiler






The Economics of Scan Test Compression
Test costs can be reduced by compressing the scan ATPG pattern set, with no reduction in fault coverage. Chris Allsup, Marketing Manager of Test Automation Products, Synopsys, explains the technology and economics behind this approach, and considers the factors that determine how much scan compression is needed for today’s designs.

In an era of multimillion-gate complexity and increasing density of nanometer manufacturing defects, a key challenge today is creating the highest quality deep submicron (DSM) manufacturing tests in the most cost-effective manner possible. In an effort to contain costs at the tester, designers have begun to embrace a new DFT methodology known as scan compression. This approach utilizes on-chip circuitry to "compress" the scan ATPG pattern set without otherwise compromising its fault coverage. Scan compression technology has emerged at just the right time, offering designers the promise of reduced tester costs with negligible impact on design performance, limited silicon overhead, and relatively minor engineering resources needed to implement compression on-chip.

The Requirements of Compression
To understand the motivation for test compression, it is useful to examine two distinct metrics related to the generation of high-quality manufacturing tests: test application time and test data volume. The first, test application time, is the time it takes to execute the test patterns on a tester on a per-die basis. Semiconductor firms may utilize compression to reduce test application time if they are sensitive to test execution costs, which depend on both the test application time and the cost per unit time of the tester. The second metric, test data volume, is the amount of stored data the tests require. Companies want to reduce test data volume to the point that it fits within the dynamic memory of the tester, since this avoids the time-consuming, and therefore expensive, operation of halting the testing of parts to load more test data into memory to execute the remaining tests. Further reductions in test data volume are needed if these companies wish to further increase test quality by applying substantially more test patterns.
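As a rough, back-of-the-envelope illustration of how these two metrics behave, the short Python sketch below approximates both for a conventional (uncompressed) scan design. All of the numeric values are hypothetical and chosen only to make the arithmetic concrete; they are not drawn from the cost model discussed below.

    # First-order approximations of the two test metrics, assuming a
    # conventional (uncompressed) scan architecture. Values are illustrative.
    num_patterns  = 10_000       # ATPG patterns
    scan_cells    = 1_000_000    # total scan flip-flops in the design
    scan_chains   = 32           # external scan chains
    shift_freq_hz = 50e6         # scan shift clock

    chain_length = scan_cells / scan_chains
    # Each pattern shifts in stimulus while shifting out the previous response.
    test_time_s = num_patterns * (chain_length + 1) / shift_freq_hz
    # Both stimulus and expected response data must be stored on the tester.
    data_volume_bits = 2 * num_patterns * scan_cells

    print(f"Test application time ~ {test_time_s:.1f} s per die")
    print(f"Test data volume      ~ {data_volume_bits / 8 / 2**30:.1f} GiB")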

Most design teams in fact want compression solutions that provide both test application time reduction (TATR) and test data volume reduction (TDVR). But how much of each is really needed? To find out, the benefits and limitations of increased TATR and TDVR are examined for organizations seeking to add compression to their existing DFT design flows. In the analysis that follows, references are made to a comprehensive test cost model developed by Carnegie Mellon University (CMU) researchers for use by SEMATECH consortium companies, augmented here to take the effects of compression into consideration. The model breaks down the total cost of testing ICs into several distinct categories: test execution, DFT implementation, silicon area overhead of DFT, and imperfect test quality (i.e., the risk of faulty parts escaping into the field) [1].
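For reference, the structure of the model can be summarized as a simple sum of the four categories. The minimal sketch below uses the category names from the article, but the weights are purely illustrative placeholders, not values from the published model.

    # Minimal sketch of the cost breakdown described above; the example weights
    # are illustrative placeholders, not figures from the referenced model.
    def total_test_cost(test_execution, dft_implementation,
                        silicon_overhead, imperfect_quality):
        """Total cost of test as the sum of the four categories in the model."""
        return test_execution + dft_implementation + silicon_overhead + imperfect_quality

    # Example: normalized cost contributions for a hypothetical design.
    print(total_test_cost(test_execution=0.40,
                          dft_implementation=0.10,
                          silicon_overhead=0.05,
                          imperfect_quality=0.45))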

TATR Requirements
Firms wish to minimize tester costs by reducing the time it takes to apply tests to each die. Consider the case of implementing scan compression to improve TATR relative to the test application time of regular scan: 20x compression reduces test time by 95%. However, increasing compression from 20x to 50x further reduces test time by only 3%, and increasing it from 50x to 100x reduces it by only 1% more (Figure 1).
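These percentages follow directly from assuming that scan shift time scales inversely with the compression factor; the brief sketch below reproduces the figures quoted above under that assumption.

    # Back-of-the-envelope check of the diminishing returns cited above, assuming
    # scan shift time dominates and scales inversely with the compression factor.
    def tatr(compression):
        """Test application time reduction relative to regular scan."""
        return 1.0 - 1.0 / compression

    for c in (20, 50, 100):
        print(f"{c}x compression -> {tatr(c):.0%} reduction")
    # 20x  -> 95% reduction
    # 50x  -> 98% (only 3 points more than 20x)
    # 100x -> 99% (only 1 point more than 50x)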

Figure 1. Test Application Time Reduction vs. Compression

Since test execution cost is just one of several cost categories, beyond 10-20x compression the incremental decrease in test execution cost translates into an even smaller percentage decrease in total test costs (Figure 2). The model reflects the low additional implementation effort and silicon overhead associated with deploying compression using Synopsys' Adaptive Scan synthesis technology. It is noteworthy that for all scan compression implementations, the total cost of test actually increases beyond a certain compression level, so the implementation and silicon-overhead costs must be well contained.

Figure 2. Costs of Test vs. Compression

TDVR Requirements
If semiconductor firms are highly sensitive to the cost of field escapes, they will generate more test patterns to target more silicon defects and utilize compression to reduce the increased volume of test data so that the size of the entire test pattern set can be stored in tester memory. This approach to increasing test quality takes advantage of different types of tests to target different types of defects. For example, path delay and transition delay tests target delay defects whereas bridging tests target resistive shorts. Typically, the increase in test data volume that results from applying at-speed and bridging ATPG patterns in addition to stuck-at patterns is in the range 4-6x.
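As a rough sizing exercise, the sketch below shows how much TDVR would be needed just to keep such an enlarged pattern set resident in tester memory, assuming a stuck-at-only set that already fills the buffer. The memory size is a hypothetical example, not any particular tester's specification.

    # Rough sizing exercise for the 4-6x data-volume growth mentioned above,
    # assuming a stuck-at pattern set that already fills the tester buffer.
    tester_memory_gib  = 8.0      # assumed usable vector memory
    stuck_at_data_gib  = 8.0      # stuck-at patterns just fit today
    dsm_growth_factors = (4, 6)   # extra volume from at-speed and bridging tests

    for growth in dsm_growth_factors:
        required_tdvr = stuck_at_data_gib * growth / tester_memory_gib
        print(f"{growth}x more pattern data -> at least {required_tdvr:.0f}x TDVR to stay resident")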

Figure 3 models the relationship between field escapes, as measured by defective parts per million (DPPM), and compression for an example production IC. Each point on the graph represents the number of field escapes observed for a given compression factor, assuming the compressed ATPG pattern set just fits within the dynamic memory of the tester. Increasing compression by a small amount makes “room” for more patterns in memory that detect additional defects, thereby decreasing the number of field escapes. The cost of imperfect test quality shown in Figure 2 is directly proportional to the field escape rate and, depending on the manufacturer’s tolerance for field escapes, this cost component may be the largest single contributor to total test costs.

When the key benefit of compression is linked to improving test quality, is TDVR needed beyond the level of 4-6x originating from additional DSM tests? ATPG fault models and techniques in use today are limited in their ability to resolve all possible types of physical failure mechanisms, including small delay defects that could lead to functional, speed- or noise-related failures. This limitation places an upper bound on the quality that can be achieved for any given design by generating more test patterns and utilizing compression to reduce the test data volume. This is why, all other factors being equal, TDVR beyond ~10x is associated with diminishing improvements in quality, as shown in Figure 3.

Figure 3. Quality vs. Compression

The Role of Moore’s Law and Future Expectations
As designs increase in circuit complexity, the number of scan elements and the length of scan chains increase by approximately the same factor, so test data volume and test application time also increase. However, using today's ATPG technologies, the previous observations regarding compression limitations are valid for any given design: TATR beyond ~20x and TDVR beyond ~10x achieve diminishing returns on DFT resources as a percentage of total cost savings. Incremental savings from further improvements in test application time and test data volume, assuming the pattern set is smaller than the tester's memory capacity, are small compared with the design's total test cost structure.

While these conclusions are valid for any given design, design organizations will base their maximum compression requirements, and hence their choice of compression technology, on future expectations: the anticipated needs of many designs over a time horizon of several years. Moore's Law predicts that gate density will double every 18 months. Therefore semiconductor firms may reasonably expect an order-of-magnitude increase in circuit complexity over several years, encompassing several generations of products. They will require relatively higher levels of compression if they plan to use their current test equipment over the same period until it is fully depreciated. Higher compression must compensate for the limitations of older testers and thus permit gradual deployment of newer, more expensive testers with greater memory capacity and higher clocking frequencies.
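A quick calculation shows where the order-of-magnitude expectation comes from: doubling every 18 months compounds to roughly 10x in about five years. The snippet below is just that arithmetic.

    # Gate density doubling every 18 months implies roughly 10x growth in about
    # five years (2 ** (60 / 18) is approximately 10), spanning several product generations.
    months = 60
    growth = 2 ** (months / 18)
    print(f"Expected gate-count growth over {months} months: {growth:.1f}x")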

To illustrate, Figure 4 summarizes maximum compression requirements for three scenarios, assuming an expected order-of-magnitude increase in gate complexity and assuming all current designs approach the limits of tester memory and tester clocking frequency. The darker shaded rows represent interim requirements during the technology transition period, whereas the lighter shaded rows represent longer-term requirements after newer testers have been fully deployed. The first column describes the type of ATPG tests used for current designs: stuck-at only ("SA") or stuck-at plus at-speed and bridging tests (designated "DSM"). The columns under "Future Requirements" designate the anticipated future type of ATPG tests; the factor increase in tester capabilities, in terms of memory capacity and clocking frequency; and the maximum required compression. In this example, newer testers will have double the memory and frequency of the older testers, though this is likely a conservative assumption.

In the first and second scenarios, designers plan to continue using the same types of ATPG tests, SA or DSM, respectively. The highest compression, 10x, is needed during each interim phase to compensate for the 10x increase in gate count expected before new testers are deployed. In the third scenario, pressures to improve test quality, as previously discussed, combine with future expectations to increase the maximum compression requirement to the range of 10 × (4-6x) = 40-60x in the interim transition period.
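The interim figure for the third scenario follows directly from the numbers in the text. The sketch below also shows a longer-term value derived by assuming the 2x tester capability halves the requirement; that halving is an illustrative assumption, not a number read from Figure 4.

    # Third scenario: SA tests today, DSM tests in the future.
    gate_growth   = 10        # expected order-of-magnitude complexity increase
    dsm_growth    = (4, 6)    # extra volume from at-speed and bridging tests
    tester_factor = 2         # newer testers: 2x memory and clock frequency

    interim   = tuple(gate_growth * g for g in dsm_growth)      # 40-60x, per the text
    long_term = tuple(c // tester_factor for c in interim)      # 20-30x (assumed halving)
    print(f"Interim maximum compression: {interim[0]}-{interim[1]}x")
    print(f"After tester refresh (assumed): {long_term[0]}-{long_term[1]}x")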

Figure 4. Maximum Required Compression

Impact of Nanometer Processes and Ultra-High Quality Goals
Nanometer processes (65nm and below) and ultra-high quality goals (DPPM < 100) place additional demands on DFT methodologies to target ever more subtle physical defects. Very high compression may be needed to meet these demands, though the compression requirements are closely linked to the fault model types and ATPG techniques employed. Small delay defect testing offers the potential to detect subtle speed-related failures by deterministically targeting small delay faults, starting from the longest paths in the circuit. This technique by itself would produce more than an order-of-magnitude increase in pattern count, all other factors being equal. However, the technology of small delay defect testing is still under investigation and is not supplied by any EDA vendor to date.

An alternative methodology, built-in self-test, also has the potential to meet the quality needs of nanometer processes without requiring high compression. However, production solutions for self-test must be highly automated and non-intrusive to the design flow in order to minimize implementation costs and performance impact; otherwise, companies gain no benefit relative to other approaches.

Conclusions
As designs increase in complexity, semiconductor companies are seeking to lower their test costs even as they strive for higher test quality. In response, designers have already begun to use scan compression techniques to reduce test execution costs, and they are seeking more cost-effective compression solutions that provide the largest return on investment in DFT resources: nonrecurring engineering time and effort, recurring silicon overhead for compression, testers and software. In most instances the benefits of scan compression become marginal beyond roughly 95% reduction in test application time and test data volume. Firms may seek higher compression, in the range of 40-60x, if they anticipate higher gate counts and plan to transition gradually to more advanced testers. Very high compression solutions may be required to meet ultra-high quality goals and test challenges related to nanometer processes, though the adjunct ATPG technology itself is still in development. Built-in self-test represents an alternative approach that does not require high compression, but designers implementing self-test would be better served by more automated, and hence more cost-effective, solutions.

Adaptive Scan technology is a scan compression architecture developed at Synopsys to provide the benefits of test application time and test data volume reduction while minimizing impact on design flows and design performance. Of the various scan compression technologies Synopsys evaluated internally, Adaptive Scan proved to be the least intrusive to the design flow and provided the lowest area overhead, the lowest power consumption and negligible timing impact on designs.

[1] S. Wei, P.K. Nag, R.D. Blanton, A. Gattiker and W. Maly, "To DFT or Not to DFT?" Proc. of IEEE International Test Conference, 1997, pp. 557-566.

About Chris Allsup
Chris Allsup is a veteran in the EDA industry with over 20 years combined experience in IC design, field applications, sales and marketing. Currently, he is marketing manager of test automation products at Synopsys. Chris earned a B.S.E.E. degree from U.C. San Diego and an MBA degree from Santa Clara University.


©2007 Synopsys, Inc. Synopsys and the Synopsys logo are registered trademarks of Synopsys, Inc. All other company and product names mentioned herein may be trademarks or registered trademarks of their respective owners and should be treated as such.






WEB LINKS
- Synopsys Test Solutions