Innovative Ideas for Predictable Success
      Issue 1, 2010



Industry Insight
Nanometer Challenges
Marco Casale-Rossi, Synopsys, considers the challenges that are slowing down the rush to nanometer technologies. This article is based on a webinar series presented in conjunction with TSMC.

The very first integrated circuit has just celebrated its 50th birthday. It was a flip-flop made from four transistors and five resistors, manufactured at about 100 microns. Now in 2010 we are witnessing the birth of the latest integrated circuits – dual core processors consisting of approximately 400 million transistors, manufactured at 32 nanometers. This is the result of what we sometimes call scaling. More often we refer to it as Moore’s Law.

When Dr. Gordon E. Moore, a co-founder of Fairchild Semiconductor and Intel, first put forward his conjecture, he took an engineer’s viewpoint. He predicted that every new technology node would enable manufacturing – in the same silicon area, and therefore at the same cost – twice as many transistors as the last; as a “by-product”, these transistors run 1.4x faster at each new node, while power density stays the same. Moore’s Law has proven valid for over 40 years, and has been instrumental in growing the value of the semiconductor industry from nothing in 1959 to the massive $270 billion forecast for 2010.
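To make the compounding concrete, here is a minimal sketch of the ideal recipe (the 2x, 1.4x and constant-power-density factors are Moore’s; the starting frequency is invented for illustration):

# Ideal Moore's Law scaling, node over node: 2x transistors in the same
# area, 1.4x frequency, constant power density. Starting values are
# illustrative (400 million transistors, per the article; 3 GHz assumed).
transistors = 400e6     # e.g. a 32-nm dual core processor
frequency = 3.0         # GHz, hypothetical
power_density = 1.0     # normalized
for node in range(1, 4):
    transistors *= 2.0      # integration capacity doubles
    frequency *= 1.4        # ~40% faster
    power_density *= 1.0    # ideally unchanged
    print(f"{node} node(s) ahead: {transistors:.1e} transistors, "
          f"{frequency:.1f} GHz, power density x{power_density:.0f}")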

The Nanometer Rush
The 32/28-nm technology node is about the 23rd generation of integrated circuits. It has taken 50 years to get this far, but the last 10 years or so have become a rush: the “nanometer rush”.

Figure 1 shows the technical progress of the first dual core processors shipped at the last four technology nodes: 90, 65, 45, and 32 nanometers.


Figure 1. Dual Core Processors from 90 to 32 Nanometers
Source: M. Bohr, Intel IDF 2008; www.intel.com, 2010. These examples are for illustrative purposes only.

According to Moore’s Law, from one technology node to the next, integration capacity should double, frequency (clock rate) should improve by 40%, and power should remain constant. What Figure 1 actually shows is that over the last four technology nodes integration capacity has increased, but below the historical 2x level, while both frequency and power have remained about the same.

The nanometer rush, which has been going on exponentially for decades, is slowing down. Although Moore’s Law is “alive and well”, the main problem is power: we can achieve either higher performance or lower power, but not both. Transistors get smaller but, because of higher power densities, they struggle to deliver any further performance improvement.

Design and EDA (which helps engineers automate electronic design) are increasingly important in dealing with nanometer challenges: they unleash the value – higher performance, lower power, sometimes both – of nanometer process technology.

Top Ten 32 Nanometer Challenges
The industry must tackle the following 10 specific challenges in moving to 32/28-nm technology.

#1 Complexity
The designs shown in Figure 2 are an example of Moore’s Law at work: over six technology nodes, integration capacity should increase by 2^5 = 32x. In this case, the absolute number of transistors has increased by 32x while the die size has shrunk by 20%; therefore, the integration capacity has increased by 39x. Amazingly, power has remained the same.


Figure 2. Moore’s Law at Work Between 1996 and 2006
Source: M. Taliercio, STMicroelectronics 2001; L. Bosson, STMicroelectronics 2006. These examples are for illustrative purposes only.

It is very easy to distinguish the six blocks in the 1996 chip. However, because the 2006 chip interleaves logical and physical hierarchy, it is much harder to tell which block does what. The 2006 chip is challenging not only because of the technology node; the real challenge is the sheer complexity of the device. There are hundreds of millions of transistors, thousands of macros, and multiple levels of hierarchy, which push computer hardware and EDA software to the limit.

#2 Lithography
We are still using 193-nm optical lithography to manufacture 32/28-nm chips, so our ruler is currently about one order of magnitude bigger than the objects we are trying to draw. At 32/28-nm, double patterning may be required, at least for some layers. Double patterning – and, moving forward, multi-patterning – has a profound impact on the rules of our game. The path from GDSII to masks and silicon becomes more complex and costly, and designs and designers also need to account for double patterning in the layout, the routing rules, the design rule checks (DRC), and so on.
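Under the hood, assigning features to two masks is a graph 2-coloring problem: any two features spaced closer than the single-exposure limit must land on different masks, which is possible only if the resulting conflict graph is bipartite. A minimal sketch, with invented coordinates and an invented spacing limit:

from itertools import combinations

# Features closer than the single-exposure spacing limit must go on
# different masks; that is 2-coloring of the conflict graph, which
# succeeds only if the graph is bipartite. All values are invented.
features = {"a": (0, 0), "b": (40, 0), "c": (80, 0), "d": (40, 45)}  # nm
MIN_SPACING = 50  # nm, hypothetical single-exposure limit

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

conflicts = {f: set() for f in features}   # edge = too-close pair
for f, g in combinations(features, 2):
    if dist(features[f], features[g]) < MIN_SPACING:
        conflicts[f].add(g)
        conflicts[g].add(f)

mask = {}                                  # iterative 2-coloring
for start in features:
    if start in mask:
        continue
    mask[start] = 0
    stack = [start]
    while stack:
        f = stack.pop()
        for g in conflicts[f]:
            if g not in mask:
                mask[g] = 1 - mask[f]
                stack.append(g)
            elif mask[g] == mask[f]:
                raise ValueError(f"odd cycle at {f}-{g}: not two-mask decomposable")

print(mask)  # {'a': 0, 'b': 1, 'c': 0, 'd': 0}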

Besides double patterning, another proximity-related manufacturing challenge has arisen. The semiconductor industry introduced strained silicon some time ago to improve transistor performance: compressive strain at 90-nm and tensile strain at 65-nm.

Unfortunately, the strain propagates within a certain area – a radius of about 2 um – straining devices that were never meant to be strained. These effects, due to the proximity of different types of transistors, have become visible as variations in both timing and static power. The variations – typically 5% at 45/40-nm and in the 10% range at 32/28-nm – must be accounted for, as the sketch below illustrates.
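A toy sketch of the bookkeeping this forces on a flow: flag any device within the strain radius of a differently strained neighbor and apply the variability margin quoted above (the placements and strain types are invented):

# Devices within the ~2-um strain radius of a differently strained
# neighbor get the ~10% (32/28-nm) variability margin from the article.
STRAIN_RADIUS_UM = 2.0
DERATE = 0.10

devices = {                       # (x, y) in um, plus strain type
    "n1": ((0.0, 0.0), "tensile"),
    "p1": ((0.8, 0.5), "compressive"),
    "n2": ((5.0, 5.0), "tensile"),
}

def margin(name):
    (x, y), kind = devices[name]
    for other, ((ox, oy), okind) in devices.items():
        if other != name and okind != kind and \
           ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5 <= STRAIN_RADIUS_UM:
            return DERATE         # a neighbor's strain leaks into this device
    return 0.0

for name in devices:
    print(name, f"-> extra timing/leakage margin: {margin(name):.0%}")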

#3 Static Power and #4 Power Density

There are two reasons why power density keeps doubling from one technology node to the next:

(i) We have already reaped the benefit of high-k dielectrics and metal gates, yet static power is still increasing, both in relative terms compared to dynamic power and in absolute terms; design and EDA are increasingly relied upon to keep static power below the 50% threshold of total power consumption. The problem is further worsened by the progressive integration of multiple, heterogeneous functions – baseband, connectivity, multimedia, etc. – onto a single process technology platform, which must be either LP or HP, exclusively: two options whose static power can differ by up to 100x.

(ii) The benefits of copper interconnect were likewise realized when the industry introduced it a few technology nodes ago. Today, approximately 50% of dynamic power consumption is due to interconnect RC, and designers predict that this percentage will increase. Global interconnect length doesn’t scale with smaller transistors and local wires, because chip size remains relatively constant as chip function continues to grow; as a result, RC delay is increasing exponentially. At 32/28-nm, the RC delay of a 1-mm global wire at minimum pitch is 25x higher than the intrinsic delay of a 2-input NAND driving a fanout of 5, as the back-of-the-envelope sketch below illustrates.
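For intuition, the classic distributed-RC (Elmore) estimate for a wire is t = 0.5 * r * c * L^2. A sketch with invented per-unit-length values, chosen only so the result lands near the 25x ratio quoted above:

# Distributed-RC (Elmore) wire delay: t_wire = 0.5 * r * c * L^2.
# r, c and the gate delay are assumed; only the ~25x ratio is the
# article's.
r = 4.0         # ohm/um, assumed minimum-pitch global wire resistance
c = 0.2e-15     # F/um, assumed capacitance per unit length
L = 1000.0      # um, a 1-mm global wire

t_wire = 0.5 * r * c * L ** 2    # seconds
t_gate = 16e-12                  # s, assumed NAND2 fanout-of-5 delay

print(f"wire delay: {t_wire * 1e12:.0f} ps")       # 400 ps
print(f"wire/gate ratio: {t_wire / t_gate:.0f}x")  # 25x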

#5 Reliability
Reliability issues are on the rise. Many chips have multiple functional modes that shut down and power up major functional blocks as part of the power management scheme. This approach has implications for the power and ground grid. As designers strive for even lower power levels, they will have to deal with even more complex scenarios during physical implementation.

In modern electronic systems there is significant potential for interference between applications. Consider a car: it uses a variety of electronics operating at frequencies from 100 MHz to 2.5 GHz in the power train, car body, safety, communications, multimedia and navigation sub-systems. Reliability is an issue that now reaches far beyond simple IR-drop analysis. Designers must analyze and fix EM emissions during the design process, not after.

#6 Physical Verification
The number of design rules doubles with each new technology node, while the complexity of those rules increases by about 5x. The net result is that each new design rule manual contains three times as many pages as the last, and each rule requires 3x to 4x more DRC steps. This has a big impact on process development and on design and layout complexity, and, of course, on verification accuracy and runtime.
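Compounding just two of those factors shows how quickly the checking workload snowballs. A toy calculation using the growth rates quoted above (the baseline counts are invented):

# Rules double per node and each rule needs 3x-4x more DRC steps
# (midpoint 3.5x used here); the baseline counts are invented.
rules, steps_per_rule = 1000, 2.0
for node in ("45-nm", "32-nm"):
    rules *= 2
    steps_per_rule *= 3.5
    print(f"{node}: {rules} rules, ~{rules * steps_per_rule:,.0f} DRC steps")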

Any attempt to exhaustively describe the design rules for the router dramatically worsens its runtime. On the other hand, postponing design rule checks until after routing requires huge numbers of errors to be fixed outside of the design environment, with uncertain effects on timing, and possibly power.

#7 Variability and Uncertainty
Nanometer transistors now consist of very small numbers of atoms. The atomic uncertainty, both in number (how many?) and in nature (which kind?), leads to increasing timing and power variability. Figure 3 shows the differences between two transistors that arise from a variation in the distribution of just 130 dopant atoms, which translates to a 30% variation in VTH.


Figure 3. Atomic Uncertainties Lead to Increasing Power and Timing Variability
Source: C. Kim, University of Minnesota, DAC 2007

Random variations, such as random dopant fluctuation and edge roughness, are on the rise, making optimization more complex and signoff more difficult to achieve. VDD, VTH and LEFF variations now account for 85% of timing and 80% of power variability. New current-based models and techniques are required for optimization and signoff.
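The statistics behind random dopant fluctuation are easy to picture: a Poisson-distributed dopant count has a relative spread of 1/sqrt(N), which grows rapidly as N shrinks to a few hundred atoms. A toy Monte Carlo, in which the dopant count, the nominal VTH and the assumed linear VTH sensitivity are all invented:

import random

# Poisson(N) is well approximated by Normal(N, sqrt(N)) for large N;
# assume VTH shifts proportionally to the dopant-count deviation.
MEAN_DOPANTS = 400    # hypothetical atom count in the channel
VTH_NOMINAL = 0.30    # V, hypothetical
random.seed(0)

samples = []
for _ in range(100_000):
    n = random.gauss(MEAN_DOPANTS, MEAN_DOPANTS ** 0.5)
    samples.append(VTH_NOMINAL * (1 + (n - MEAN_DOPANTS) / MEAN_DOPANTS))

mean = sum(samples) / len(samples)
sigma = (sum((v - mean) ** 2 for v in samples) / len(samples)) ** 0.5
print(f"VTH = {mean * 1000:.0f} mV, sigma = {sigma * 1000:.1f} mV "
      f"({sigma / mean:.1%}); the spread scales as 1/sqrt(N)")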

#8 Test
The number of defects, and therefore faults, is increasing, and so is the number of test patterns required to guarantee coverage and quality. The compression ratio needed to maintain the status quo is getting very high, as the arithmetic below suggests.
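A sketch of that arithmetic: if pattern volume keeps growing while tester buffer memory stays fixed, the required compression ratio must grow by the same factor (all numbers are invented):

# Tester buffer memory stays fixed while pattern volume grows, so the
# required compression ratio grows by the same factor. Numbers invented.
tester_memory_gb = 8.0
pattern_volume_gb = 16.0
for node in ("65-nm", "45-nm", "32-nm"):
    pattern_volume_gb *= 2    # assumed per-node growth in pattern volume
    print(f"{node}: need >= {pattern_volume_gb / tester_memory_gb:.0f}x compression")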

During a given process technology node development, we strive to model what we see and what we want to use in the design flow: SPICE models, design rules, and so on. However, the information available during this phase is abstract because we use process qualification vehicles and not real applications or products. It is limited because we are bound by both cost and time considerations, and it is unstable because the process technology is still maturing.

Over the life of the process technology node, and especially during its volume production phase, metrology and test accumulate priceless information that we use today only in a very limited manner.

The industry is used to DFT, DFM and test for manufacture (TFM). Perhaps test for design (TFD) is next. We should analyze the data that we accumulate through test and metrology in order to generate new, better models for use in the design flow. These models may be very different from the initial ones, since they reflect the mature, stable phase of the process technology lifespan, and therefore may lead to better designs, with higher manufacturing yield.

#9 Computing
Unfortunately, the CPU runtime required to perform a given task increases at a pace that exceeds even Moore’s Law (see Figure 4).

There are two categories of EDA tools:

(i) Design implementation and functional verification, whose computational complexity has risen by approximately 100x as we have scaled from 350 to 32 nanometers

(ii) Signoff and mask data preparation, whose computational complexity has risen by approximately 100,000x

While we can explain the first number using Moore’s Law (seven technology nodes, 2^7 = 128, or about 100x), we cannot explain the second. It is clearly a drawback of nanometer process technology.


Figure 4. Computational Requirements Exceed Moore’s Law
Source: CA Malachowsky, co-founder, NVIDIA. EDPS 2009
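Counting the node transitions makes the first number concrete (the list of full nodes between 350 and 32 nanometers is the standard roadmap sequence; the comparison is the article’s):

# Seven node transitions separate 350-nm from 32-nm, giving the
# Moore's Law factor for implementation and verification tools.
nodes = [350, 250, 180, 130, 90, 65, 45, 32]  # nm, successive full nodes
steps = len(nodes) - 1
print(f"{steps} transitions -> 2**{steps} = {2 ** steps}x, i.e. about 100x")
# Signoff and mask data preparation grew ~100,000x over the same span,
# a factor that Moore's Law alone cannot explain.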

#10 “Bandwidth”
Besides the specific nanometer technology challenges we have described above, there are also more “macro” challenges.

For example, multi- and many-core architectures require bandwidth to increase at the same pace as Moore’s Law, and 2D technology is running out of steam.

Even assuming infinite transistor scaling, interconnect is a big challenge moving forward, and an impediment to better performance and lower power. A state-of-the-art multicore device requires a bus bandwidth of 100 GBps, which is as much as an off-chip memory solution can deliver. A 2D planar multi-chip package solution can support around 100 to 200 GBps, so multicore is already pushing the limits of 2D packaging, and many-core will need several hundred GBps. It is apparent that “more of Moore” requires “more than Moore”.
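Setting those numbers side by side shows how quickly core counts outrun the package. A sketch in which the per-core demand is an invented assumption, while the 100 to 200 GBps 2D limit comes from the article:

# Per-core bandwidth demand is an invented assumption; the 200-GBps
# ceiling for 2D multi-chip packaging is the article's upper estimate.
PKG_2D_LIMIT_GBPS = 200
per_core_gbps = 25

cores = 4                     # a state-of-the-art multicore: ~100 GBps
while cores <= 32:
    need = cores * per_core_gbps
    verdict = "fits" if need <= PKG_2D_LIMIT_GBPS else "exceeds 2D packaging"
    print(f"{cores:2d} cores: ~{need} GBps -> {verdict}")
    cores *= 2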

Complexity Remains #1 Issue
The sheer complexity of design remains the biggest challenge at nanometer process nodes.

While waiting for EUV, or higher refractive-index immersion fluids, we have to live with what we’ve got, i.e. 193-nm water-immersion lithography. EDA can help a lot in making sure that GDSII is as lithography-friendly as possible, and that the burden on mask data preparation tools remains acceptable. Stronger methodologies and more automated flows are helping designers tackle chip power problems. EDA can also help engineers to model and compute variability in order to reduce uncertainty.

Over the last 50 years we have successfully implemented design for manufacturability and design for testability; now the challenge is to exploit the wealth of information that test generates, in order to improve design and manufacturability, i.e. yield.

Reliability analysis and physical verification are becoming an integral part of the design implementation flow. This will ensure that problems are dealt with as they arise, not as an afterthought to design.

Multi-everything is an increasingly important part of our lives. We can still dream about processors running at 10 GHz or more, but the reality is 3 to 4 GHz cores – and software tools and Tcl scripts that we must adapt to the multicore, many-core era.

Last, but not least, 3D IC can keep Moore’s Law “alive and well” longer than the current 2D technologies. 3D technology is coming, possibly sooner than one would expect.

Marco Casale-Rossi
Marco Casale-Rossi joined Synopsys in 2005 after 20 years at STMicroelectronics’ Central R&D and ASIC R&D departments. At ST he worked on the development and deployment of one of the first industrial EDA solutions for ASIC implementation. Marco’s most recent responsibility at ST was the management of the technology collaborations with EDA partners. In the last few years, Marco has contributed to building Synopsys’ vision of the nanometer design and technology roadmap.


©2010 Synopsys, Inc. Synopsys and the Synopsys logo are registered trademarks of Synopsys, Inc. All other company and product names mentioned herein may be trademarks or registered trademarks of their respective owners and should be treated as such.

