Synopsys DTCO Flow: Technology Development

The Evolution of Logic Scaling

Throughout the era of planar CMOS scaling, process node names retained a physical meaning tied to the minimum critical dimension, the gate length, and scaling advanced through progressive reductions in critical dimensions at a rate governed by Moore’s Law.

Today, the scaling of logic process nodes has markedly different characteristics: node naming is detached from physical dimensions, and scaling is governed by a different set of factors.

The departure from traditional planar CMOS scaling is driven mainly by the rapidly increasing development and manufacturing costs of advanced processes, particularly in lithography, and by the physical limits that small dimensions impose on the realization of properly functioning transistors and interconnects.

Central to the scaling of current and future logic process nodes is the need to evaluate and select technology options based on design-level criteria, embodied in the oft-quoted power, performance, and area (PPA). While the concept of guiding process technology with the goal of achieving certain circuit-level targets is certainly not new, the DTCO methodologies devised to achieve these aims have taken a definitive turn, and the impetus remains to make these methodologies more efficient and adaptable to future requirements.

Candidate technology options for incorporation into new logic process nodes include new transistor architectures and other innovations designed to achieve area gains or reduced variability, known as scaling boosters, which are often implemented in the middle-of-line (MOL) interconnects. New transistor architectures and scaling boosters are then embodied in new standard cell designs to be evaluated through block-level design experiments. Limited availability of processing lines and the cost of engineering wafers motivate the use of simulation tools to guide development, particularly in the early pathfinding phases.

Synopsys DTCO Flow

DTCO can be aptly described as a software-based methodology for developing new semiconductor process nodes with a holistic consideration of how technology elements impact circuit performance. Application of DTCO leads to faster process node development at lower cost for the target PPA. The pillars of DTCO are the technology and design activities, connected by the critical design enablement function, as depicted in Figure 1.

Figure 1. DTCO Functional Areas 

DTCO is neither a one-size-fits-all solution for every process node development nor a deterministic software system operating on correct-by-construction principles. DTCO must be adaptable and must support iterative functions with feedback loops from design to technology. The Synopsys DTCO flow has been defined with both of these considerations in mind. Adaptability is provided through sub-flows that link simulation tools in ways that produce the outputs needed for evaluation. The feedback loops emerge from one or more sub-flows connected in ways that support major design targets through iterative passes through the loop. Figure 2 illustrates these concepts for the technology development and design enablement functions.

Figure 2. DTCO Tasks and Feedback Loops in Technology Development

Transistor Design Loop

The transition from planar CMOS to FinFET was motivated by the need to improve electrostatic control of the channel to control leakage. FinFET has undergone multiple generations of improvements and, in highly scaled versions, remains an alternative transistor architecture for future logic nodes. Nevertheless, there is currently significant research in post-FinFET structures, starting with gate-all-around (GAA) architectures, which are a natural evolution from FinFET toward further improvements in electrostatics. Other transistor architectures operate on new principles, e.g. tunnel FET, or achieve better electrostatics with suitably engineered gate stacks retrofitted on older nodes, e.g. negative capacitance FET.

In current and future transistor design, materials engineering is a primary knob for performance improvement, particularly for improving carrier transport in the channel, reducing the semiconductor-metal contact resistance, and engineering the high-k metal gate (HKMG) stack to achieve the target Vt.

The impact of interconnect parasitics, already a major component of delay in planar CMOS, is now a primary consideration. It would not be an exaggeration to state that the performance advantages of a very well-designed transistor can disappear if the interconnect parasitics are not properly controlled. Here too, materials engineering has a role to play in finding alternative metals to replace the currently dominant copper. Copper interconnects require barrier layers to prevent copper diffusion into the inter-metal dielectric layers. As line widths shrink, the higher-resistivity barrier layers have a greater impact on the overall wire or via resistance. This effect is exacerbated by surface and grain-boundary scattering of electrons in the copper adjacent to the barrier interface. The search for alternative metals, both elemental and alloys, is a key R&D area.
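The barrier effect is easy to see from first-order geometry: the barrier liner consumes cross-sectional area that would otherwise carry current, so the copper core shrinks faster than the drawn line width. The following sketch uses illustrative dimensions and the bulk copper resistivity, not values from any specific node; size-dependent scattering would raise the effective resistivity further.

```python
def line_resistance(width, height, barrier, length, rho=1.72e-8):
    """First-order wire resistance R = rho * L / A_cu, where the copper
    cross-section excludes a barrier liner on the sidewalls and bottom
    (dual-damascene style). Dimensions in meters, rho in ohm-m."""
    cu_area = (width - 2 * barrier) * (height - barrier)
    return rho * length / cu_area

# Illustrative 20 nm-wide, 40 nm-tall line, 2 nm barrier, 1 um long:
r_with_barrier = line_resistance(20e-9, 40e-9, 2e-9, 1e-6)  # ~28.3 ohm
r_ideal        = line_resistance(20e-9, 40e-9, 0.0,  1e-6)  # ~21.5 ohm
penalty = r_with_barrier / r_ideal                          # ~1.32
```

At these assumed dimensions roughly a quarter of the trench cross-section is lost to the barrier, which is one motivation for alternative metals that need thinner liners or none at all.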

The combination of all these factors means that transistor design must be done in context with the impact of new materials and nearby interconnects, particularly the contact plugs and other MOL structures. The vehicle of choice for evaluating transistor architectures is the ring oscillator.
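Part of what makes the ring oscillator such a convenient vehicle is that its oscillation frequency collapses the entire stage delay, including parasitic loading, into one measurable number. A minimal sketch using a generic RC delay model with assumed, illustrative values (not output from any of the tools discussed here):

```python
def stage_delay(r_drive, c_load, c_parasitic):
    """First-order inverter stage delay: t_pd ~= 0.69 * R * C_total."""
    return 0.69 * r_drive * (c_load + c_parasitic)

def ro_frequency(n_stages, t_pd):
    """An N-stage (N odd) ring oscillator toggles each node twice per
    period, so f = 1 / (2 * N * t_pd)."""
    return 1.0 / (2 * n_stages * t_pd)

# Illustrative: 11 stages, 10 kohm effective drive resistance,
# 0.5 fF gate load plus 0.2 fF of MOL/wire parasitics per stage.
t_pd = stage_delay(10e3, 0.5e-15, 0.2e-15)  # ~4.83 ps
f = ro_frequency(11, t_pd)                  # ~9.4 GHz
```

Sweeping `c_parasitic` in such a model shows directly how MOL and wiring parasitics erode the frequency benefit of a faster transistor, which is exactly the trade-off the RO-based evaluation is meant to expose.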

The Transistor Design Loop is enabled by the Materials Modeling, TCAD-to-SPICE and FEOL Integration sub-flows. The Materials Modeling sub-flow links QuantumATK, the Synopsys materials modeling tool resulting from the acquisition of QuantumWise, with the Sentaurus TCAD tools. This flow supplies parameters for Sentaurus to properly model new materials or geometries where the bandstructure is modified through quantum confinement or stress.

The TCAD-to-SPICE sub-flow links Sentaurus TCAD transistor modeling and compact model extraction with Mystic, producing a compact model for insertion into the netlist of the RO or other test circuits. The FEOL Integration sub-flow emulates the component cells of the RO in Process Explorer and extracts the parasitic RC with Raphael FX. In this flow Raphael FX operates directly on the 3D structure emulated in Process Explorer. The parasitic extraction includes the parasitic annotation of the netlist for the test circuit.

The natural outputs of the transistor design loop are transistor architectures which exhibit the potential for realizing the target PPA of the process node. 

Scaling Booster Loop

Beyond the transistor architecture, another key technology consideration is the routing within standard cells on the MOL, M1, and M2 layers, together with the power delivery network, with a view toward optimizing PPA when the standard cells are placed and routed in a design block. These considerations stem from aggressive dimensional scaling, which has given rise to scaling boosters designed to alleviate routing congestion without sacrificing performance through parasitic loading.

Many of the scaling boosters – for example, the super via, self-aligned gate contact, fully self-aligned vias, buried power rails, and self-aligned blocks for metal layers – rely on new process techniques such as atomic-layer etching and selective deposition of materials. The common theme is to build 3D structures using self-alignment and selective etching and deposition. At the intersection of scaling booster constructs and the standard cell designs embodying them are the design rules anchored on the process techniques used to implement each booster. Often there are multiple process avenues for implementing a scaling booster construct, each with its own cost, performance, and design rule characteristics, and implementation cost can itself be a deciding factor.

The Scaling Booster Loop is made up of the MOL Integration sub-flow which emulates the booster integration in Process Explorer and checks the parasitic impact in Raphael FX. As in the transistor design loop, the Process Explorer to Raphael FX link is seamless, with Raphael FX operating directly on the structure emulated with Process Explorer. Beyond verifying that the booster does not exceed the target parasitic loading, the key function of this loop is to determine the design rules consistent with the chosen implementation method. 

Design Enablement and Characterization of a Mini-Library for Design Experiments

The evaluation of the combined transistor architecture and scaling booster constructs requires PPA analysis at the block level, often described as design experiments. These analyses must, of course, be preceded by the design and characterization of a standard cell library. Here, agility in generating alternative cell topologies and efficient characterization of the library are necessary so that the design experiments cover a wide enough range of options and can be performed on a timely basis to provide design-level PPA feedback to technology development. Consequently, the Library Characterization flow has been adapted to provide the agility required for efficient DTCO.


DTCO is a methodology that helps semiconductor fabs reduce cost and time-to-market in advanced process development. The Synopsys DTCO Solution enables the efficient evaluation and down-selection of new transistor architectures, materials and other process options using power, performance and area (PPA) design metrics.