How Will Angstrom-Scale Chips Advance the Electronics Industry?

Deepak Sherlekar, Rob Aitken

Nov 15, 2023 / 7 min read

This article was originally published on All About Circuits.

Every time you stream a 4K movie from your phone or play an online video game, you require bandwidth—high rates of data transfer that enable your connected devices to deliver engaging, interactive, and immersive experiences. Our digital world—with its increasing levels of intelligence—continues to demand more from the underlying technologies that make all these activities possible. But there are bottlenecks that threaten to thwart real-time responses or slow down what can otherwise be a quick transaction.

Engineering ingenuity has a way of pushing innovation forward, and the semiconductor industry has certainly been at the forefront of this push. We’ve seen Moore’s law stretched to single-digit nanometers, as designers pack billions of transistors onto a single chip to satisfy the needs of compute-intensive applications such as AI, high-performance computing, and networking. As scale and system complexity grow, nanometer-scale physical chip features may no longer suffice, paving the way for angstrom-level scaling.

An angstrom is one ten-billionth of a meter (0.1 nm), a unit of measurement often used to express the sizes of atoms and molecules and, in the case of the semiconductor industry, the dimensions of IC features. In 2021, Intel was the first to lay out a process roadmap that introduced the angstrom era, anticipated to be manufacturing-ready in 2024. Meanwhile, imec, the independent research hub for nano and digital technologies, has outlined a chip scaling roadmap that takes the industry to two angstroms by 2036.

Achieving the promise of angstrom-scale designs will require collaboration and ingenuity across the semiconductor ecosystem. New techniques and technologies—from innovations in lithography to new transistor structures such as gate-all-around (GAA) and complementary FET (CFET) as well as multi-die systems—have already emerged to usher in this next era of chip design.

What will the angstrom era bring to our smart everything world? And how can the electronics industry best tap into its full potential? Read on for more insights into the next generation of semiconductors, where nanometers are no longer small enough. 

[Image: angstrom-scale chip design]

Slowing of Moore’s Law Drives New Innovations

Under Moore’s law, chip designers have come to expect that they can roughly double the density of their chips every two years. By shrinking features with each new process technology, design teams could keep extracting power, performance, and area (PPA) benefits to meet the demands of our smarter, more connected world. However, the ability to fabricate successively smaller features, known as feature scaling, is slowing. In response, the industry has uncovered new ways to maintain exponential improvements.

Angstrom-level scaling offers a way forward. It represents a collection of new technologies to compensate for the slowdown in feature scaling while maintaining the targets of Moore’s law, such as the doubling of transistor density with every successive process generation. With angstrom-level scaling, design engineers can fit more transistors on a chip, so their devices can deliver greater performance at lower power. For applications like natural language processing, genome sequencing, Industry 4.0 manufacturing, and scientific computing, this sets the stage for a new world of computing possibilities.
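
To make that density target concrete, the short sketch below simply compounds a 2x gain per generation on a two-year cadence. The starting density, dates, and cadence are illustrative assumptions, not figures from any particular foundry roadmap.

```python
# Illustrative only: compound a 2x density gain per process generation.
# The starting density, year, and cadence are assumed for the example.
start_year = 2024              # assumed starting point
start_density = 100            # assumed, in millions of transistors per mm^2
cadence_years = 2              # classic Moore's-law cadence

for gen in range(5):
    year = start_year + gen * cadence_years
    density = start_density * 2 ** gen
    print(f"Generation {gen}: ~{density} million transistors/mm^2 around {year}")
```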

Given the number of transistors that angstrom-scale chips will be able to support, the future could bring:

  • Manufacturing lines with more compact robotic equipment, trained to complete tasks with greater speed and precision than today’s factory automation gear
  • Faster and more accurate modeling to project the impacts of climate change, to ramp up new vaccine discovery, and to deliver financial portfolio and risk management insights
  • More efficient R&D and product design processes for industries like automotive

Angstrom scaling also extends the benefits of Moore’s law, providing an avenue to break through the bottlenecks that can be detrimental to chip performance.

Battling the Bottlenecks that Choke SoC Performance

Any lag in performance can lead to subpar results across applications, and bottlenecks occur at multiple levels of a chip. Consider neural network processing. Neural networks underpin deep-learning algorithms that recognize patterns and correlations in raw data, clustering, classifying, and learning from them for continuous improvement. These algorithms benefit from large numbers of parallel processors: the more processors that can be placed on a piece of silicon, the faster the chip can run these massive workloads. But chip designers must address multiple bottlenecks to achieve the PPA needed for SoCs supporting these types of applications:

  • At the transistor level, there’s a set of bottlenecks around the interconnects that tie the transistors together.
  • At the processor level, there’s a tradeoff between the complexity and number of processors and the amount of interconnect required to connect them, along with the need to move data swiftly between processing elements and system memory.
  • At the memory level, there’s a gap because on-chip memory isn’t scaling as quickly as the size of standard cells. As a result, one can only extract so much out of increasingly smaller logic if the memory footprint can’t shrink along with it.

At some point, it might seem simpler to build bigger processors that are easier to program and can do more things. However, that approach adds the complexity of designing and manufacturing larger devices efficiently while reducing the amount of achievable parallelism and increasing the power consumed for simple tasks.
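
A toy model helps illustrate this tradeoff. The sketch below uses an Amdahl-style speedup formula with a made-up per-core communication penalty standing in for the interconnect and data-movement costs described above; the serial fraction and overhead values are assumptions chosen only to show the shape of the curve, not measurements from any real SoC.

```python
# Toy model: Amdahl-style speedup with a per-core communication penalty.
# serial_fraction and comm_overhead are illustrative assumptions.
def speedup(cores: int, serial_fraction: float = 0.05, comm_overhead: float = 0.002) -> float:
    parallel_time = (1 - serial_fraction) / cores
    # Each added core costs a little extra interconnect/communication time.
    comm_time = comm_overhead * cores
    return 1.0 / (serial_fraction + parallel_time + comm_time)

for n in (1, 8, 64, 256, 1024):
    print(f"{n:5d} cores -> ~{speedup(n):.1f}x speedup")
```

In this toy model the speedup climbs, flattens, and eventually falls as communication costs dominate, which is exactly the kind of bottleneck the interconnect and memory innovations below are meant to push back.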

Angstrom-scale processes are being designed through a massive research and development effort spanning a large number of technologies across the entire design chain, from core process definition to chip design building blocks to the suite of design automation tools and flows that enable chip design. This effort is made possible by:

  • Augmenting traditional lithography-enabled dimensional scaling with new transistor structures
  • Technologies to build digital twins of candidate transistor structures, as well as process definitions to evaluate and select the most promising ones
  • New logic library and memory architectures that are building blocks of chip designs
  • New algorithms in electronic design automation (EDA) tools that enable designers to implement and verify chips with exponentially larger transistor counts built from these building blocks

Advanced lithography tools, such as high-numerical-aperture (High-NA) extreme ultraviolet (EUV) lithography, currently under development and expected to be delivered to fabs in 2025, will enable the printing of smaller structures. Meanwhile, GAA transistor structures allow stacking of multiple channels on top of one another to increase chip density.

Moving power distribution from above the transistors to underneath them, an approach called backside power distribution, will enable GAA structures in angstrom-scale architectures to achieve their full density potential. Placing the power delivery on the backside lets designers shrink the logic cell height because the cells no longer need wide wires, called power rails, at the top and bottom to carry power. It also frees up significant wiring resources on the layers above the cells, reserving the front side of the chip for signal routing and preventing the interconnects from becoming a bottleneck.

GAA may also allow the memory scaling that is no longer possible with FinFET structures, while reducing leakage current and increasing drive current for better overall chip performance. A more complex evolution of GAA, the CFET consists of vertically stacked transistors that deliver significant area and performance benefits, especially for memories. Targeted for designs at 2.5nm and beyond, CFETs are anticipated to play an integral role in the angstrom era.
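
A back-of-the-envelope calculation shows why removing front-side power rails shrinks cells. Standard-cell height is often counted in metal routing tracks, and moving power delivery to the backside frees roughly the track budget the rails used to occupy. The track counts and metal pitch below are illustrative assumptions, not values from any actual angstrom-class library.

```python
# Illustrative cell-height estimate: height ~= routing tracks * metal pitch.
# Track counts and pitch are assumed for the sake of the example.
metal_pitch_nm = 20          # assumed minimum-metal pitch

frontside_tracks = 6         # e.g. a 6-track cell with front-side power rails
backside_tracks = 5          # moving power to the backside frees ~1 track

h_front = frontside_tracks * metal_pitch_nm
h_back = backside_tracks * metal_pitch_nm
print(f"Front-side power: cell height ~{h_front} nm")
print(f"Backside power:   cell height ~{h_back} nm "
      f"(~{(1 - h_back / h_front) * 100:.0f}% shorter)")
```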

Another innovation that could go hand-in-hand with angstrom-scale dies is the multi-die system, composed of multiple dies, often referred to as chiplets, stacked on top of one another and/or connected with an interposer and integrated in a single package. This interdependent architecture can be created through disaggregation, the partitioning of a large die into smaller dies for better system yield and cost, or by assembling dies from different process technologies for optimal system functionality and performance. Compared to a large, monolithic SoC, a multi-die system enables accelerated scaling of system functionality, along with benefits such as reduced risk and time to market, lower system power, and the ability to rapidly create new product variants. Angstrom-scale dies could play a central role in a multi-die system, supporting the processing prowess needed for bandwidth-intensive applications, while dies at older nodes handle less taxing chip functions.
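
The yield argument for disaggregation can be sketched with a classic Poisson defect model, in which die yield falls off exponentially with area. The defect density and die areas below are illustrative assumptions, and the sketch ignores packaging and assembly yield, but it shows why several small, individually tested chiplets can consume far less silicon per good system than one large monolithic die.

```python
import math

# Classic Poisson yield model: Y = exp(-area * defect_density).
# Defect density and die areas are illustrative assumptions only.
defect_density_per_mm2 = 0.001   # assumed

def die_yield(area_mm2: float) -> float:
    return math.exp(-area_mm2 * defect_density_per_mm2)

# One large monolithic SoC vs. the same logic split into four chiplets.
mono_area, chiplet_area, n_chiplets = 800.0, 200.0, 4

silicon_per_good_mono = mono_area / die_yield(mono_area)
# With known-good-die testing, bad chiplets are discarded before assembly,
# so wasted silicon scales with the small die's yield, not the big die's.
silicon_per_good_system = n_chiplets * chiplet_area / die_yield(chiplet_area)

print(f"Monolithic: ~{silicon_per_good_mono:.0f} mm^2 of silicon per good system")
print(f"Chiplets:   ~{silicon_per_good_system:.0f} mm^2 per good system (before packaging yield)")
```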

A New Way Forward for the Semiconductor Industry

The sheer volume of components being packed onto chips these days is driving greater complexity into their design and verification processes. With billions of transistors at angstrom scale, it’s fortunate that AI and machine learning (ML) are being integrated into the algorithms driving EDA flows. By finding patterns in repetitive, large-scale tasks and delivering orders-of-magnitude speedups, AI and ML can uncover, for example, a one-in-a-billion fault of interest that would be nearly impossible to discover using legacy EDA solutions. Similarly, ML allows applications at the front end of the implementation cycle, like synthesis, to get a good sense early on of what might happen later in the flow, so engineers can make pre-emptive decisions that guide the flow toward an optimal solution. While helping to increase engineering productivity and enhance quality of results, AI and ML can also contribute to faster turnaround times for angstrom-scale dies.
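
As a loose illustration of what "getting a good sense early in the flow" can look like, one common pattern is to train a regression model on features available at synthesis time to predict a downstream metric such as post-route timing slack. The snippet below is a generic scikit-learn sketch on synthetic stand-in data; the features, the fitted relationship, and the model choice are all assumptions for illustration, not a description of any vendor’s ML-driven EDA technology.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in data: rows are hypothetical design snapshots at synthesis,
# columns are made-up features (e.g. cell count, max fanout, target utilization).
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
# Fabricated relationship standing in for "post-route worst negative slack".
y = -0.3 * X[:, 0] + 0.1 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.02, 500)

# Train on most snapshots, then check predictions on a few held-out ones.
model = GradientBoostingRegressor().fit(X[:400], y[:400])
print("Predicted vs. actual (held-out snapshots):")
for pred, actual in zip(model.predict(X[400:405]), y[400:405]):
    print(f"  {pred:+.3f} vs. {actual:+.3f}")
```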

In addition to AI-driven design and verification flows, silicon-proven IP can reduce integration risks while accelerating time to market for advanced semiconductor devices. And solutions such as silicon lifecycle management, with on-chip monitoring capabilities, can help track the health and performance of chips throughout their lifetime, triggering actions such as modulating the supply voltage to extend a chip’s lifespan or requesting a replacement before the chip encounters a failure.
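
To make the lifecycle-management idea concrete, here is a hypothetical sketch of the kind of decision loop a monitoring service might run: read an on-chip timing-margin measurement, nudge the supply voltage up within a safe bound as the silicon ages, and flag the part for replacement once that headroom is exhausted. Every name, threshold, and voltage in the sketch is invented for illustration; it does not represent an actual silicon lifecycle management API.

```python
# Hypothetical sketch of a silicon-lifecycle-management decision loop.
# All names, thresholds, and voltages are invented for illustration.

NOMINAL_VDD = 0.75        # volts, assumed
MAX_VDD = 0.77            # assumed safe upper bound
MARGIN_WARN_PS = 20       # assumed timing-margin warning threshold (picoseconds)

def review_chip_health(path_margin_ps: float, current_vdd: float) -> tuple[float, str]:
    """Return (new_vdd, action) based on a monitored timing margin."""
    if path_margin_ps > MARGIN_WARN_PS:
        return current_vdd, "healthy: no action"
    if current_vdd < MAX_VDD:
        # Aging has eroded timing margin; nudge the supply up to compensate.
        return round(current_vdd + 0.01, 3), "margin low: raised supply voltage"
    return current_vdd, "margin low at max voltage: request replacement"

# Example: margins shrinking over the chip's lifetime.
vdd = NOMINAL_VDD
for margin in (60, 35, 18, 12, 6):
    vdd, action = review_chip_health(margin, vdd)
    print(f"margin={margin:3d} ps -> vdd={vdd:.3f} V, {action}")
```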

As it becomes more challenging to squeeze ever greater PPA from chips, engineers continue to find ways to advance semiconductor design. Angstrom scaling is one of those innovations, delivering the chips that fuel new generations of smart, connected electronics that are impacting our world in ways that, perhaps, many of us never expected.
