Chiplets are small dies that, when integrated into a single package, form a larger, multi-die design. By partitioning a large design into chiplets, designers gain the benefits of product modularity and flexibility. Separate dies, even those developed on different process nodes, can be assembled into a package to address different market segments or needs. Smaller dies are also easier to fabricate and offer better yields than a large, monolithic die.
As for chiplet packaging, there are a variety of options to support higher transistor density, including multi-chip module (MCM), 2.5D, and 3D technologies. The earliest type of system-in-package (SiP), available for a few decades now, the MCM brings together at least two ICs, connected via wire bonding, on a common substrate in a single package. A typical 2.5D design places a GPU and high-bandwidth memory (HBM) side by side on an interposer in a single package. Although the logic is not stacked, the HBM in some 2.5D designs consists of 3D-stacked memory, bringing 3D content into the 2.5D design. In a 3D package, heterogeneous dies are stacked vertically and connected with through-silicon vias (TSVs); this architecture enables very high memory access bandwidth.
An HPC design typically utilizes chiplets that come in various packaging types. MCMs are ideal for smaller, low-power designs. 2.5D designs are well suited to artificial intelligence (AI) workloads, as GPUs closely coupled with HBM deliver a powerful combination of compute power and memory capacity. 3D ICs, with their vertically stacked CPUs and fast memory access, are ideal for general HPC workloads.
Globally, data centers consumed about 200 TWh of electricity in 2019, or roughly 1% of global electricity demand. Even with a projected 60% increase in service demand, this usage is expected to remain almost flat through 2022, so long as hardware and data center infrastructure efficiencies continue to improve, according to a report by the International Energy Agency. Clearly, any reduction in power consumption at the chip level, particularly if it scales across the multi-die design, will be beneficial. To that end, the next frontier for HPC and data center applications could be optical ICs. Integrating optical ICs into the same packages as silicon offers substantial reductions in power along with increased bandwidth. While optical technology is only starting to find its way into data centers, providing another way to scale up, reduce power, and keep costs in check, it is already a proven method in the supercomputing world for connecting hundreds or even thousands of CPU nodes.
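As a rough sanity check on that 1% figure (the worldwide total used here is an assumption for illustration, not a number taken from the IEA report): if global electricity consumption in 2019 was on the order of 23,000 TWh, the data center share comes out to

$$\frac{200\ \text{TWh}}{\sim 23{,}000\ \text{TWh}} \approx 0.9\% \approx 1\%,$$

which is consistent with the figure cited above.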