The use of silicon-proven IP is a well-established practice in chip design. This timesaving, quality-enhancing approach to developing complex systems-on-chip (SoCs) is being adopted in greater volume and across a growing range of applications. Particularly in high-growth, dynamic market segments, IP-based design has proven to significantly reduce development time, deliver better quality of results, and free engineering resources to focus on their unique value-add and differentiation.
Nowhere is this truer than in the fast-paced high-performance computing (HPC) space. The scope of HPC has expanded over the past several years as chip developers find innovative, more efficient ways to pack additional horsepower into smaller, more energy-efficient chips and more interconnected silicon architectures. Once the domain of large-scale, high-end computing use cases, HPC-enabled applications now run the gamut of market sectors, from enterprise to consumer to automotive and even edge-based applications.
At the core of the HPC market’s growth is the massive and relentless increase in data consumption. While hyperscalers building large data centers to manage this surge in digital traffic are its most visible manifestation, the trend permeates all areas of our hyper-connected society. We see tremendous data traffic growth from online collaboration, smartphones and other IoT devices, video streaming, augmented and virtual reality (AR/VR) applications, and connected artificial intelligence (AI) devices.
Chip developers of all types—including the companies directly providing the data center resources—are driving new chip architectures for these data-intensive needs. The most obvious demands are in the traditional critical areas of compute, storage, and networking, which must scale to unprecedented levels. On top of that, data consumption is pushing the need for innovative approaches in other emerging areas. For example, the expansion of cloud services to the edge of the network requires new compute and storage models. The same goes for the broad deployment of AI to process and extract insights from extreme quantities of data, a trend similarly pushing the envelope of chip performance, capacity, and interconnect. In addition, as machine-to-machine communication, streaming video, AR and VR, and other applications generate increasing amounts of data, the entire cloud infrastructure must be rethought.
All of this is driving a new generation of approaches to simultaneously minimize data movement and maximize the speed at which data is transferred from one location to another, whether that data transfer is across long distances or from one chip to another within a server.
In all cases, SoC and system developers are looking to proven, scalable, and quick-to-integrate IP to deliver the key attributes they need to manage the processing, networking, storage, and security of data in state-of-the-art HPC applications. Performance is the underlying must-have, and designers building SoCs for HPC applications need a combination of high-performance, low-latency IP solutions to deliver the total system throughput that brings the benefits of HPC to many different application areas.
Let’s look at some of the key functions that IP can help enable in the world of HPC.