Growth in unstructured data, such as video, images, text, and voice, is driving new compute architectures, with a greater portion of data center workloads running on accelerators. CXL is known as the “breakthrough” CPU-to-device, cache-coherent interconnect for processors, memory expansion, and accelerators. Running across the standard PCI Express® (PCIe®) physical layer (PHY), CXL uses a flexible processor port that can auto-negotiate to either the standard PCIe transaction protocol or the alternate CXL transaction protocols, targeting extremely low latency for the new cache and memory transactions.
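To make the mode negotiation concrete, here is a minimal C sketch that models the outcome of flex-bus link training. The enum values and the query_negotiated_mode() function are hypothetical stand-ins, not a real driver or controller API; actual designs read this state from registers defined by the PCIe and CXL specifications.

```c
/* Hypothetical sketch: modeling the result of CXL flex-bus port negotiation.
 * The enum and query function are illustrative only, not a real API. */
#include <stdio.h>

typedef enum {
    LINK_MODE_PCIE,   /* port trained as a standard PCIe link               */
    LINK_MODE_CXL     /* port trained with the alternate CXL protocols      */
} link_mode_t;

/* Placeholder for a controller/firmware query; assumed for illustration. */
static link_mode_t query_negotiated_mode(void)
{
    return LINK_MODE_CXL;   /* pretend the link trained in CXL mode */
}

int main(void)
{
    /* After training, software can branch on the negotiated mode: CXL.io
     * traffic looks like PCIe, while CXL.cache/CXL.mem provide the
     * low-latency coherent cache and memory transactions. */
    if (query_negotiated_mode() == LINK_MODE_CXL)
        printf("Link negotiated CXL: cache/mem protocols available\n");
    else
        printf("Link negotiated standard PCIe\n");
    return 0;
}
```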
Largely used by designers of data center servers, supercomputers, and enterprise computing systems for applications like AI and machine learning, the protocol allows CPUs and accelerators to access each other’s memory. Its technical specifications are developed by the CXL Consortium, an open industry standards group. CXL 3.0 doubles the speed of its predecessor, providing data rates up to 64 GT/s (the same as PCIe 6.0) without any added latency compared to previous generations. According to the CXL Consortium, the newest specification also features:
- Advanced switching and fabric capabilities
- Efficient peer-to-peer communications
- Fine-grained resource sharing across multiple compute domains
These features translate into greater scalability and optimized system-level data flows. CXL 3.0 also maintains backward compatibility with preceding generations of CXL, and, notably, designers can run CXL 3.0 at lower link speeds while still taking advantage of many of the latest capabilities.
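As a rough illustration of the 64 GT/s headline rate noted above, the following back-of-the-envelope C sketch computes raw x16 link bandwidth. It ignores flit, FEC, and protocol overhead (so usable throughput is lower), and the x16 width is simply a common configuration assumed for the example.

```c
/* Back-of-the-envelope raw bandwidth at 64 GT/s; overheads are ignored. */
#include <stdio.h>

int main(void)
{
    const double gt_per_s_per_lane = 64.0;   /* 64 GT/s, 1 bit per transfer */
    const int lanes = 16;                    /* assumed x16 port width      */

    double raw_gbps = gt_per_s_per_lane * lanes;   /* gigabits per second   */
    double raw_gBps = raw_gbps / 8.0;              /* gigabytes per second  */

    printf("Raw x16 bandwidth: %.0f Gb/s (~%.0f GB/s) per direction\n",
           raw_gbps, raw_gBps);
    return 0;
}
```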
What makes CXL ideal for SoCs used in compute-intensive applications is its ability to maintain memory coherency between the CPU memory space and memory on attached devices. Memory coherency paves the way for higher performance through resource sharing, a less complex software stack, and lower overall system cost. Designers can then focus on their application’s target workloads rather than on redundant memory-management hardware in their accelerators.
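The following conceptual C sketch shows the structural benefit of coherency: one buffer visible to both CPU and accelerator, with no explicit staging copies. The helpers cxl_map_shared() and accel_launch() are hypothetical stand-ins (stubbed here so the example compiles), not a real CXL driver API.

```c
/* Conceptual only: hypothetical helpers illustrate in-place sharing. */
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical: memory that both CPU and accelerator access coherently.
 * Stubbed with plain malloc purely for illustration. */
static void *cxl_map_shared(size_t bytes) { return malloc(bytes); }

/* Hypothetical: hand the buffer to an accelerator that works in place. */
static void accel_launch(uint8_t *buf, size_t n) { (void)buf; (void)n; }

int main(void)
{
    size_t n = 4096;
    uint8_t *buf = cxl_map_shared(n);   /* one allocation, shared view     */
    if (!buf)
        return 1;

    for (size_t i = 0; i < n; i++)      /* CPU produces data in place      */
        buf[i] = (uint8_t)i;

    accel_launch(buf, n);               /* accelerator consumes the same   */
                                        /* memory; hardware keeps caches   */
                                        /* coherent, no memcpy/DMA staging */
    free(buf);
    return 0;
}
```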
The current trend toward data center disaggregation is well-served by CXL 3.0. In a disaggregated architecture, resources such as storage, compute, memory, and networking are separated into homogeneous pools connected via optical interconnects. This approach leads to better platform flexibility, higher density, and better resource utilization, with data center designers able to tap into resources based on the needs of particular workloads. CXL 3.0 treats resources as interchangeable, allowing the more flexible provisioning and management of resources that disaggregated data centers require.
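As a purely conceptual sketch of pooled provisioning, the toy allocator below grants and returns capacity from a shared memory pool on a per-workload basis. The data structure and functions are illustrative assumptions only; in practice, CXL 3.0 fabric resources are managed by a fabric manager rather than application code.

```c
/* Toy model of per-workload provisioning from a shared memory pool. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned total_gb;      /* pool capacity                     */
    unsigned allocated_gb;  /* capacity currently in use         */
} mem_pool_t;

/* Grant capacity to a workload if the pool can cover the request. */
static bool pool_provision(mem_pool_t *pool, unsigned gb)
{
    if (pool->allocated_gb + gb > pool->total_gb)
        return false;
    pool->allocated_gb += gb;
    return true;
}

/* Return capacity to the pool when the workload finishes. */
static void pool_release(mem_pool_t *pool, unsigned gb)
{
    pool->allocated_gb -= gb;
}

int main(void)
{
    mem_pool_t pool = { .total_gb = 1024, .allocated_gb = 0 };

    if (pool_provision(&pool, 256))     /* workload A borrows 256 GB */
        printf("A: %u/%u GB in use\n", pool.allocated_gb, pool.total_gb);

    pool_release(&pool, 256);           /* capacity returns to the pool */
    return 0;
}
```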