Understanding UALink Architecture: A Protocol Deep Dive

Date: Apr 21, 2026 | 9:00 AM PST

As AI workloads scale into the thousands of accelerators and hundreds of terabytes of distributed memory, traditional interconnects cannot deliver the deterministic latency, bandwidth efficiency, or memory-semantic operations required for modern training clusters. UALink provides a purpose-built accelerator fabric leveraging 224G SerDes, fixed 64-byte flits, compressed transaction formats, and high-efficiency TL/DLL aggregation to achieve predictable, low-overhead load/store communication across large GPU pools. With multi-virtual-channel flow control, source-ordered routing, and integrated AES-GCM encryption via UALinkSec, the architecture is engineered for high-performance, secure AI fabrics.

This session will break down how UALink enables scalable memory pooling, reduces communication overhead, and supports pod- and rack-scale GPU integration. We will examine the behavior of the UPLI, Transaction Layer, and Data Link Layer and discuss silicon-level implementation considerations for accelerators and switches.

What you’ll learn:

  • How UALink implements low-latency, memory-semantic GPU-to-GPU communication
  • Internal structure of 64-byte flits, compressed request/response formats, and TL/DLL packing
  • How multi-VC flow control, link-level retry, and RS-FEC ensure deterministic, lossless throughput
  • The role of UALinkSec in enforcing end-to-end AES-GCM encryption and authentication
  • How UALink enables scalable memory pooling and GPU clustering across pods and racks
  • System-level design considerations for integrating UALink controllers, PHYs, and switches

Register Now

Featured Speaker

Diwakar Kumaraswamy
Sr. Staff Technical Product Manager

With over 15 years of experience in Application Engineering and SoC design, Diwakar has built a career spanning FPGA development, global IP support, and technical leadership. Beginning at CoreEL Technologies with Xilinx FPGA implementations and corporate training, he went on to lead customer success for PCIe, CXL, AMBA, and other interface IP at Synopsys, followed by NoC architecture work at Intel. Now a Technical Product Manager at Synopsys, he drives high-speed interconnect solutions—such as Ethernet, PCIe, and UALink—for next-generation AI infrastructure.