AI Accelerator IP for Cloud Computing

AI accelerators process tremendous amounts of data for deep learning workloads, including training and inference, which require large memory capacity, high bandwidth, and cache coherency across the system. AI accelerator SoC designs must meet myriad requirements: high performance, low power, cache coherency, integrated high-bandwidth interfaces that scale to many cores, heterogeneous processing hardware accelerators, Reliability-Availability-Serviceability (RAS), and massively parallel deep learning neural network processing. Synopsys offers a portfolio of DesignWare IP in advanced FinFET processes that addresses the specialized processing, acceleration, and memory performance requirements of AI accelerators. For more information, visit the DesignWare IP for AI web page.

Block diagrams: Core AI Accelerator and Edge AI Accelerator


  • DDR memory interface controllers and PHYs supporting data rates up to 6400 Mbps allow main memory to be shared among compute offload engines plus network and storage I/O resources
  • HBM2/2E IP enables high memory throughput, operating at up to 3.6 Gb/s with minimal power consumption
  • Complete USB IP solution reduces both engineering effort and silicon area
  • CCIX IP solution supports data transfer speeds up to 32 Gbps and cache coherency for faster data access
  • High-bandwidth, extremely low-latency Compute Express Link (CXL) IP solution supporting the CXL 1.0, 1.1, and 2.0 specifications, all three CXL protocols (CXL.io, CXL.cache, and CXL.mem), and all device types
  • HBI and USR/XSR IP solutions for reliable die-to-die connectivity leverage high-speed SerDes PHY technology at up to 112G per lane and wide-parallel bus technology enabling 4 Gbps per pin
  • 56G and 112G Ethernet PHYs and Ethernet controllers for up to 800G hyperscaler data center SoCs
  • High-performance, low-latency PCI Express controllers and PHYs supporting data rates up to 64 GT/s to enable real-time data connectivity
  • Highly integrated, standards-based security IP solutions enable the most efficient silicon design and highest levels of security
  • Low latency embedded memories with standard and ultra-low leakage libraries provide a power- and performance-efficient foundation for SoCs