Cloud Computing AI Accelerators

AI accelerators process tremendous amounts of data for deep learning workloads, including training and inference, which require large memory capacity, high bandwidth, and cache coherency within the overall system. AI accelerator system-on-chip (SoC) designs must meet myriad requirements: high performance; low power; cache coherency; high-bandwidth interfaces that scale to many cores; heterogeneous processing hardware accelerators; Reliability, Availability, and Serviceability (RAS); and massively parallel deep learning neural network processing. Synopsys offers a portfolio of DesignWare IP in advanced FinFET processes that addresses the specialized processing, acceleration, and memory performance requirements of AI accelerators. For more information, visit the DesignWare IP for AI web page.
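
To make the bandwidth requirement concrete, the sketch below runs some back-of-the-envelope roofline arithmetic for a dense layer. The matrix sizes, FP16 datatype, and 100 TFLOPS compute target are illustrative assumptions for this sketch, not Synopsys figures.

```python
# Back-of-the-envelope roofline arithmetic for a dense layer, illustrating
# why AI accelerators need high memory bandwidth. All sizes and the
# 100 TFLOPS target below are illustrative assumptions.

BYTES_FP16 = 2

def gemm_intensity(m, n, k, bytes_per_elem=BYTES_FP16):
    """FLOPs per byte of memory traffic for C[m,n] = A[m,k] @ B[k,n],
    assuming each operand is read or written from memory exactly once."""
    flops = 2 * m * n * k  # one multiply + one add per MAC
    traffic = bytes_per_elem * (m * k + k * n + m * n)
    return flops / traffic

TARGET_FLOPS = 100e12  # hypothetical 100 TFLOPS compute target

for label, (m, n, k) in {
    "training GEMM (batch 4096)": (4096, 4096, 4096),
    "inference GEMV (batch 1)":   (1, 4096, 4096),
}.items():
    ai = gemm_intensity(m, n, k)
    bw = TARGET_FLOPS / ai  # bandwidth needed to keep compute units busy
    print(f"{label}: {ai:7.1f} FLOP/byte -> needs {bw / 1e9:9.1f} GB/s")
```

Under these assumptions, the batch-1 inference case needs roughly three orders of magnitude more bandwidth per unit of compute than the large training GEMM, which is why memory and interface IP are as critical to an AI accelerator SoC as the compute fabric itself.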

DesignWare IP for AI Accelerators

Highlights: