Designing chips with artificial intelligence? Need to learn about the latest techniques to optimize memory, security, bandwidth, and more? In this AI SoC Chats video series, you'll learn about the latest IP and technologies for AI chips used everywhere from data centers to the edge. Join your host Ron Lowman for in-depth conversations with Synopsys experts.
Will your next system require high-performance AI? Learn what the latest systems use for computation, including AI math formats, floating-point and dot-product hardware, and processor IP.
AI chipsets are data hungry and have high compute intensity, leading to potential power consumption issues. Ron talks with Synopsys Fellow Jamil Kawa to learn how in-memory or near-memory compute, 3D stacking, and other innovations can address the challenges of making chips think like the human brain.
As AI chips are packed with SRAM and processing, and AI algorithms grow more complex, designers are turning to die-to-die techniques to improve yields, latency, and power. Join Synopsys Interface IP expert Manmeet Walia to understand the trends in scaling AI SoCs and systems with die-to-die interfaces.
Understand the threat profiles and security trends for AI SoC applications, including how laws and regulations are changing to protect users' private information and data. Secure boot, secure debug, and secure communication for neural network engines are critical. Join Ron and Dana Neustadter as they discuss how DesignWare Security IP and tRoot with Hardware Root of Trust can help designers create a secure enclave on the SoC and update software remotely.
Primitive math is anything but primitive when it comes to AI. Join Ron and John Swanson as they discuss the new requirements for custom primitive math functions in AI chipset development, what designers are doing with bfloat16 and dot products, and how to tune your design for the requirements of today and the future.
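To see why bfloat16 is attractive for AI math, here is a minimal Python sketch (not Synopsys IP, and hypothetical helper names) that emulates bfloat16 by truncating a float32 to its top 16 bits, then computes a dot product with full-precision accumulation, mirroring the multiply-accumulate style common in AI hardware. Note that real hardware typically uses round-to-nearest rather than the simple truncation shown here.

```python
import struct

def to_bfloat16(x: float) -> float:
    """Emulate bfloat16 by keeping only the top 16 bits of a float32
    (same exponent range as float32, but just 7 mantissa bits)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def bf16_dot(a, b):
    """Dot product with bfloat16 inputs and higher-precision
    accumulation, the usual split in AI MAC units."""
    return sum(to_bfloat16(x) * to_bfloat16(y) for x, y in zip(a, b))

a = [0.1, 0.2, 0.3]
b = [1.0, 2.0, 3.0]
print(bf16_dot(a, b))  # near 1.4, but off by a small quantization error
```

The result lands close to the exact 1.4 but not on it, which is the trade-off designers tune for: bfloat16 halves storage and bandwidth versus float32 while keeping its dynamic range, at the cost of mantissa precision.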
When building AI SoCs, how do you choose the optimal memory interface? In this conversation, Graham Allen and Ron chat about the market trends and challenges for DDR, LPDDR, HBM, and GDDR, and how Synopsys DesignWare IP can help designers get their projects to market faster and with the ideal memory for their SoC.
To support host-to-AI accelerator connectivity, AI chipsets can use PCI Express, CCIX, and/or CXL, and each has its benefits. Ron talks with Gary Ruggles about how designers can find the right interconnect for their AI SoC designs.
Want to get in touch with any of the experts you’ve met in these videos? Drop us a line at email@example.com and we’ll connect you!