How AI Architecture IP Adoption is Migrating to Edge AI

AI hardware is rapidly evolving, with innovations from data center computing now powering Edge AI. This webinar explores how semiconductor IP advances are accelerating performance, overcoming system bottlenecks, and enabling new optimizations.

Key Takeaways:

  • Uncover the biggest hardware challenges in Edge AI and how leading companies are overcoming limitations around computing, memory, and power to enable advanced AI at the edge.
  • Learn about the latest IP innovations: from memory optimization to interface advancements that are migrating from data center AI to edge applications, unlocking new opportunities for real-time performance.
  • Explore practical techniques to optimize AI models for Edge Computing, such as quantization, pruning, and knowledge distillation, and understand their impact on accuracy and efficiency.
  • See how hardware-software co-optimization and disruptive technologies are shaping the future of Edge AI, driving rapid improvement in processor performance, system scalability, and application responsiveness.
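Of the model-optimization techniques listed above, quantization is the most common first step for edge deployment. As a rough illustration (not taken from the webinar), the sketch below shows symmetric post-training int8 quantization in pure Python: weights are mapped onto integer codes in [-127, 127] plus a single float scale, cutting storage 4x at the cost of a bounded rounding error.

```python
def quantize_int8(weights):
    """Symmetric post-training quantization: float weights -> int8 codes + scale.

    The largest-magnitude weight maps to +/-127; every other weight is
    rounded to the nearest integer step of size `scale`.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [v * scale for v in q]


weights = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered weight differs from the original by at most one
# quantization step (`scale`), which is the accuracy/efficiency
# trade-off the session's third takeaway refers to.
```

Pruning and knowledge distillation trade accuracy for efficiency along different axes (removing weights entirely, or training a smaller student model against a larger teacher), and the webinar discusses how all three interact with edge hardware constraints.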


*This presentation was part of the EE Times AI Everywhere 2025 Conference.

Watch On-Demand

Featured Speaker

Ron Lowman
Principal Product Manager, Synopsys
Ron Lowman joined Synopsys in 2014 and is currently a Principal Product Manager for the Synopsys IP Group. He holds a Bachelor of Science in Electrical Engineering from the Colorado School of Mines and an MBA from The University of Texas at Austin.