Why Attend?

Connect with hundreds of product and application developers, business leaders, investors, and customers, all focused on embedded vision, computer vision, and physical AI.

Join Synopsys at the Embedded Vision Summit to see how the latest AI enhancements of the ARC® NPX6 NPU IP portfolio can accelerate your next SoC design. You’ll learn how the latest NPX6 NPU IP enables physical AI implementation and supports the latest generative AI models, how ASIP Designer helps SoC developers build custom NPUs for AI, and the latest trends in perceptive AI and generative AI.

With Synopsys, you can address the demands of real-time compute and bring visual intelligence into embedded systems for applications including automotive, industrial, and consumer.

Who Should Attend?

If you are an SoC designer interested in embedded vision, a chip architect for physical AI including safety-critical radar/vision applications, or a software developer focused on AI inference, you'll find something to enhance your next project at the Embedded Vision Summit.

Synopsys at 2026 Embedded Vision Summit

Reach out to our sales team to book a meeting and discover how we can support your needs! 

Synopsys Booth #616

Join us to meet Synopsys technical experts and explore the latest demos

Monday, May 11, 2026  |  12:30 pm - 7:30 pm
Tuesday, May 12, 2026  |  11:00 am - 5:00 pm

Custom Vision Accelerator Built with ASIP Designer in Action
Built with ASIP Designer, NVIDIA Jetson PVA delivers proven, scalable real‑time 3D vision processing.

Qwen 2.5‑VL on NPX6 NPU Silicon  
Real‑time vision‑language intelligence with Qwen 2.5‑VL, accelerated on the NPX6 NPU for efficient on‑device AI.

Visionary.ai Single Network ISP Optimized for NPX6
Efficient, High-Throughput AI-ISP on Synopsys NPX6. Discover a new standard in silicon efficiency. See how Visionary.ai and Synopsys are evolving the ISP for the AI era by delivering a high-throughput solution on the ARC® NPX6 NPU that extracts peak performance from any CMOS sensor to empower the next generation of machine vision.

Synopsys Speaking Sessions

Don't miss these insightful sessions:

From Generative AI to Physical AI: Enabling VLM/VLA Multimodal Performance on Enhanced NPX6 NPU IP  
Gordon Cooper, Principal Product Manager 
Date/Time: Monday, May 11, 1:30 pm

In this talk, we’ll explore the evolution and implementation of multimodal generative AI at the edge targeting physical AI applications, including autonomous vehicles and humanoid robots. We’ll outline the challenges and trade-offs for implementing multimodal generative AI on resource-constrained systems and discuss the performance and bandwidth requirements NPUs must meet to support them. Finally, we’ll demonstrate how these advanced multimodal and action-capable models are efficiently accelerated on the Synopsys ARC enhanced NPX6 NPU IP, enabling the next wave of intelligent, adaptive edge devices.

Understanding Transformers: From LLMs to Context-Aware Multimodal Models 
Tom Michiels, System Architect 
Date/Time: Monday, May 11, 4:15 pm

Transformers have become the foundation of modern AI, reshaping how products are built and how businesses operate. In this talk we will explain why transformers replaced earlier models, what makes them highly scalable and how they expanded from language systems to multimodal models that reason across text, images, audio and more. We’ll introduce core concepts like attention, embeddings and tokenization, examining how these models learn and generalize. We’ll trace the evolution from GPT‑style language models to vision transformers and multimodal systems that enable capabilities such as in‑context learning. We’ll explore practical considerations, including latency, memory, cost, KV caching and quantization. And we’ll highlight trends like long‑context models, on‑device AI and mixture‑of‑experts. Attendees will gain a practical understanding of how transformers work and how to apply them in product decisions.