Embedded Vision Summit 2020

September 15 - 25, 2020
Virtual Experience 

Why Attend?

Connect with hundreds of product and application developers, business leaders, investors and customers—all focused on embedded vision. See the latest in practical technology to bring visual intelligence into cloud applications, embedded systems, mobile apps, wearables and PCs. Hear inspiring case studies from leading innovators. See live demos of the latest enabling technologies. Dig deep into the practical applications and techniques of computer vision.

Synopsys is happy to extend a 10% Summit discount to SoC designers and partners with the code MEMPARTNER20-V.

Embedded Vision Summit Online Exhibit

Synopsys Virtual Booth

Meet with Synopsys technical experts and discuss our latest technologies during our online booth hours:

• Tuesday, Sept 15: 10:30 am - 1:00 pm Pacific

• Tuesday, Sept 15: 6:00 - 8:00 pm Pacific (Special hours for Asia)

• Thursday, Sept 17: 7:00 - 9:00 am Pacific (Special hours for Europe)

• Thursday, Sept 17: 10:30 am - 1:00 pm Pacific

Demos

Are you developing smart SoCs for gesture recognition, facial recognition, ADAS, or other AI & vision applications? Visit our virtual booth to see video demos on SLAM acceleration, deep neural networks for automotive applications, and machine learning inference. The demos use Synopsys DesignWare ARC Processors to deliver high power/performance efficiency and accurate results. After viewing the demos, meet with Synopsys engineers who are developing the cutting-edge technology used in in-cabin automotive systems, ASIL D lane and object detection, secure facial recognition, and more.

Synopsys Seminar: Beyond 2020 - Vision SoCs for the Edge

September 16 & 18, Online

Day One: Wednesday, September 16

Keynote

Title: Enabling Deep Neural Networks at the Extreme Edge: Co-optimization Across Circuits, Architectures, and Algorithmic Scheduling

Time: 9:00 a.m. - 9:45 a.m. PT

Speaker: Marian Verhelst, Associate Professor at KU Leuven and Scientific Director at imec 

Description: Deep neural network inference is computationally demanding, which until recently made it feasible only on power-hungry server or GPU platforms. The recent trend toward embedded neural network processing on edge and extreme-edge devices requires thorough cross-layer optimization. The keynote will discuss how to exploit and jointly optimize NPU/TPU processor architectures, dataflow schedulers and quantized neural network models for minimum latency and maximum energy efficiency.
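As a rough illustration of the model-side piece of that co-optimization, the sketch below quantizes a weight tensor to int8 in NumPy. The function names, tensor sizes and scaling scheme are assumptions made for this example; the code is not taken from the keynote.

# Minimal sketch: symmetric per-tensor int8 quantization of a weight matrix,
# the kind of model-side optimization paired with hardware and scheduling
# choices. All names and values here are illustrative only.
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 plus a scale factor."""
    scale = np.max(np.abs(w)) / 127.0          # largest magnitude maps to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)  # toy weight tensor
q, scale = quantize_int8(w)
print("max abs quantization error:", np.max(np.abs(w - dequantize(q, scale))))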

Title: Embedded Processor IP to Address a Wide Range of AI & Vision SoC Designs

Time: 9:45 a.m. - 10:15 a.m. PT

Speaker: Gordon Cooper, ARC EV Product Marketing Manager, Synopsys

Description: Vision and AI can be addressed with a range of embedded processors, depending on the application and its specific requirements. From low-power IoT to high-performance automotive applications, processors are integral to power-efficient SoCs. This presentation will give an overview of the Synopsys DesignWare ARC Embedded Processor IP families and their use cases, including artificial intelligence and embedded vision applications at the edge. From the EM family for machine learning, to the VPX family for radar and LiDAR, to the EV family for high-end vision systems, plus functional safety processors across the range of families, Synopsys offers the embedded processors you need for the most efficient SoC possible.

Title: Addressing EV & AI Implementation Challenges in Edge Applications

Time: 10:45 a.m. - 11:15 a.m. PT

Speaker: Pierre Paulin, R&D Director, Synopsys

Description: In this presentation, we will discuss how recent embedded vision and AI trends impact the implementation of hardware and software for high-performance, low-power solutions for applications at the edge. We will describe the latest updates to the EV7x processor that address these trends, including how the architecture can provide a complete vision solution combining classical vision algorithms with AI-based approaches. We will introduce our scalable vision DSP and the standards-based programming environment, which supports the well-known OpenCV, OpenVX and OpenCL C standards for classical vision and DSP processing. We will then present our scalable DNN engine that supports state-of-the-art CNN graphs. We will describe techniques used for efficient scaling of CNN graph performance across multiple DNN accelerators, with a particular focus on bandwidth reduction technologies, including data compression, layer merging and efficient data sharing across multiple accelerators. These optimizations are supported by our CNN mapping tools, which take TensorFlow, ONNX and Caffe descriptions and map them onto the parallel architecture.
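To make the bandwidth argument concrete, here is a back-of-the-envelope sketch of how much external-memory traffic is avoided when two consecutive layers are merged so the intermediate feature map never leaves on-chip memory. The layer dimensions and data type are made-up assumptions, not figures from the talk.

# Illustrative arithmetic only: DRAM traffic for an intermediate feature map,
# with and without merging the two layers that produce and consume it.
height, width, channels = 224, 224, 64          # hypothetical intermediate map
bytes_per_value = 1                             # int8 activations
intermediate = height * width * channels * bytes_per_value

unmerged_traffic = 2 * intermediate             # write after layer 1, read by layer 2
merged_traffic = 0                              # stays on chip when layers are merged
print(f"intermediate map: {intermediate / 1e6:.1f} MB per frame")
print(f"DRAM traffic saved by merging: {unmerged_traffic / 1e6:.1f} MB per frame")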

Title: Estimating Power Early & Accurately for Smart Vision SoCs

Time: 11:15 a.m. - 12:00 p.m. PT

Speaker: Derya Eker, R&D Manager, Synopsys

Description: Today’s high-end SoCs need to handle increasingly compute-intensive workloads but must carefully balance power-to-performance tradeoffs. The demand for wide deployment of artificial intelligence (AI) and deep learning is surging. Face recognition is paramount in mobile phones and is extending to smart wearables. Identifying objects and surroundings in augmented- and virtual-reality headsets pushes the envelope further. Self-driving cars apply deep learning to interpret, predict and respond to data coming from their surroundings for safer, smarter autonomous driving.

To optimize for both power and performance, hardware becomes more tightly intertwined with software. This presentation will describe the key architectural choices that designers must consider throughout the development process, such as IP vendor selection in the early phases of product development, hardware/software workload partitioning, and how and when to estimate power tradeoffs for the most accurate results.

Day Two: Friday, September 18

Title: Sensor Fusion for Autonomous Vehicles: Strategies, Methods, and Tradeoffs

Time: 9:00 a.m. - 9:45 a.m. PT

Speaker: Robert Laganière, Professor, School of Electrical Engineering and Computer Science, University of Ottawa

Description: To operate safely, an autonomous vehicle (AV) needs accurate environmental perception. To this end, AVs are equipped with a multitude of sensors that capture the surrounding environment and a perception system that transforms this incoming stream of data into semantic information identifying the road agents, the drivable space, the traffic infrastructure, etc. However, the intrinsic limitations of each sensor affect the performance of the perception task. One way to overcome this issue and increase overall performance is to combine the information coming from different sensor modalities. This is the objective of sensor fusion. Using this technique, the perception system can i) increase its accuracy by using complementary information provided by the different sensors and ii) operate better under challenging environmental conditions by relying on the sensor data that is least impacted by the current situation (e.g., poor lighting, adverse weather). In this talk, we will review the advantages and disadvantages of the different sensors used in intelligent vehicles. We will present the main sensor fusion strategies that can be used to combine heterogeneous sensor data. In particular, we will discuss the three main fusion methods that can be applied in a perception system, namely early fusion, late fusion and mid-level fusion.
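As a rough illustration of late fusion, the sketch below merges 2D detections from two hypothetical sensor pipelines by spatial overlap and averages their confidences. The box format, threshold and matching rule are assumptions made for this example, not the strategies presented in the talk.

# Minimal late-fusion sketch: each sensor pipeline produces its own detections,
# and the fusion step merges detections that refer to the same object.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def late_fusion(camera_dets, lidar_dets, iou_thresh=0.5):
    """Average confidences of overlapping detections; keep unmatched ones."""
    fused, used = [], set()
    for box_c, conf_c in camera_dets:
        best, best_iou = None, iou_thresh
        for j, (box_l, conf_l) in enumerate(lidar_dets):
            if j not in used and iou(box_c, box_l) >= best_iou:
                best, best_iou = j, iou(box_c, box_l)
        if best is not None:
            used.add(best)
            fused.append((box_c, 0.5 * (conf_c + lidar_dets[best][1])))
        else:
            fused.append((box_c, conf_c))           # camera-only detection
    fused += [d for j, d in enumerate(lidar_dets) if j not in used]
    return fused

camera = [((10, 10, 50, 50), 0.80)]
lidar = [((12, 11, 52, 49), 0.90), ((100, 100, 140, 140), 0.70)]
print(late_fusion(camera, lidar))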

Title: Neural Networks for Radar & LiDAR 

Time: 10:00 a.m. - 10:30 a.m. PT

Speaker: Tom Michiels, ARC EV System Architect, Synopsys

Description: Over the past 8 years, neural networks have been very successful in tackling object detection in vision. Now, neural networks are being applied to radar and LiDAR, taking input from 3D sensors and using it for object detection and segmentation. Future graphs are expected to combine radar and LiDAR inputs; what does this mean for execution hardware? This presentation will describe the state-of-the-art neural networks for radar and LiDAR, how these two modalities are similar and where they differ in these applications, and the compute, bandwidth, and architectural requirements for SoCs that incorporate 3D neural networks.
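For a simplified picture of how 3D sensor data reaches a 2D detection network, the sketch below projects a LiDAR point cloud into a bird's-eye-view occupancy grid with NumPy. The grid extents, cell size and random point cloud are arbitrary assumptions, not material from the talk.

# Illustrative sketch: rasterizing a point cloud into a BEV occupancy grid,
# one common way to feed 3D sensor data to a 2D convolutional network.
import numpy as np

def bev_occupancy(points, x_range=(0.0, 60.0), y_range=(-30.0, 30.0), cell=0.5):
    """points: (N, 3) array of x, y, z in metres; returns a 2D occupancy grid."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[keep], iy[keep]] = 1.0          # mark cells that contain points
    return grid

cloud = np.random.uniform([0, -30, -2], [60, 30, 1], size=(10000, 3))
print(bev_occupancy(cloud).sum(), "occupied cells")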

Title: How to Get What You Want (and Need) with an Advanced Neural Network Compiler

Time: 10:30 a.m. - 11:00 a.m. PT

Speaker: Essaid Bensoudane, Software Engineering Manager, Synopsys

Description: Deep learning has revolutionized pattern recognition by introducing new neural network architectures for computer vision, natural language processing, and automatic speech recognition. The computational complexity and performance requirements of these networks call for specialized hardware and advanced software tools. The Synopsys DesignWare ARC EV processors deliver inference acceleration with a heterogeneous architecture and are supported by an advanced neural network compiler, delivered as part of the MetaWare EV Development Toolkit. The neural network compiler optimizes networks by automatically tiling, partitioning and applying advanced pattern matching that merges operators to improve latency, throughput, bandwidth, memory usage and power efficiency. In addition to these optimizations, the compiler offers automatic quantization, efficient use of pruned networks and feature map compression to further reduce bandwidth. In this presentation, we will present the DesignWare ARC Neural Network Compiler workflow and elaborate on the compilation passes that enable efficient use of Synopsys DesignWare ARC EV processors.
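One classic example of operator merging is folding a batch-normalization layer into the preceding convolution. The NumPy sketch below shows that fold in isolation; it is illustrative only, with made-up shapes, and does not represent the MetaWare EV compiler's implementation.

# Illustrative sketch: fold batch-norm parameters into conv weights and bias so
# the two operators become one, cutting intermediate memory traffic at inference.
import numpy as np

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """w: (out_ch, in_ch, kh, kw) conv weights, b: (out_ch,) conv bias."""
    scale = gamma / np.sqrt(var + eps)                  # per-output-channel scale
    w_folded = w * scale[:, None, None, None]           # scale each output filter
    b_folded = (b - mean) * scale + beta                # adjust the bias
    return w_folded, b_folded

out_ch, in_ch = 8, 3
w = np.random.randn(out_ch, in_ch, 3, 3).astype(np.float32)
b = np.zeros(out_ch, dtype=np.float32)
gamma, beta = np.ones(out_ch), np.zeros(out_ch)
mean, var = np.random.randn(out_ch), np.abs(np.random.randn(out_ch)) + 0.1
w_f, b_f = fold_batchnorm(w, b, gamma, beta, mean, var)
print(w_f.shape, b_f.shape)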

Title: Accelerating Intelligent SLAM Applications with the Synopsys ARC EV Processor and KudanSLAM

Time: 11:00 a.m. - 11:30 a.m. PT

Speaker: Liliya Tazieva, R&D Engineer, Synopsys

Description: Simultaneous localization and mapping (SLAM) is used for building and updating a map of an unknown environment while keeping track of the viewpoint's location within it. Applications for SLAM include autonomous driving, augmented/virtual reality and robotics. When executed in parallel with a convolutional neural network (CNN) engine, SLAM applications can become more ‘intelligent,’ able to identify objects in the environment and make decisions based on what appears. Multiple open-source (such as ORB-SLAM2) and commercial SLAM algorithms can simplify the implementation of SLAM functionality. Synopsys has collaborated with Kudan to accelerate the KudanSLAM product on Synopsys DesignWare ARC EV Processors. In this talk, we will describe how to offload a significant portion of the SLAM algorithm to the EV Processor to boost the algorithm's performance, while simultaneously running an object detection graph on the EV's CNN engine.
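For readers new to SLAM front-ends, the short OpenCV sketch below extracts and matches ORB features between two frames, the kind of classical-vision stage that runs before pose estimation in pipelines such as ORB-SLAM2. It uses synthetic images as stand-ins for camera frames and is unrelated to the KudanSLAM internals.

# Minimal SLAM front-end sketch: ORB keypoint detection and brute-force
# Hamming matching between two consecutive frames.
import cv2
import numpy as np

# Two synthetic grayscale "frames"; the second is a shifted copy of the first.
frame0 = (np.random.rand(240, 320) * 255).astype(np.uint8)
frame1 = np.roll(frame0, shift=5, axis=1)

orb = cv2.ORB_create(nfeatures=500)
kp0, des0 = orb.detectAndCompute(frame0, None)
kp1, des1 = orb.detectAndCompute(frame1, None)

# Match binary ORB descriptors; cross-checking keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = []
if des0 is not None and des1 is not None:
    matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)
print(len(matches), "matches between frames")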

Title: Safe & Secure SoC Architectures for Autonomous Vehicles

Time: 11:30 a.m. - 12:00 p.m. PT

Speaker: Fergus Casey, R&D Director, Synopsys

Description: Let's face it: people are bad drivers. The driver is the biggest uncertainty factor in cars, and computer vision is helping to eliminate human error and make the roads safer. Autonomous vehicles are expected to save almost 300K lives each decade in the United States, but after 4-5 decades of autonomous-car proofs of concept and years of development, driverless cars still seem a long way off. This presentation will describe the challenges that SoC designers and OEMs face when developing self-driving vehicles, from understanding how a pedestrian looks to software and silicon, to understanding an entire scene. It will then describe the key milestones that the industry, and each chip design, must reach on the road to autonomous driving, and how to know when you've reached them.