Embedded Vision Summit 2019

May 20-23, 2019

Santa Clara Convention Center

Save 15% on conference registration with code MEMPARTNER19

Synopsys Embedded Vision Seminar: Navigating Intelligent Vision at the Edge

Thursday, May 23, 2019

Join our full-day seminar to learn about the latest trends in artificial intelligence and computer vision, and how to use the latest embedded vision technologies to navigate your way from concept to successful silicon. This seminar provides a deep dive into deep learning, embedded vision, and standards-based programming for automotive, mobile, surveillance, and consumer applications. Discussions include the latest techniques for balancing performance, power, area and bandwidth for designs ranging from 1 TOPS to 100 TOPS, the need for security in AI designs, the trade-offs between traditional computer vision techniques and deep learning, and detailed case studies on the implementation of computer vision in an embedded environment. Our post-seminar reception provides an opportunity to discuss your specific questions with key members of our R&D staff. 

There is a $25 registration fee for the workshop, which is open to current and potential Synopsys customers. Although registration is handled through the same system, you do not need to register for the Embedded Vision Summit to attend the Synopsys seminar.

Workshop Agenda

Embedded Vision Summit Presentations

May 21-22, 2019

Making Cars that See - Failure is Not an Option (Business Insights Track)
Dr. Burkhard Huhnke, Vice President of Automotive Strategy

Drivers are the biggest uncertainty factor in cars, and computer vision is helping to eliminate human error and make the roads safer. Autonomous vehicles are expected to save almost 300K lives each decade in the United States, but after 13 years of development, the question is still, “Where’s my driverless car?” Development has been slower than expected in three key areas: 1) robust designs with very low failure rates have proven harder to achieve than expected, 2) the technology is more expensive than expected, and the business case does not yet support the costs, and 3) manufacturing has not scaled up to mass-produce self-driving cars. This presentation will review why these areas are taking longer than expected, cover the vision processing performance requirements that have proven challenging, and explain what innovative semiconductor suppliers need to deliver to fix the supply chain.

5+ Techniques for More Efficient Implementations of Neural Networks (Fundamentals Track)
Dr. Bert Moons, Hardware Design Architect, Embedded Vision & AI Processors

Embedding real-time, large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory and bandwidth requirements. System architects can mitigate these demands by modifying deep neural networks (DNNs) to make them more energy-efficient and less demanding of embedded processing hardware. In this talk we’ll provide an introduction to today’s established techniques for efficient implementation of DNNs: advanced quantization, network decomposition, weight pruning and sharing, and sparsity-based compression. We’ll also preview up-and-coming techniques such as trained quantization and correlation-based compression.
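
As a concrete flavor of the first technique on that list, the following is a minimal sketch in plain C of symmetric post-training quantization of a weight array to 8-bit integers. It is an illustrative toy, not code from the talk; production quantizers also handle per-channel scales, activation ranges and calibration data.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Symmetric post-training quantization: map the largest-magnitude
     * weight onto the int8 range [-127, 127] and round the rest. */
    static float quantize_int8(const float *w, int8_t *q, size_t n)
    {
        float max_abs = 0.0f;
        for (size_t i = 0; i < n; i++) {
            float a = fabsf(w[i]);
            if (a > max_abs) max_abs = a;
        }
        /* real value ~= q * scale; guard against an all-zero tensor */
        float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
        for (size_t i = 0; i < n; i++)
            q[i] = (int8_t)lrintf(w[i] / scale);
        return scale; /* kept so results can be dequantized later */
    }

    int main(void)
    {
        float w[] = { 0.81f, -0.23f, 0.05f, -1.40f };
        int8_t q[4];
        float scale = quantize_int8(w, q, 4);
        for (int i = 0; i < 4; i++)
            printf("w=%6.2f  q=%4d  dequant=%7.3f\n", w[i], q[i], q[i] * scale);
        return 0;
    }

Storing weights as int8 rather than float32 cuts weight memory and bandwidth by 4x before any of the other techniques are applied.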

Fundamental Security Challenges of Embedded Vision (Fundamentals Track)
Mike Borza, Principal Security Technologist

As facial recognition, surveillance, and smart vehicles become an accepted part of our daily lives, product and chip designers are coming to grips with the business need to secure the data that passes through their systems. Training data, the resulting model data, and the way decisions are made and acted on can be proprietary information, important to keep out of competitors’ hands. Inputs from sensors and cameras can contain legally protected data and may raise ethical and privacy concerns as cameras and microphones in homes, cars, and public settings explode in number. This presentation will describe the state of security in vision systems today and the business impact of breaches. It will explain potential weaknesses in training-to-inferencing systems where data can be compromised. Finally, it will provide a use case of securing an AI inference SoC for an automotive application, including methods that designers can use to secure the system.

Technology Showcase

Tuesday, May 21: 12:00 PM – 8:00 PM
Wednesday, May 22: 10:30 AM – 6:00 PM

Be sure to visit us at booth #405 in the Technology Showcase for demos of our latest vision solutions, including deep learning and real-time object detection.

Synopsys Seminar Abstracts

Keynote: Solving Computer Vision Problems Using Traditional and Neural Network Approaches
Robert Laganiere, Professor, University of Ottawa; Founder and Chief Science Officer, Sensor Cortek and Tempo Analytics
While deep neural networks have become the de facto standard for many computer vision tasks, research and development on the combined use of traditional computer vision methods and convolutional neural networks (CNNs) is well underway. In this keynote presentation, Dr. Laganiere will describe current trends in deep learning and neural networks and compare them with more conventional vision algorithms. He will describe recent approaches for the detection and tracking of objects of interest in the context of autonomous driving and smart visual surveillance.
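
For readers new to tracking-by-detection, the toy C function below computes the intersection-over-union (IoU) of two bounding boxes, the overlap score commonly used to associate detections across frames. It is a generic illustration, not code from the keynote.

    #include <stdio.h>

    /* Axis-aligned box: top-left corner (x, y), width w, height h. */
    typedef struct { float x, y, w, h; } Box;

    static float minf(float a, float b) { return a < b ? a : b; }
    static float maxf(float a, float b) { return a > b ? a : b; }

    /* Intersection area divided by union area, in [0, 1]. Trackers
     * typically match a detection to an existing track when the IoU
     * exceeds a threshold such as 0.5. */
    static float iou(Box a, Box b)
    {
        float ix = maxf(0.0f, minf(a.x + a.w, b.x + b.w) - maxf(a.x, b.x));
        float iy = maxf(0.0f, minf(a.y + a.h, b.y + b.h) - maxf(a.y, b.y));
        float inter = ix * iy;
        float uni = a.w * a.h + b.w * b.h - inter;
        return uni > 0.0f ? inter / uni : 0.0f;
    }

    int main(void)
    {
        Box track = { 10, 10, 50, 80 };  /* box predicted from the last frame */
        Box det   = { 14, 12, 50, 80 };  /* box detected in the current frame */
        printf("IoU = %.3f\n", iou(track, det));  /* high IoU: same object */
        return 0;
    }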

Designing Highly Scalable Embedded Vision Solutions from Consumer to Automotive Applications
Pierre Paulin, Director of R&D, Embedded Vision, Synopsys
Embedded applications in mobile, AR/VR, video surveillance and autonomous driving require increasing intelligence and compute power to interpret data from multiple sensors, including video, audio, radar and lidar. In addition, deep learning and CNNs are revolutionizing the computer vision space. Until recently, executing high-end CNN graphs required high-cost, high-power general-purpose CPUs and GPUs. In this presentation, we will discuss hardware trends in the implementation of vision applications, specifically CNN bandwidth, functional safety in hardware, security, and more. We will also touch on software trends such as the move from Caffe to TensorFlow and ONNX. Next, we will describe the latest updates to the EV6x processor that address these trends, including how the architecture can provide a complete vision solution or be paired with a designer’s own neural network engine. We will present our scalable CNN engine that supports state-of-the-art compact and region-based CNN graphs. We will describe techniques used for efficient scaling of CNN graph performance on multiple CNN accelerators, with a particular focus on bandwidth reduction technologies, including data compression, layer merging and efficient data sharing across multiple accelerators. Finally, we will introduce our standards-based programming environment, which supports the well-known OpenCV, OpenVX and OpenCL C standards for classical vision and DSP processing.
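
To give a flavor of the standards-based programming model mentioned above, here is a minimal OpenVX graph in C that runs a 3x3 Gaussian blur. It uses only standard OpenVX 1.x API calls and is a generic sketch, not EV6x-specific code.

    #include <VX/vx.h>
    #include <stdio.h>

    int main(void)
    {
        /* An OpenVX application builds a dataflow graph once, then executes
         * it repeatedly; the runtime maps nodes onto available accelerators. */
        vx_context ctx = vxCreateContext();
        vx_graph graph = vxCreateGraph(ctx);

        vx_image in  = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
        vx_image out = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);

        /* One node: 3x3 Gaussian blur from the standard kernel library. */
        vxGaussian3x3Node(graph, in, out);

        if (vxVerifyGraph(graph) == VX_SUCCESS)  /* validate and compile */
            vxProcessGraph(graph);               /* execute once */
        else
            printf("graph verification failed\n");

        vxReleaseImage(&in);
        vxReleaseImage(&out);
        vxReleaseGraph(&graph);
        vxReleaseContext(&ctx);
        return 0;
    }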

Architecture and Design Techniques for Embedded Deep Learning
Tom Michiels, System Architect, Embedded Vision, Synopsys

Embedding deep learning at the edge remains challenging today due to the huge computational and memory requirements and the large algorithmic diversity of modern vision and sensing tasks. These challenges can be overcome by simultaneously co-optimizing smarter applications, leaner neural networks, optimized compute architectures, and even more efficient circuits. This talk gives an overview of the techniques Synopsys uses to enable embedded deep learning in its DesignWare EV6x Embedded Vision Processor IP.

Tracing OpenVX and CNN Applications on Synopsys EV6x Embedded Vision Processors
Dr. Johan Kraft, Founder and CEO, Percepio AB
Synopsys EV6x vision processors offer high processing performance, but the performance you achieve depends heavily on how well your solution takes advantage of the hardware's capabilities. With today’s parallel and pipelined architectures, it can be difficult to tell whether an application utilizes the hardware efficiently or whether a more efficient design is possible. With better insight, developers have better means to optimize their applications for maximum performance. A common approach to gaining runtime insight is event tracing; however, adequate visualization is needed to really understand the data. This presentation will introduce the Synopsys EV6x support in Percepio Tracealyzer and the related tracing support in the Synopsys MetaWare EV Development Toolkit.
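
To make the idea of event tracing concrete, below is a generic C sketch of the kind of timestamped event ring buffer that trace recorders build on. It illustrates the concept only; it is not the Percepio or MetaWare API, and all names are hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    /* A minimal trace recorder: a fixed-size ring buffer of timestamped
     * events. Real tools stream much richer records off-chip and
     * visualize them on a timeline. */
    #define TRACE_CAPACITY 256

    typedef struct {
        uint32_t timestamp;  /* e.g., a cycle-counter reading */
        uint16_t event_id;   /* what happened: kernel start, DMA done, ... */
        uint16_t arg;        /* small payload: core, channel or job index */
    } TraceEvent;

    static TraceEvent trace_buf[TRACE_CAPACITY];
    static uint32_t trace_count;  /* total events; wraps modulo capacity */

    /* Hypothetical timestamp source; a real port reads a hardware counter. */
    static uint32_t read_cycle_counter(void) { static uint32_t t; return t += 100; }

    static void trace_record(uint16_t event_id, uint16_t arg)
    {
        TraceEvent *e = &trace_buf[trace_count++ % TRACE_CAPACITY];
        e->timestamp = read_cycle_counter();
        e->event_id = event_id;
        e->arg = arg;
    }

    int main(void)
    {
        trace_record(1 /* kernel start */, 0 /* core 0 */);
        trace_record(2 /* kernel end */,   0 /* core 0 */);
        for (uint32_t i = 0; i < trace_count && i < TRACE_CAPACITY; i++)
            printf("t=%u id=%u arg=%u\n", trace_buf[i].timestamp,
                   trace_buf[i].event_id, trace_buf[i].arg);
        return 0;
    }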

Emerging Neural Network Topologies for Vision Applications
Dr. Bert Moons, Hardware Design Architect, Embedded Vision, Synopsys
Real-life vision systems in VR, AR, autonomous vehicles and industrial automation require an ever-better real-time understanding of their surroundings. Therefore, industry and academia are moving on from crude classification and detection problems toward more complex tasks in scene and instance segmentation. Ultimately, the goal is to achieve real-time single-shot panoptic segmentation: assigning both instance and semantic segmentation labels using a single neural network architecture. This talk focuses on novel scene, instance, and panoptic segmentation algorithms emerging into the marketplace. It will describe the challenges facing SoC designers when implementing the different types of algorithms, as well as some potential solutions.

What You Don’t Know Can Hurt You: Security 101 for Embedded Vision
Mike Borza, Principal Security Technologist, Synopsys
Machine learning, artificial intelligence, and embedded vision are now commonly used for biometrics, surveillance, automotive, healthcare and other sensitive applications. As these applications emerge, hackers ask: how can I take control? Cybersecurity chips on a board can’t protect against every attack, and relying on system-level security alone can lock you out of markets with stringent requirements. This presentation will provide case studies of attacks that could have been prevented with SoC-level security and describe the direction attacks are taking. It will describe the assets that different types of embedded vision and AI systems handle and how to set appropriate security levels. Finally, it will present options SoC designers can consider to mitigate threats, protect user information, and limit company liability.

Introduction to EV6x Vector DSP Programming with OpenCL C
Pete Couperus, Staff Software Engineer, Synopsys
This talk will provide a brief introduction to the EV6x architecture from the viewpoint of the Vector DSP (VDSP) OpenCL C kernel programmer. We will cover the basic concepts, capabilities, and memory organization of the VDSP, and correlate these concepts with OpenCL C examples that can be used with the MetaWare OpenCL C compiler. We will also highlight some of the vector floating-point accelerators available on EV6x hardware and show how to utilize the Single Program Multiple Data style vectorization (WFV) available in the MetaWare OpenCL C compiler.
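
As a primer for this session, here is a small kernel in standard OpenCL C of the kind the talk covers: each work-item processes one pixel, and an SPMD-style vectorizing compiler can map adjacent work-items onto the DSP's vector lanes. This is a generic example, not code from the talk.

    /* OpenCL C: threshold a grayscale image. Each work-item handles one
     * pixel of a 1-D NDRange equal to the number of pixels. */
    __kernel void threshold_u8(__global const uchar *src,
                               __global uchar *dst,
                               uchar level)
    {
        size_t i = get_global_id(0);  /* this work-item's pixel index */
        dst[i] = (src[i] > level) ? (uchar)255 : (uchar)0;
    }

On the host side, the kernel is enqueued over a one-dimensional range equal to the pixel count, leaving the compiler and runtime to choose the vector width.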

Partitioning Graphs Across Multiple CNN Engines for Performance & Latency Improvement
Jamie Campbell, Software Applications Engineer, Staff, Synopsys
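
The CNN inference APIs in the MetaWare EV Development Toolkit are product-specific, so as a generic stand-in, the pthreads sketch below illustrates one of the partitioning ideas in question: frames are distributed round-robin across N engines that run inferences in parallel to raise throughput. All names here are hypothetical.

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_ENGINES 2   /* e.g., two CNN accelerator slices */
    #define NUM_FRAMES  8

    /* Hypothetical per-engine inference call; a real system would invoke
     * the vendor runtime for the engine bound to this thread. */
    static void run_inference(int engine, int frame)
    {
        printf("engine %d: frame %d\n", engine, frame);
    }

    static void *engine_worker(void *arg)
    {
        int engine = (int)(long)arg;
        /* Round-robin partitioning: engine k takes frames k, k+N, k+2N, ... */
        for (int f = engine; f < NUM_FRAMES; f += NUM_ENGINES)
            run_inference(engine, f);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_ENGINES];
        for (long e = 0; e < NUM_ENGINES; e++)
            pthread_create(&threads[e], NULL, engine_worker, (void *)e);
        for (int e = 0; e < NUM_ENGINES; e++)
            pthread_join(threads[e], NULL);
        return 0;
    }

Splitting a single graph's layers across engines to cut per-frame latency follows the same pattern, but partitions work within a frame rather than across frames.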
There is often a demand to maximize inference performance on a given system. This presentation will introduce partitioning techniques designers can use with the EV6x CNN 3520 to improve design metrics, including performance, bandwidth and latency. We will discuss various application scenarios for single and multiple CNN graphs, and show, by way of examples, how to use the CNN inference APIs supplied with the MetaWare EV Development Toolkit to parallelize operations and increase performance. The presentation will conclude with a demo comparing the performance of the MobileNet CNN graph running on CNN 880, 1760 and 3520, and an analysis of the execution using the Percepio Tracealyzer tool.