Thursday, May 23, 2019
Join our full-day seminar to learn about the latest trends in artificial intelligence and computer vision, and how to use the latest embedded vision technologies to navigate your way from concept to successful silicon. This seminar provides a deep dive into deep learning, embedded vision, and standards-based programming for automotive, mobile, surveillance, and consumer applications. Discussions include the latest techniques for balancing performance, power, area and bandwidth for designs ranging from 1 TOPS to 100 TOPS, the need for security in AI designs, the trade-offs between traditional computer vision techniques and deep learning, and detailed case studies on the implementation of computer vision in an embedded environment. Our post-seminar reception provides an opportunity to discuss your specific questions with key members of our R&D staff.
There is a $25 registration fee for the workshop. Open to current and potential Synopsys customers. Although the registration systems are the same, you do not need to join the Embedded Vision Summit to register for and attend Synopsys' Seminar.
8:00 Doors open for badge pickup
9:00 Opening remarks
9:15 Keynote: Solving Computer Vision Problems Using Traditional and Neural Network Approaches
Robert Laganiere, Professor, University of Ottawa and Founder & Chief Science Officer at Sensor Cortek & Tempo Analytics
4:30 Reception & Raffle of Amazon Echo, Osmo Mobile 2 Gimbals, and more
May 21-22, 2019
Making Cars that See - Failure is Not an Option (Business Insights Track)
Dr. Burkhard Huhnke, Vice President of Automotive Strategy
Drivers are the biggest uncertainty factor in cars, and computer vision is helping to eliminate human error and make the roads safer. Autonomous vehicles are expected to save almost 300K lives each decade in the United States, but after 13 years of development, the question is still, “Where’s my driverless car?” Development has been slower than expected in three key areas: 1) robust designs with the lowest failure rates have proven harder to achieve than expected, 2) the technology is more expensive than expected, and the business case does not support the costs, and 3) the scalability needed to mass-produce self-driving cars hasn’t ramped up. This presentation will review why these areas are taking longer than expected, covering the vision processing performance requirements that have proven challenging and what innovative semiconductor suppliers need to deliver to fix the supply chain.
5+ Techniques for More Efficient Implementations of Neural Networks (Fundamentals Track)
Dr. Bert Moons, Hardware Design Architect, Embedded Vision & AI Processors
Embedding real-time, large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory and bandwidth requirements. System architects can mitigate these demands by modifying deep neural networks (DNNs) to make them more energy-efficient and less demanding of embedded processing hardware. In this talk we’ll provide an introduction to today’s established techniques for efficient implementation of DNNs: advanced quantization, network decomposition, weight pruning and sharing, and sparsity-based compression. We’ll also preview up-and-coming techniques such as trained quantization and correlation-based compression.
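Two of the techniques named above, weight pruning and quantization, can be illustrated in a few lines. The sketch below is illustrative only and not drawn from the talk: it applies magnitude-based pruning (zeroing the smallest weights) and symmetric 8-bit linear quantization to a random weight matrix; the function names and the 50% sparsity target are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)  # stand-in for a DNN layer

def prune_by_magnitude(w, sparsity=0.5):
    """Magnitude pruning: zero the fraction `sparsity` of smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w).astype(w.dtype)

def quantize_int8(w):
    """Symmetric linear quantization of float weights to 8-bit integers."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

pruned = prune_by_magnitude(weights, sparsity=0.5)
q, scale = quantize_int8(pruned)

# Dequantize to estimate the error introduced by compression
restored = q.astype(np.float32) * scale
print("sparsity:", np.mean(pruned == 0))
print("max abs quantization error:", np.abs(restored - pruned).max())
```

The pruned matrix is mostly zeros, enabling sparsity-based compression of the stored weights, while the int8 representation cuts memory and bandwidth 4x versus float32 at the cost of a bounded rounding error (at most half the quantization step).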
Fundamental Security Challenges of Embedded Vision (Fundamentals Track)
Mike Borza, Principal Security Technologist
As facial recognition, surveillance, and smart vehicles become an accepted part of our daily lives, product and chip designers are coming to grips with the business need to secure the data that passes through their systems. Training data, the resulting model data, and how decisions are made and acted on can be proprietary information for the product, and important to keep out of competitors’ hands. Inputs from sensors and cameras can contain legally protected data, as well as data that may create ethical and privacy concerns as cameras and microphones in homes, cars, and public settings explode in number. This presentation will describe the state of security in vision systems today and the business impact of breaches. It will explain potential weaknesses in training-to-inferencing systems where data can be compromised. Finally, it will present a use case of securing an AI inference SoC for an automotive application, including methods that designers can use to secure the system.
Tuesday, May 21 - 12:00 – 8:00 PM
Wednesday, May 22 - 10:30 AM – 6:00 PM
Be sure to visit us at booth #405 in the Technology Showcase for demos of our latest vision solutions, including deep learning and real-time object detection.