As embedded systems continue to become more complex and integrate greater functionality, SoC developers are faced with the challenge of developing more powerful, yet more energy-efficient devices. The processors used in these embedded applications must deliver high performance within tight power and silicon-area budgets.
Join us for the ARC® Processor Summit to hear our experts, users and ecosystem partners discuss the most recent trends and solutions that impact the development of SoCs for embedded applications. This event will provide you with in-depth information from industry leaders on the latest ARC processor IP and related hardware/software technologies that enable you to achieve differentiation in your chip or system design. Sessions will be followed by a networking reception where you can see live demos and chat with fellow attendees, our partners, and Synopsys experts.
Whether you are a developer of chips, systems or software, the ARC Processor Summit will give you practical information to help you meet your unique performance, power and area requirements in the shortest amount of time.
Comprehensive solutions that help drive security, safety & reliability into automotive systems
Power-efficient hardware/software solutions to implement artificial intelligence technologies in next-gen SoCs
Solutions to accelerate SoC and software development to meet target performance, power and area requirements
Associate Professor, Dept of Computer Science and Engineering, University of Louisville, KY
10:15 - 11:15 A.M.
Artificial Intelligence Safety and Security
Many scientists, futurologists and philosophers have predicted how AI will enable humanity to achieve radical technological breakthroughs in the years ahead. In his keynote, Dr. Yampolskiy will cover current progress in artificial intelligence and predicted future developments, including artificial general intelligence. The talk will address some obstacles to progress in development of safe AI as well as ethical issues associated with advanced intelligent machines. The problem of control will be covered in the context of safety and security of AI. The talk will conclude with some advice on avoiding failure of smart products and services.
New use cases and architectures are driving changes in how automotive Electronic Control Units (ECUs) are designed. OEMs and their suppliers are gearing up for a changing landscape precipitated by the adoption of autonomous vehicles, new designs for EVs, and business-model transformation (e.g., MaaS). How do automakers and their ecosystem partners adapt to these new paradigms and address advanced compute and software solutions in this new landscape? We’ll discuss the changes that the automotive supply chain, including SoC suppliers, tier-1s and tier-2s, and HW & SW partners, is embracing to address these new challenges.
Next-generation autonomous driving and advanced driver-assistance systems (ADAS) require complex safety-critical electronic components. The SoC designs used in these electronics must adhere to the ISO 26262 functional safety (FuSa) standard to achieve the highest automotive safety integrity level (ASIL). Synopsys offers a broad portfolio of certified functional safety compliant processor IP for developing these safety-critical SoCs. This session will cover Synopsys' vision of current and evolving SoC level safety architectures, safety compliant ARC processors, and functional safety software and tools. It will also touch on the combination of safety and security, which both need to be carefully architected in the early stages of SoC development.
“Virtualization” uses software to simulate hardware functionality, allowing multiple operating systems to share the same hardware resources. Applying virtualization to automotive zonal architectures enables additional levels of security and safety, as well as reducing hardware costs and power consumption. In this presentation, we will describe the requirements for virtualization of processors and AI accelerators used in automotive applications, the uses of spatial and temporal isolation, and case studies on virtualization for third-party applications and for functional safety.
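The temporal isolation mentioned above can be illustrated with a fixed time-slice (TDMA-style) schedule: each guest partition is granted a guaranteed budget per major frame, so a misbehaving guest cannot starve the others. The sketch below is purely illustrative (partition names and the scheduling policy are assumptions, not an actual hypervisor API); real automotive hypervisors enforce this in hardware and firmware.

```python
# Minimal sketch of temporal isolation via a fixed time-slice schedule.
# Hypothetical example: a real hypervisor enforces slices in HW/firmware.
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    slice_ms: int  # guaranteed CPU budget per major frame

def build_major_frame(partitions):
    """Return ((start_ms, end_ms, name) slots, major frame length).

    Each partition always receives its full slice, so no guest can
    consume another guest's time (temporal isolation)."""
    timeline, t = [], 0
    for p in partitions:
        timeline.append((t, t + p.slice_ms, p.name))
        t += p.slice_ms
    return timeline, t

def owner_at(timeline, frame_len, time_ms):
    """Which partition owns the CPU at an arbitrary instant."""
    t = time_ms % frame_len
    for start, end, name in timeline:
        if start <= t < end:
            return name
```

Spatial isolation would be modeled analogously, with each partition owning disjoint memory regions enforced by an MPU/MMU.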
Sondrel’s Scalable Architecture Framework (SAF) defines a set of processes for Requirements Engineering, Systems Architecture, and Virtual Prototyping. Several reference SoC architectures have been derived from this framework, each targeting specific application use cases. For automotive ASIL-D applications, the SFA350A reference architecture provides the necessary feature set and scalability options to support a wide range of automotive compute requirements.
Recent industry trends show that automotive AI applications are starting to employ ever more sophisticated neural network algorithms, such as Vision Transformers (ViT), which now out-perform CNNs and RNNs on several benchmarks. In this talk, we will show how the requirements of complex AI workloads such as ViT are analyzed, so that the system architecture of the SFA350A can be tuned accordingly.
Rapid design space exploration is accomplished using performance models of an ARC NPX6 NPU with a VPX5 DSP companion, a FlexNoC Interconnect, and an LPDDR5x memory subsystem to balance all available features and determine the optimal hardware configuration of the SFA350A. A notable attribute of the ARC NPX6-VPX5 combination is that it is compatible with the “slice architecture” formalism employed in the SAF. This is key to achieving fast design space exploration of the demanding AI applications that automotive SoCs are required to support now and in the foreseeable future.
Typical DSP benchmarks published in marketing collateral assume an ideal scenario: data is available in local memory, arranged so that optimal results are achieved for the compute part of the targeted application. Yet for most real-world applications, the limited size of local memory requires that data be loaded and stored from L2/L3 memory using DMA transfers. Users therefore need to focus as much on the efficiency of the DMA transfers as on the compute to arrive at a balanced system solution. Specifically, a vector DSP must allow DMA transfers to be performed in parallel with the compute, so that their latency can be hidden. Further, to support efficient compute, data should be organized properly in local memory. This demands advanced DMA capabilities to reorganize data on the fly during data movement, as well as a versatile suite of load/store instructions for efficient access to data in local memory. We will discuss these aspects in detail, using the VPX vector DSP as a reference. Using an example radar application, we will show how high-performance DSP processing can be implemented with efficient access to local memory and multi-dimensional DMA transfers happening in the background, to arrive at an efficient system solution.
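The benefit of hiding DMA latency behind compute can be quantified with simple timing arithmetic. The sketch below (illustrative numbers, generic tile-based processing, not a VPX-specific model) compares a sequential load-then-compute loop against a double-buffered loop where the next tile's DMA overlaps the current tile's compute:

```python
# Back-of-envelope model of DMA latency hiding with double buffering.
# Times are per tile, in arbitrary units; values are hypothetical.

def sequential_time(n_tiles, t_dma, t_compute):
    """No overlap: every tile waits for its DMA before computing."""
    return n_tiles * (t_dma + t_compute)

def double_buffered_time(n_tiles, t_dma, t_compute):
    """Ping-pong buffers: only the first DMA and last compute are
    exposed; the steady state runs at max(t_dma, t_compute) per tile."""
    return t_dma + (n_tiles - 1) * max(t_dma, t_compute) + t_compute
```

With 8 tiles, a 10-unit DMA, and a 12-unit compute, the sequential loop takes 176 units while the double-buffered loop takes 106, and once compute dominates DMA the transfers are fully hidden.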
Safety and security standards require justification regarding the safe usage of tools. This is typically achieved through an approach based on tool qualification. TASKING will provide insight into how tool qualification helps your project meet these standards as well as what tasks a project team must perform itself. This session will be based on an automotive use case using a PPU (Parallel Processing Unit) based on ARC EV71, and will discuss connecting software running on a main compute core and the ARC-based PPU.
AI applications are driving the need for more efficient neural network processing across a broad range of performance, power, and price points, leading to various processor-based implementation options. This session will discuss the trade-offs between selecting an AI-enabled DSP and adding a dedicated AI accelerator. We will present customer use cases covering AI-enabled ARC processors, including ARC VPX, and accelerators, including Synopsys’ newest Neural Processing Units (NPUs). The importance of software support across processors will also be covered.
Conventional Image Signal Processors (ISPs) do an excellent job, so long as lighting conditions are good. As society becomes increasingly reliant on image sensors for both human and machine vision, however, we need to find ways of extending performance for more challenging light conditions to achieve product robustness. In this session, Benny Munitz of Visionary.ai talks about using embedded AI algorithms, running on the Synopsys ARC EV72 processor, to implement a sophisticated new software ISP capable of dramatically reducing image noise and increasing dynamic range. This provides much-needed additional degrees of freedom in the image pipeline implementation to achieve better results both for human and machine vision applications.
Programming SoCs for AI workloads can be a daunting task. Machine learning algorithms can run on a variety of processor types – CPUs, GPUs, DSPs, NPUs, custom accelerators – which has traditionally limited software portability. In addition, neural networks continue to evolve (e.g., CNNs, LSTMs, RNNs, Transformers) and competing AI frameworks (e.g., TensorFlow, PyTorch, Caffe2) make standardization a challenge. This session will introduce a programming environment that accepts neural networks in virtually any industry-standard format and efficiently maps them to a variety of AI processor types, abstracting the underlying hardware from the AI programmer. Optimization techniques that improve execution performance and hardware resource utilization will also be discussed.
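The idea of abstracting heterogeneous AI hardware from the programmer can be sketched as a backend-dispatch layer: each backend advertises the operators it supports, and a mapper places every operator in the graph onto the most specialized backend available. Everything below is a hypothetical illustration, not an actual Synopsys tool API:

```python
# Illustrative sketch of hardware abstraction for NN graphs.
# Backend names and supported-op sets are hypothetical.

BACKENDS = {                 # ordered most- to least-specialized
    "npu": {"conv2d", "matmul"},
    "dsp": {"conv2d", "matmul", "softmax"},
    "cpu": {"conv2d", "matmul", "softmax", "topk", "nms"},
}

def map_graph(ops):
    """Assign each op to the first (most specialized) backend that
    supports it, falling back toward the general-purpose CPU."""
    placement = {}
    for op in ops:
        for backend, supported in BACKENDS.items():
            if op in supported:
                placement[op] = backend
                break
        else:
            raise ValueError(f"no backend supports {op}")
    return placement
```

A real compiler additionally weighs data-movement cost between backends, but the fallback structure is the same.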
AI recommender systems, particularly Deep Learning Recommendation Models (DLRM), are the dominant ML application in terms of cloud resource usage. DLRM is a fascinating business and technical challenge. The social media and entertainment industries have far from exhausted the business value that can be achieved with more accurate and more intelligent predictions of consumer/user behavior. Rapid innovation is yielding novel adaptations of DLRM that produce markedly more useful predictions, commanding ever-increasing compute capacity under fixed energy and space constraints. Moreover, DLRM is a hybrid dataflow that mates ML models with not-exactly-ML big data analytics.
NEUCHIPS is pioneering a first-of-its-kind engineering approach to accelerating software with purpose-built SoC hardware alongside carefully co-designed compiler and runtime software.
The RecAccel N3000 is purpose-built for AI recommendation inference, especially for DLRM. We will discuss its asynchronous heterogeneous dataflow architecture, where each type of IP/processor is carefully tailored to optimize a component of the DLRM logical architecture. We will also show how the configurable ARC processor efficiently participates in delivering groundbreaking DLRM performance on widely accepted industry recommendation benchmarks.
The neural network architectures used in embedded real-time applications are evolving quickly. Transformers are a leading deep learning approach for natural language processing and other time-dependent, series data applications. Now, transformer-based deep learning network architectures are also being applied to vision applications with state-of-the-art results compared to CNN-based solutions. In this presentation, we will introduce transformers and contrast them with the CNNs commonly used for vision tasks today. We will examine the key features of transformer model architectures and show performance comparisons between transformers and CNNs. We will conclude the presentation with insights on why we think transformers are an important approach for future visual perception tasks.
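The key operation that distinguishes transformers from CNNs is scaled dot-product attention, in which every token attends to every other token rather than only a local neighborhood. A minimal NumPy version (a generic textbook formulation, not any vendor's implementation) looks like this:

```python
# Minimal scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
# Unlike a convolution's fixed local window, the (tokens x tokens)
# score matrix gives every token a global receptive field.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (tokens, tokens)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted mix of values
```

The quadratic cost of the score matrix in the number of tokens is exactly why mapping vision transformers onto embedded NPUs requires careful memory and bandwidth planning.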
Today we see a large variety of SoCs with dedicated accelerators for the efficient processing of AI applications. Successful products in this competitive environment need to be highly optimized for the target application domain. Data-driven architecture analysis is required to optimize the AI processor configuration alternatives and SoC integration choices, like the dimensioning of the shared interconnect and memory sub-system. Synopsys Platform Architect Virtual Prototyping tools combined with ARC Processor IP architecture models enable early analysis of architecture alternatives and quantitative assessment of IP configuration choices.
In this presentation we will discuss the available IP, tools, and models to accelerate the early analysis and optimization of AI SoC architectures.
- Recent advancements in embedded AI applications and architectures
- Challenges in the design and verification of AI SoCs
- Synopsys DesignWare Processor IP portfolio for the design of AI SoC platforms
- Synopsys Platform Architect Virtual Prototyping solution for early architecture analysis and optimization
- Case-study of an AI SoC platform design with ARC VPX and NPX Processors
- How to get started
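The quantitative assessment described above often starts with back-of-envelope models before full virtual prototyping. The sketch below (all numbers and the derating factor are illustrative assumptions, not Platform Architect outputs) checks whether a candidate memory subsystem can sustain the bandwidth an AI workload demands:

```python
# Illustrative early-architecture check: does the memory subsystem
# sustain the workload's bandwidth demand? All values are hypothetical.

def required_bandwidth_gbs(macs_per_frame, bytes_per_mac, fps):
    """DRAM traffic (GB/s) if every operand byte were fetched off-chip;
    on-chip reuse in local SRAM reduces this in practice."""
    return macs_per_frame * bytes_per_mac * fps / 1e9

def utilization(required_gbs, peak_gbs, efficiency=0.7):
    """Fraction of usable bandwidth consumed, derating the interface's
    theoretical peak by an achievable-efficiency factor."""
    return required_gbs / (peak_gbs * efficiency)
```

Utilization near or above 1.0 signals that the interconnect or DRAM configuration, or the amount of on-chip buffering, needs to change before RTL work begins.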
Zephyr RTOS is quickly becoming one of the most popular general-purpose open-source Real-Time Operating Systems on the market. Zephyr is more than just an OS kernel: it includes protocol stacks and drivers that enable building all kinds of embedded applications.
In this session we'll discuss how software features of the Zephyr RTOS can be leveraged across the broad range of ARC processor offerings. We'll start with an overview of the ARC cores and features supported in the Zephyr RTOS, and then examine some specific use cases that utilize key features of the Zephyr RTOS, such as single-threading mode, the POSIX compatibility layer, and SMP support for embedded multicore configurations of up to 12 cores.
Bluetooth, and Bluetooth Low Energy specifically, is now a part of our everyday lives. Shipments are forecast to reach 5.1 billion devices in 2022 and 7 billion by 2026, a CAGR of 9%. While the forecast for the Host or “Platform” side of the solution (such as mobile phones, tablets, and PCs) is relatively flat, the growth of BLE will be on the peripheral side. The predominant applications or use cases driving this growth are hearables (headphones and earbuds) supported by LE Audio, wearables including AR/XR, location services, electronic shelf labels (ESL), and a variety of tags and sensors. These segments have projected growth rates of 12-25% over the next 4-5 years.
All of these major growth segments are battery-powered devices, driving the need for the most power-efficient solutions possible. Based on its extremely low power requirements, the sub-1-volt BLE IP solution from Synopsys is perfectly suited for integration into these power-sensitive SoCs, extending product lifetimes for non-rechargeable devices and increasing the time between charges for rechargeable devices.
Post-Quantum Cryptography (PQC) has been receiving considerable attention over the past few years, especially as the quantum threat draws closer. NIST’s PQC standardization process is fully underway, and a major milestone on the path toward PQC becoming the cryptographic default was recently reached: NIST announced the first set of standardized PQC algorithms. In the near future, these algorithms will be used as widely as, or possibly even more widely than, today’s conventional cryptography. This talk will provide an overview of PQC, the standardization process, and the current and next practical steps to prepare for the transition to PQC. This transition poses various challenges. It will require crypto agility in protocols and implementations, so that today’s algorithms can be seamlessly replaced with their PQC alternatives. Agility in software via firmware updates is much easier than agility in hardware. However, just as for today’s algorithms, hardware acceleration and hardware implementations are required for PQC to meet performance as well as security targets. In this talk, we’ll explain how PQC algorithms can be accelerated in a flexible way, such that a single accelerator serves traditional algorithms as well as various PQC algorithms. Finally, we’ll complete the ‘from software to silicon’ view by covering end-to-end aspects of managing the PQC transition, using a service-based architecture to perform the provisioning and security management of the agile crypto solutions embedded in connected devices.
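The crypto-agility pattern can be sketched in a few lines: callers bind to an algorithm identifier rather than an implementation, so a classical algorithm can later be swapped for a PQC one without touching application code. The "algorithms" below are toy hash-based stand-ins purely to show the registry structure, not real signatures or real PQC schemes:

```python
# Sketch of crypto agility via an algorithm registry. The registered
# "algorithms" are toy HMAC tags standing in for real signature schemes
# (e.g., a future slot for ML-DSA); they are NOT secure signatures.
import hashlib
import hmac

REGISTRY = {}

def register(alg_id):
    def wrap(impl):
        REGISTRY[alg_id] = impl
        return impl
    return wrap

@register("classical-demo")
class ClassicalDemo:
    @staticmethod
    def sign(key, msg):
        return hmac.new(key, msg, hashlib.sha256).digest()
    @staticmethod
    def verify(key, msg, tag):
        return hmac.compare_digest(ClassicalDemo.sign(key, msg), tag)

@register("pqc-demo")
class PqcDemo:  # placeholder slot a PQC scheme would occupy
    @staticmethod
    def sign(key, msg):
        return hmac.new(key, msg, hashlib.sha3_256).digest()
    @staticmethod
    def verify(key, msg, tag):
        return hmac.compare_digest(PqcDemo.sign(key, msg), tag)

# Application code only ever touches the identifier-based entry points:
def sign(alg_id, key, msg):
    return REGISTRY[alg_id].sign(key, msg)

def verify(alg_id, key, msg, tag):
    return REGISTRY[alg_id].verify(key, msg, tag)
```

A flexible hardware accelerator plays the same role one level down: one datapath serving multiple registered algorithms behind a stable interface.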
The ever-increasing number of connected devices around us introduces major security issues. Connecting billions of devices can only be done securely if every device has some form of dedicated hardware for protecting sensitive data and securing communications. How can this be done in a way that scales with the most advanced technology nodes without becoming cost-prohibitive?
The answer lies with SRAM Physical Unclonable Function (PUF) technology. Combining SRAM PUF technology from Intrinsic ID with the Synopsys embedded tRoot HSM provides a new level of protection by generating secure cryptographic keys based on device-unique variations within the silicon of the chip itself. With the SRAM PUF, the root key is re-generated every time the chip is powered up and is only available in volatile memory when needed. Since the key is never present in persistent memory, even when the chip is powered down, it is not stored anywhere on the device, making it significantly harder for attackers to find. This substantially increases the level of security.
This talk will explain how SRAM PUF eliminates the need for OTP memory, while cost-effectively providing a hardware root of trust. In this presentation, you will learn:
• The fundamentals and benefits of SRAM PUF technology
• How SRAM PUFs allow you to scale your security architecture to the most advanced nodes
• How SRAM PUF technology combines with the Synopsys tRoot HSM
• Some example use cases
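The "key is re-generated, never stored" property rests on error correction: at enrollment, public helper data is derived from the SRAM start-up pattern; at every power-up, the slightly noisy pattern plus the helper data reconstructs the same key. The toy sketch below uses a 3x repetition code with majority voting to make the idea concrete; real fuzzy extractors use much stronger ECC and privacy-preserving key derivation:

```python
# Toy sketch of SRAM-PUF key reconstruction (3x repetition code).
# Illustrative only: real designs use stronger ECC and secure KDFs.
import hashlib

def enroll(key_bits, puf_bits):
    """Helper data = each key bit, repeated 3x, XORed with the PUF
    response. The helper data alone reveals nothing usable without
    the device's own SRAM pattern."""
    expanded = [b for b in key_bits for _ in range(3)]
    return [k ^ p for k, p in zip(expanded, puf_bits)]

def reconstruct(helper, noisy_puf_bits):
    """Unmask with the (noisy) power-up PUF response, then majority-vote
    each triple to correct single bit flips; derive the key by hashing."""
    raw = [h ^ p for h, p in zip(helper, noisy_puf_bits)]
    bits = [1 if sum(raw[i:i + 3]) >= 2 else 0 for i in range(0, len(raw), 3)]
    return hashlib.sha256(bytes(bits)).hexdigest()
```

Because only the helper data is persisted, no OTP key storage is needed, which is the scaling argument made above.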
Get an optimized starting point for implementing Synopsys ARC HS68 64-bit processors for high performance embedded designs with Synopsys Fusion QuickStart Kits (QIKs). The ARC processor QIK includes tool scripts, a baseline floorplan, design constraints and documentation. In this session, you will learn how the QIK was used along with Synopsys Fusion Compiler and Design Space Optimization (DSO.ai) tools to achieve the best PPA and faster time-to-market.
Almost all of today’s SoCs are multicore designs, initially driven by the need for higher performance. Energy efficiency has since become another driver for multicore design, pushing toward heterogeneous architectures in which different cores are selected for different processing tasks. In this session we will use the example of an always-on smart home application to illustrate the tradeoffs to be analyzed.
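The core tradeoff in such always-on designs can be captured with duty-cycle arithmetic: a small always-on core handles wake-word detection and wakes the big core only on a trigger. The power figures below are hypothetical placeholders, chosen only to show the shape of the analysis:

```python
# Illustrative energy model for a heterogeneous always-on design.
# All power numbers are hypothetical, not measured silicon data.

def avg_power_mw(p_small_mw, p_big_mw, trigger_fraction):
    """Average power when the small core is always on and the big core
    runs only for trigger_fraction of the time."""
    return p_small_mw + p_big_mw * trigger_fraction

def big_core_only_mw(p_big_mw):
    """Baseline: keeping the big core always on."""
    return p_big_mw
```

With a 2 mW always-on core, a 200 mW big core, and a 1% trigger duty cycle, the heterogeneous pairing averages 4 mW versus 200 mW for the big core alone, a 50x difference that dominates battery life.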