Easily Map AI Workloads to Different SoC Architectures to Resolve AI Power and Performance Design Challenges
Quickly address the challenges of evolving algorithms, highly parallel compute, and high memory requirements with Platform Architect
- Automated generation of workloads from AI frameworks, including an AI operator library for Convolutional Neural Network (CNN) modeling
- AI-centric HW architecture model library consisting of Virtual Processing Units (VPUs) with specific parameters to represent AI compute and DMA engines, relevant interconnect and memory subsystem models, and example NVDLA performance models to rapidly represent custom AI accelerators
- AI-specific analysis views to determine memory and processing rooflines, as illustrated in the sketch below
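The roofline idea itself can be illustrated independently of the tool. The minimal Python sketch below estimates the arithmetic intensity of a single convolution layer and the performance attainable under a roofline; the layer shape, peak compute, and peak bandwidth are hypothetical values chosen for illustration, not Platform Architect models or outputs.

```python
# Minimal roofline sketch (illustrative only; all parameter values are assumptions).

def conv2d_flops(out_h, out_w, out_c, in_c, k):
    # Multiply-accumulate counted as 2 FLOPs per output element per filter tap.
    return 2 * out_h * out_w * out_c * in_c * k * k

def conv2d_bytes(out_h, out_w, out_c, in_h, in_w, in_c, k, bytes_per_elem=1):
    # Traffic if activations and weights are each read once and outputs written once.
    weights = out_c * in_c * k * k
    return (in_h * in_w * in_c + weights + out_h * out_w * out_c) * bytes_per_elem

def attainable_gflops(intensity, peak_gflops, peak_gbps):
    # Roofline: performance is capped by either peak compute or memory bandwidth.
    return min(peak_gflops, intensity * peak_gbps)

# Hypothetical first CNN layer on a hypothetical accelerator.
flops = conv2d_flops(out_h=112, out_w=112, out_c=64, in_c=3, k=7)
bytes_moved = conv2d_bytes(out_h=112, out_w=112, out_c=64,
                           in_h=224, in_w=224, in_c=3, k=7)
intensity = flops / bytes_moved          # FLOPs per byte
perf = attainable_gflops(intensity, peak_gflops=4000, peak_gbps=100)
print(f"intensity={intensity:.1f} FLOP/B, attainable={perf:.0f} GFLOP/s")
```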
Hardware-Software Partitioning and Optimization of Multicore Systems
Platform Architect Ultra enables architects to create task-driven workload models of their end-product application for early architecture analysis.
- Generic task models are easily configured to create a hierarchical workload model of the application, called a task graph (see the sketch after this list)
- The application workload model is mapped onto a model of the hardware platform, based on Virtual Processing Units (VPUs) and other system TLM performance models from the rich Platform Architect Ultra model library
- Platform and workload analysis enable hardware-software partitioning to be optimized for best system performance well before the application software is available
- Task graphs are fully reusable as elastic task-driven traffic generators for Interconnect and Memory Subsystem Performance Optimization
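As a generic illustration of the task-graph idea (not Platform Architect's model format or scheduling semantics), the Python sketch below expresses a small application as a task graph, maps each task onto a hypothetical processing element, and derives a critical-path latency estimate; task sizes and VPU rates are made-up values.

```python
# Minimal task-graph mapping sketch (illustrative only; task sizes, VPU rates,
# and the latency model are assumptions).

# Application workload: task -> (compute in MOps, predecessor tasks)
tasks = {
    "capture":  (5,   []),
    "preproc":  (20,  ["capture"]),
    "infer":    (900, ["preproc"]),
    "postproc": (15,  ["infer"]),
}

# Candidate mapping of tasks onto processing elements and their rates (MOps/ms).
mapping = {"capture": "cpu", "preproc": "dsp", "infer": "npu", "postproc": "cpu"}
rate = {"cpu": 10, "dsp": 40, "npu": 500}

def finish_times(tasks, mapping, rate):
    """Critical-path estimate: each task starts when all predecessors finish;
    contention and data-transfer delays are ignored in this sketch."""
    done = {}
    remaining = dict(tasks)
    while remaining:
        for name, (mops, preds) in list(remaining.items()):
            if all(p in done for p in preds):
                start = max((done[p] for p in preds), default=0.0)
                done[name] = start + mops / rate[mapping[name]]
                del remaining[name]
    return done

latency = max(finish_times(tasks, mapping, rate).values())
print(f"estimated end-to-end latency: {latency:.2f} ms")
```

Trying a different mapping or different VPU rates and re-running gives an immediate view of how the hardware-software partition shifts the critical path.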
Interconnect and Memory Subsystem Performance Optimization Using Trace-Driven Traffic Generation
Trace-driven traffic generation enables architects to focus on the challenges associated with the optimization and performance validation of the backbone SoC interconnect and global memory subsystem.
- Dynamic application workloads are modeled using traffic generation, enabling early measurement of system performance and power before software is available (see the sketch after this list)
- Simulation sweeping enables parametric collection of analysis data, exploring all workload scenarios against a range of architecture configurations
- Powerful analysis and visualization tools provide graphical transaction tracing and statistical analysis views that enable you to identify bottlenecks, determine their root cause, and examine how sensitive system performance and power are to individual or combined parameter settings
- The result is an executable specification used to carefully dimension the SoC interconnect and memory subsystem to support the latency, bandwidth, and power requirements of all SoC components, under all operating conditions
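The Python sketch below gives a generic flavor of trace-driven replay and parametric sweeping; the trace, the single bandwidth-limited memory model, and the bandwidth values are assumptions for illustration, not the tool's traffic-generation or sweep mechanism.

```python
# Minimal trace-driven sweep sketch (illustrative only; trace and model are assumptions).

# Transaction trace: (issue time in us, bytes requested)
trace = [(0.0, 4096), (1.0, 4096), (1.5, 8192), (4.0, 4096), (4.2, 8192)]

def replay(trace, bandwidth_bytes_per_us):
    """Serve transactions in order on one shared channel; a transaction's
    latency is its queueing delay plus its transfer time."""
    free_at = 0.0
    latencies = []
    for issue, size in trace:
        start = max(issue, free_at)
        transfer = size / bandwidth_bytes_per_us
        free_at = start + transfer
        latencies.append(free_at - issue)
    return latencies

# Parametric sweep over candidate memory bandwidths (bytes/us).
for bw in (2048, 4096, 8192, 16384):
    lat = replay(trace, bw)
    print(f"bw={bw:6d} B/us  avg={sum(lat)/len(lat):6.2f} us  max={max(lat):6.2f} us")
```

Sweeping the bandwidth parameter against the same trace shows how average and worst-case latency respond, which is the kind of sensitivity data used to dimension the interconnect and memory subsystem.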
Hardware-Software Performance Validation Using Processor Models and Critical Software
After exploration, the candidate architecture model can be refined by replacing the trace-driven and task-driven traffic generators with cycle-accurate processor models.
- This enables architects to validate the candidate architecture using the available performance-critical software
- Software and hardware analysis views can be visualized together to provide unique system-level visibility for measuring performance and power and confirming that goals are met, as sketched below
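As a rough illustration of correlating software and hardware views against goals (the traces, budgets, and correlation method here are hypothetical, not the tool's analysis views or output formats), consider:

```python
# Minimal combined analysis sketch (illustrative only; all data are assumptions).

# Software view: task name -> (start ms, end ms), e.g. from a processor-model run.
sw_tasks = {"decode": (0.0, 2.1), "inference": (2.1, 9.4), "render": (9.4, 11.8)}

# Hardware view: (time ms, DDR utilization %, power mW) samples from the same run.
hw_samples = [(1.0, 35, 420), (5.0, 88, 730), (10.0, 52, 510)]

LATENCY_BUDGET_MS = 12.0   # hypothetical frame-latency goal
POWER_BUDGET_MW = 750      # hypothetical power goal

frame_latency = max(end for _, end in sw_tasks.values())
peak_power = max(p for _, _, p in hw_samples)

# Attribute each hardware sample to the software task active at that time.
for t, util, power in hw_samples:
    task = next((n for n, (s, e) in sw_tasks.items() if s <= t < e), "idle")
    print(f"t={t:5.1f} ms  task={task:9s}  ddr={util:3d}%  power={power} mW")

print(f"latency {frame_latency:.1f} ms "
      f"({'OK' if frame_latency <= LATENCY_BUDGET_MS else 'MISS'}), "
      f"peak power {peak_power} mW "
      f"({'OK' if peak_power <= POWER_BUDGET_MW else 'MISS'})")
```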