To help edge AI system architects bring their visions to life, SiMa.ai offers a heterogeneous compute platform that balances power and performance with ease of use and time-to-market. In the ever-changing world of AI inference, architects also need the flexibility to add machine learning to legacy applications and to upgrade ML capabilities as technology and application needs evolve.
SiMa.ai’s new Machine Learning SoC (MLSoC™) platform supports both ML and traditional compute with high performance, low power, and a software-first philosophy that accelerates design velocity. Based on feedback from dozens of customers, we are building a software-centric, easy-to-use architecture that consistently performs 30x better than alternatives as measured in frames per second per watt (FPS/W). We are working closely with customers to understand their applications and map them to our hardware.
By providing early access to our SDK, we are helping customers accelerate their time-to-market and deliver machine learning-enabled products before their competition. Supporting industry-standard machine learning frameworks such as Tensor Virtual Machine (TVM), as well as OpenVX, OpenCV, and OpenCL, allows customers to focus on their applications rather than on the hardware or its interfaces. In addition, a standards-based, open-source approach improves designers’ ease of use and, in conjunction with the selected hardware, helps future-proof designs for next-generation demands.
Getting software into customers’ hands early is critical, though we also know that the selection of IP for the hardware is paramount to the success of our product. After gathering significant customer input to narrow down the list of IP to integrate onto our platform, we determined that we needed an IP vendor who could provide a robust and complete IP portfolio as well as verification and validation tools. We selected Synopsys as they offer the complete processor, interface, and security IP portfolio we need to address our customers’ challenges, along with tools such as the Fusion Design Platform to enable optimized implementation.
For example, the MLSoC platform delivers up to 50 tera-operations per second (TOPS) of total performance at an efficiency of 10 TOPS per watt, enabling ML workloads at the edge that would traditionally require cloud-level compute. SiMa.ai selected the DesignWare® ARC® Embedded Vision Processor because its power/performance profile meets our requirements for computer vision processing (Figure 1). The DesignWare ARC EV7x Vision Processors’ heterogeneous multicore architecture includes up to four high-performance VPUs. Each EV7x VPU combines a 32-bit scalar unit with a 512-bit wide vector DSP that can be configured for 8-, 16-, or 32-bit operations to perform simultaneous multiply-accumulates on different streams of data.
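To put these figures in perspective, here is a back-of-the-envelope sketch using only the numbers quoted above (vector width, VPU count, total TOPS, and TOPS/W); it is illustrative arithmetic, not a model of the actual EV7x pipeline or the MLSoC’s internal architecture:

```python
# Illustrative arithmetic based on the figures quoted in the text.
# Not a model of the EV7x pipeline; just lane counts and a power envelope.

VECTOR_WIDTH_BITS = 512      # width of each EV7x vector DSP
VPUS_PER_PROCESSOR = 4       # up to four VPUs per EV7x processor

def mac_lanes(element_bits: int) -> int:
    """SIMD multiply-accumulate lanes per VPU for a given element width."""
    return VECTOR_WIDTH_BITS // element_bits

for bits in (8, 16, 32):
    per_vpu = mac_lanes(bits)
    print(f"{bits:>2}-bit ops: {per_vpu} lanes/VPU, "
          f"{per_vpu * VPUS_PER_PROCESSOR} lanes across four VPUs")
# → 8-bit: 64 lanes/VPU (256 total); 16-bit: 32 (128); 32-bit: 16 (64)

# Platform-level power envelope implied by the quoted figures:
total_tops = 50              # up to 50 TOPS total performance
efficiency_tops_per_w = 10   # at 10 TOPS per watt
print(f"Implied power at peak: {total_tops / efficiency_tops_per_w:.0f} W")
# → Implied power at peak: 5 W
```

Narrower element widths trade precision for parallelism: halving the element size doubles the number of simultaneous multiply-accumulates each 512-bit vector unit can issue.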
In addition to the processing function, we enhanced both power and performance with the selected DesignWare Security and Interface IP. Synopsys’ Security IP helps protect the system-on-chip’s (SoC’s) data and algorithms, while MIPI CSI-2, Ethernet, PCI Express, and LPDDR IP provide high-speed camera, host processor, and memory connectivity at the lowest power.