There are many ways to bring AI to your edge vision-processing application. While a GPU certainly has the horsepower to serve as an AI accelerator, it’s not the most efficient choice for space-constrained and/or battery-operated devices. Another option is a low-cost, off-the-shelf microprocessor, but only if the application does not require a real-time response, as these processors generally don’t deliver the throughput or low latency that real-time vision demands. You can also develop your own processor or license one from an established vendor. The performance you need, along with your available resources, will guide you to the right choice. Will you be able to meet your frames-per-second target within a given power budget? Do you have the in-house skill set to do so? Does the volume of end devices you’ll produce justify the expense of maintaining your own chip development team? Are you also prepared to invest heavily in software resources to create the development tools needed to support your SoC?
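The frames-per-second-within-a-power-budget question above can be reduced to a quick back-of-the-envelope check. The sketch below is illustrative only: the 2 W budget and 50 mJ-per-frame figures are hypothetical assumptions, not measurements of any real device.

```python
# Feasibility check: can a candidate accelerator sustain a target FPS
# within a power budget? All numbers here are illustrative assumptions.

def max_fps(power_budget_w: float, energy_per_frame_j: float) -> float:
    """Sustained FPS ceiling implied by a power budget.

    Average power (W) = energy per frame (J) * frames per second,
    so the ceiling is power_budget / energy_per_frame.
    """
    return power_budget_w / energy_per_frame_j

# Hypothetical figures: 2 W budget, 50 mJ per inference frame.
budget_w = 2.0
energy_per_frame_j = 0.050

fps_ceiling = max_fps(budget_w, energy_per_frame_j)
print(f"Power-limited ceiling: {fps_ceiling:.0f} FPS")  # → 40 FPS

target_fps = 30
print("Target feasible?", fps_ceiling >= target_fps)  # → True
```

In practice, the energy-per-frame figure would come from measuring your candidate architecture running a representative workload, which is where benchmarking comes in.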
Edge AI chips are designed around models trained to deliver insights for their particular end applications. Running a representative neural network (NN) workload on a candidate architecture will give you an indication of the processor’s performance. For real-time embedded systems, you’ll also need to factor in parameters like power, latency, bandwidth, and area for a more realistic benchmarking picture.
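A minimal version of such a benchmark can be sketched as a timing harness around a single inference call. Here `run_inference` is a hypothetical stand-in for one NN forward pass; in a real evaluation you would replace it with your runtime’s invoke call and pair the latency numbers with power and bandwidth measurements from the board.

```python
# Minimal latency/throughput benchmark harness (sketch).
# `run_inference` is a placeholder for a real NN forward pass on the
# target hardware; swap in your inference runtime's call.

import time
import statistics

def run_inference() -> None:
    # Dummy compute standing in for one forward pass.
    sum(i * i for i in range(10_000))

def benchmark(fn, warmup: int = 5, iters: int = 50) -> dict:
    for _ in range(warmup):          # warm caches/JITs before timing
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    p50 = statistics.median(samples)  # median is robust to OS jitter
    return {
        "p50_latency_ms": p50 * 1e3,
        "fps_estimate": 1.0 / p50,
    }

stats = benchmark(run_inference)
print(stats)
```

Median latency is reported rather than the mean because a single OS scheduling hiccup can otherwise dominate the result on an embedded Linux target.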
Designing your own processor provides the freedom to differentiate, provided you have the team and expertise to do so. However, you need to ensure that your processor can support new AI algorithms as they become available. For highly custom requirements, application-specific instruction-set processor (ASIP) tools simplify the design process by letting you tailor a processor to a specific domain or set of applications.
Licensing processor IP shortens your time-to-market and eliminates the need to invest in a design team. Proven and tested processor IP that can be programmed and configured provides the flexibility to support new AI algorithms as they emerge.