Synopsys Insight Newsletter 


Issue 2, 2013

Technology Update
Developing Embedded Vision Systems

Markus Willems, senior product marketing manager at Synopsys, explains why application-specific processors will be found at the heart of most embedded vision systems and how Synopsys’ Embedded Vision Development System enables design teams to quickly meet their power, performance and programmability goals when implementing such systems.

At the heart of all embedded vision systems is the fundamental need to process video data. Video processing can be extremely computationally demanding, often involving tens of billions of operations per second. For example, one implementation of an optical flow algorithm for a single frame of a 720p video required about 200 million cycles when using optimized software on a typical DSP processor.1
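
To put that figure in context, a quick back-of-the-envelope calculation shows the sustained cycle budget implied by this one function. The sketch below is purely illustrative and assumes a typical 720p rate of 30 frames per second, a figure not stated in the source.

```c
#include <stdio.h>

int main(void) {
    /* Cycle cost of optical flow on one 720p frame (from the text). */
    const double cycles_per_frame = 200e6;
    /* Assumed video rate; 30 fps is common for 720p streams. */
    const double frames_per_second = 30.0;

    /* Sustained load from this single function alone: about 6 billion
     * cycles per second on the DSP in question. */
    printf("%.1f billion cycles/s\n",
           cycles_per_frame * frames_per_second / 1e9);
    return 0;
}
```

A complete vision pipeline runs several such kernels per frame, which is how the aggregate load reaches the tens of billions of operations per second cited above.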

Success in many of the market opportunities identified in Jeff Bier’s “Embedded Vision: Systems that See and Understand” depends on implementing high-performance embedded vision processing in mobile systems, where power consumption is a critical issue.

Another issue that design teams must take into account is that embedded vision algorithms are a competitive differentiator and hence evolve constantly. It’s common for a design team to refine an algorithm throughout the development process and even after the product implementing it is in production. It is therefore essential that the implementation be programmable, so that manufacturers can accommodate new features and algorithms over time.

Processor Solutions
Typically, standard RISC cores, DSPs and GPUs don’t deliver the levels of performance, power efficiency and programmability required by today’s embedded vision applications. As a result, many design teams tackling embedded vision systems are turning away from standard off-the-shelf processor IP in favor of designing their own application-specific instruction-set processors (ASIPs) that typically become coprocessors in a larger design.

Design teams can create and optimize their own ASIPs to meet the exact needs of embedded vision algorithms. They can optimize the instruction set, register architecture, memory and bus interfaces, select parallel execution units and pipeline structures, and customize every other aspect of the ASIP in order to minimize power consumption, support an appropriate level of programmability, and meet the performance needs of the algorithm. This degree of configurability and customization is critical in the embedded vision space because the variety of embedded vision applications means there is no one-size-fits-all embedded vision processor implementation.
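
As a concrete illustration of what such customization can buy, consider a team that adds an 8-way sum-of-absolute-differences instruction, a common building block in vision kernels, and exposes it to application code through a compiler intrinsic. The sketch below is hypothetical, not an actual EVDS interface: sad8() stands in for the single-cycle hardware operation a real ASIP would provide, implemented here in plain C so the code also runs on a host.

```c
#include <stdint.h>

/* Hypothetical intrinsic for a custom 8-way sum-of-absolute-differences
 * instruction. On a real ASIP, the generated C compiler would map this
 * call onto one dedicated single-cycle operation; the loop below is a
 * host-side reference implementation with the same semantics. */
static uint32_t sad8(const uint8_t *a, const uint8_t *b) {
    uint32_t acc = 0;
    for (int i = 0; i < 8; i++)
        acc += (a[i] > b[i]) ? (uint32_t)(a[i] - b[i])
                             : (uint32_t)(b[i] - a[i]);
    return acc;
}

/* Block-matching kernel of the kind found in motion estimation: with the
 * custom instruction, the inner loop costs one operation per 8 pixels
 * instead of roughly 24 RISC instructions. */
uint32_t block_sad(const uint8_t *cur, const uint8_t *ref,
                   int stride, int block) {
    uint32_t total = 0;
    for (int y = 0; y < block; y++)
        for (int x = 0; x < block; x += 8)
            total += sad8(&cur[y * stride + x], &ref[y * stride + x]);
    return total;
}
```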

Tackling ASIP Design
Design teams must address both hardware and software challenges when developing ASIPs.

The design of the hardware architecture is key in determining the performance and power of the final ASIP. It’s imperative that the design team be able to fully explore a range of architectural options in order to select the best starting point for the design.

Having built its own processor, the design team must take responsibility for software development – both the application code and the development tools, including the assembler, linker, simulator and debugger – all of which depend on having a specification for the ASIP architecture.

ASIP development is a classic case of “chicken and egg” engineering. To explore the hardware architecture effectively, the design team needs early access to software development tools. Yet those tools cannot be created without a specification of the instruction set and hardware architecture.

Synopsys Embedded Vision Development System
Synopsys’ Embedded Vision Development System (EVDS) enables design teams to rapidly develop application-specific processors tailored to the power and performance needs of embedded vision applications. EVDS (Figure 1) incorporates the Synopsys Processor Designer™ toolset, a number of pre-designed embedded vision processor examples and pre-verified design methodologies. The examples and methodologies act as building blocks that enable design teams to explore and refine new architectures in hours instead of weeks, saving many staff-months of effort.

Figure 1: Embedded Vision Development System

Using EVDS, design teams describe candidate architectures in LISA, a processor description language created for specifying instruction-set processors. A LISA description captures components such as register files, pipelines, memories and instructions. From this description, automated tools within Processor Designer generate the instruction set simulator (ISS) and a suite of development tools including the assembler, linker, archiver and C/C++ compiler.

Equipped with an ISS, design teams can then execute the compiled code to profile the architecture performance. If the design doesn’t meet the specified performance, the team can easily modify the architecture by changing memory access, register configuration and the instruction set. This flexibility gives design teams a significant advantage over off-the-shelf configurable processors that are built on fixed pipelines and register structures.
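
From the application side, such profiling might look like the sketch below. The read_cycles() hook is hypothetical, standing in for whatever cycle-accurate counter the generated ISS exposes to target code, and is stubbed out here so the sketch compiles on its own.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hook for the simulator's cycle counter; the real
 * mechanism is specific to the generated ISS. Stubbed for the sketch. */
static uint64_t read_cycles(void) { return 0; }

/* Placeholder for the vision kernel under evaluation. */
static void vision_kernel(void) { }

int main(void) {
    uint64_t start = read_cycles();
    vision_kernel();
    uint64_t elapsed = read_cycles() - start;

    /* Compare this count across architecture variants: e.g., wider
     * memory ports or extra registers should show up as fewer cycles. */
    printf("kernel: %llu cycles\n", (unsigned long long)elapsed);
    return 0;
}
```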

Figure 2: ASIP design flow using Synopsys Processor Designer

A design team can recompile and simulate the C/C++ model until it achieves the design performance goals (Figure 2). Once those goals are met, the team can generate synthesizable RTL code, follow a traditional implementation flow, and evaluate the design against its power and area (cost) goals. By using EVDS, design teams can efficiently describe different architectures and automatically create the software tools and hardware description, accelerating their design schedules.

Processor Designer thus resolves the “chicken and egg” problem: the LISA processor description serves as the golden reference from which both the software development tools and the RTL are generated automatically, allowing architectural alternatives to be explored in minutes.

System Prototyping
Hardware-based prototyping is a prerequisite for embedded vision. It enables design teams to validate the algorithm(s) and the hardware/software integration and allows them to demonstrate the capabilities of their designs to prospective customers. Simulation alone is too slow for hardware/software validation because embedded vision applications use compute-intensive algorithms and large amounts of data.

When it comes to prototyping ASIP designs, FPGAs must be used, since no silicon samples exist for a custom processor. FPGA-based prototypes provide SoC design teams with cycle-accurate, high-performance execution and real-world interface connectivity.

Embedded vision systems require dedicated I/O capabilities for video data, as well as dedicated memory connected to the system to handle large amounts of data. Setting up and configuring all of these elements normally takes considerable time; with a pre-configured prototyping environment, designers can focus on optimizing the actual design rather than on building up the FPGA prototype.

Synopsys’ HAPS® is a portfolio of FPGA-based prototyping solutions consisting of modular, easy-to-use hardware systems supported by an integrated tool flow that includes a multi-FPGA ASIC prototyping environment, FPGA synthesis, and interactive debugging software. Design teams can configure HAPS systems to suit their end applications, choosing from a variety of daughter boards that provide high-performance physical interfaces such as DDR memory, video, and USB. Design teams can reduce system bring-up time by months by using the pre-validated embedded vision reference flows for HAPS FPGA-based prototypes included in the EVDS.

Design teams can implement the optimized ASIP RTL from Processor Designer in the HAPS prototyping system, enabling them to use exactly the same RTL for both design and validation. HAPS prototypes also allow teams adopting the application-specific processor to integrate other digital IP into the SoC design and connect the prototype to real-world I/O such as cameras, monitors and memory cards, validating hardware/software integration and application performance.

Canny ASIP Performance
The Canny edge detector is a well-known algorithm that is widely used in embedded vision applications such as lane departure warning, traffic sign recognition and iris detection.
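
To show what the benchmark below is measuring, the C sketch here outlines the first two stages of a Canny pipeline: gradient computation and non-maximum suppression. It is a simplified illustration rather than the EVDS reference code; Gaussian smoothing and border handling are omitted. Both stages walk the image block by block with regular memory access, which is exactly the behavior that wide, parallel ASIP data-paths accelerate.

```c
#include <math.h>
#include <stdint.h>

/* Stage 1 - Sobel gradients: per-pixel magnitude and quantized direction.
 * Regular, block-based memory access. */
void gradients(const uint8_t *img, int w, int h, float *mag, uint8_t *dir) {
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int i = y * w + x;
            int gx = -img[i-w-1] + img[i-w+1] - 2*img[i-1] + 2*img[i+1]
                     - img[i+w-1] + img[i+w+1];
            int gy = -img[i-w-1] - 2*img[i-w] - img[i-w+1]
                     + img[i+w-1] + 2*img[i+w] + img[i+w+1];
            mag[i] = sqrtf((float)(gx * gx + gy * gy));
            /* Quantize gradient direction to 0/45/90/135 degrees. */
            float deg = atan2f((float)gy, (float)gx) * 57.29578f;
            if (deg < 0.0f) deg += 180.0f;
            dir[i] = (uint8_t)(((int)((deg + 22.5f) / 45.0f)) & 3);
        }
    }
}

/* Stage 2 - non-maximum suppression: keep a pixel only if it is the
 * maximum along its gradient direction. Also regular and block-based. */
void nms(const float *mag, const uint8_t *dir, int w, int h, float *out) {
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int i = y * w + x, off;
            switch (dir[i]) {            /* neighbor along the gradient */
            case 0:  off = 1;     break;    /* 0 degrees   */
            case 1:  off = w + 1; break;    /* 45 degrees  */
            case 2:  off = w;     break;    /* 90 degrees  */
            default: off = w - 1; break;    /* 135 degrees */
            }
            out[i] = (mag[i] >= mag[i - off] && mag[i] >= mag[i + off])
                     ? mag[i] : 0.0f;
        }
    }
}
```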

An implementation of the Canny edge detector algorithm is provided as part of the EVDS and has proved to be a useful reference design for comparing ASIP performance against that of a typical RISC architecture. The benchmark is summarized in Table 1.

Table 1: Benchmarking RISC and ASIP performance for the Canny edge detector

The ASIP implementation demonstrates significant performance benefits over the typical RISC architecture. The speedup of the hysteresis function is significant but lower than that of the block-based data-paths: hysteresis operates on the frame as a whole, requires a different memory-access concept, and makes extensive use of control operations.
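
Continuing the illustrative sketch above, the hysteresis stage shows where the difference comes from: its stack-driven traversal follows data-dependent chains of weak pixels anywhere in the frame, so both its memory accesses and its branches are irregular.

```c
#include <stdint.h>

/* Stage 3 - hysteresis thresholding: seed edges from strong pixels, then
 * follow chains of connected weak pixels across the whole frame. 'edge'
 * must be zero-initialized; 'stack' needs capacity for w*h entries
 * (each pixel is pushed at most once). */
void hysteresis(const float *mag, int w, int h,
                float hi, float lo, uint8_t *edge, int *stack) {
    int sp = 0;
    for (int i = 0; i < w * h; i++)
        if (mag[i] >= hi && !edge[i]) { edge[i] = 1; stack[sp++] = i; }

    /* Data-dependent traversal: irregular, frame-wide memory access and
     * heavy control flow, unlike the block-based stages above. */
    while (sp > 0) {
        int i = stack[--sp];
        int x = i % w, y = i / w;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                int n = ny * w + nx;
                if (!edge[n] && mag[n] >= lo) { edge[n] = 1; stack[sp++] = n; }
            }
    }
}
```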

Summary
Embedded vision is an exciting and dynamic application space that can be implemented across a broad range of products. Meeting the needs of the embedded vision market depends on creating embedded systems that combine high performance, low power and programmability.

Synopsys’ EVDS is an integrated solution that accelerates the design of processors for embedded vision based on Synopsys’ Processor Designer tool set and Synopsys’ HAPS FPGA-based prototyping system.

EVDS enables designers to quickly and efficiently develop application-specific processors tailored to the power and performance specifications of their embedded vision applications. By automating the hardware and software implementation and providing pre-verified examples and building blocks, it saves design teams many staff-months of effort in exploring and tuning new processor architectures for their specific applications.

EVDS can also reduce system bring-up time by months thanks to its pre-validated reference flows for HAPS FPGA-based prototypes.

References
1. Jeff Bier, “Implementing Vision Capabilities in Embedded Systems,” Embedded Systems Conference, San Jose, April 2013.



About the Author
Markus Willems is responsible for Synopsys' system-level solutions with a focus on processor development and signal-processing solutions. He has been with Synopsys for 14 years, having served in various system-level and functional verification marketing roles. He has worked in the electronic design automation industry for more than 20 years in a variety of senior positions, including marketing, applications engineering, and research. Prior to Synopsys, Markus was product marketing manager at dSPACE, Paderborn, Germany. Markus received his Ph.D. (Dr.-Ing.) and M.Sc. (Dipl.-Ing.) in Electrical Engineering from Aachen University of Technology in 1998 and 1992, respectively. He also holds an MBA (Dipl.-Wirt.-Ing.) from Hagen University.

