Embedded Vision Development System FAQ 

  1. What is embedded vision?

    Answer: Embedded vision refers to machines that extract meaning from visual inputs in order to understand their environment. Embedded vision is the merging of two technologies: embedded systems and computer vision (also sometimes referred to as machine vision).

  2. What is the market relevance of embedded vision?

    Answer: Embedded vision applications span many different domains. Examples include advanced driver assistance systems (automotive), augmented reality (consumer, industrial), gesture recognition (consumer, medical), surveillance cameras (security), and many more. Embedded vision is predicted to be one of the fastest-growing technology domains. Data points:

    • IMS Research predicts an annual revenue growth of 6-9 percent worldwide for special-purpose processors in under-the-hood automotive applications, to $187 million by 2017
    • ABI Research forecasts 600 million smartphones with vision-based gesture recognition to be shipped in 2017
    • Juniper Research predicts that augmented reality applications will generate close to $300 million from mobile devices in 2013

  3. What is the Embedded Vision Development System?

    Answer: The Embedded Vision Development System enables OEMs and semiconductor companies to rapidly design and prototype specialized embedded vision processors optimized for the power and performance needs of embedded vision applications. It is an integrated solution based on Synopsys' Processor Designer and Synopsys' HAPS FPGA-based prototyping system. The Embedded Vision Development System includes pre-verified design examples to help designers quickly implement an application-specific instruction-set processor (ASIP) optimized to meet their specific SoC objectives for power consumption and performance. It provides a ready-to-use, modifiable base processor, including a full C/C++ compiler that supports all functions provided by the OpenCV library.

    The Embedded Vision Development System saves months of engineering effort by combining software and hardware tools that enable designers to analytically arrive at the best processor implementation for their specific application, then quickly prototype the entire SoC to complete the hardware/software integration.

  4. Why do designers consider building their own processor for embedded vision applications?

    Answer: Embedded vision is processing-intensive, and many applications demand extreme cost- and energy-efficiency while also requiring programmability to accommodate new algorithms and new functions over time. Standard RISC cores, DSPs and GPUs are typically not efficient enough, or lack the processing power these applications need. Application-specific instruction-set processors (ASIPs) often yield the best mix of performance, efficiency and flexibility. Often referred to as the three P's (performance, power efficiency, programmability), these requirements can be addressed by tailoring the processor architecture to the application's specific requirements: the instruction set, the register architecture, the memory and bus interfaces, parallel execution units and a tailored pipeline structure, to name a few.

  5. What are the key challenges for designers building their application-specific processor (ASIP) for embedded vision?

    Answer: Any processor design involves two disciplines: the hardware design and the development of architecture-dependent software tools such as the assembler, linker, simulator and debugger. Keeping the hardware and software tools in sync is a key requirement, as is keeping track of all architectural modifications. The development of an application-specific processor includes an intense architecture exploration phase that analyzes the impact of architectural modifications on power, performance and programmability for the given class of applications. For successful architecture exploration, software development tools must be available at the earliest stage of a project. Describing differing architectures efficiently and generating the corresponding software tools quickly preserves design schedules.

  6. What is Synopsys' Processor Designer?

    Answer: Processor Designer is an automated, application-specific processor design and optimization tool that slashes months from processor design time. From a single input specification file, it automatically creates the synthesizable RTL code as well as the processor-specific software development tools, such as the simulator, linker, debugger and assembler, eliminating months of engineering effort. Processor Designer's high degree of automation enables design teams to focus on architecture exploration and application-specific processor development rather than on consistency checking and verification of individual tools.

  7. Why build an FPGA prototype for an embedded vision application?

    Answer: FPGA-based prototypes provide SoC design teams with cycle-accurate, high-performance execution and real-world interface connectivity. They are primarily used to facilitate software development, hardware/software integration and system validation. Because of the compute-intensive algorithms deployed for embedded vision, validation of the hardware/software integration cannot practically be done in simulation.

  8. What are the key prototyping challenges faced by designers of embedded vision systems?

    Answer: Embedded vision systems require dedicated I/O capabilities for video data. Because of the large amount of data, they also require dedicated memory to be connected to the system. Setting up all these elements from scratch and configuring them correctly takes significant effort: while designers want to focus on the performance of the actual design, they often spend the majority of their time configuring the FPGA setup.

  9. What is Synopsys' HAPS FPGA-based prototyping solution?

    Answer: HAPS is a portfolio of FPGA-based prototyping solutions consisting of a suite of modular, easy-to-use hardware systems supported by an integrated tool flow that includes the Certify® multi-FPGA ASIC prototyping environment, Synplify® FPGA synthesis and Identify® interactive debugging software. HAPS systems can be tailored to the user's end application or kept general-purpose to provide the most flexibility across projects. HAPS systems offer a variety of daughter boards that provide high-performance physical interfaces such as DDR memory, video and USB to the FPGA ICs.

  10. When is the Embedded Vision Development System available?

    Answer: The Embedded Vision Development System is available now.
