Synopsys Insight Newsletter 


Issue 2, 2013

Industry Highlight
Embedded Vision: Systems that See and Understand

What do products such as Microsoft’s Kinect game controller, Mercedes’ pedestrian detection system, and automatic panorama-stitching photo apps have in common? They all have embedded vision – they can “see” and “understand”. Jeff Bier, founder of the Embedded Vision Alliance, provides an update on the market potential for embedded vision applications.

Thanks to improvements in chips and algorithms, it is now becoming possible to incorporate computer vision into a wide range of products, including embedded systems, mobile devices, PCs and the cloud. The term “embedded vision” describes the growing phenomenon of systems that “see and understand.” Embedded vision is enabling design teams to develop exciting new products for a range of markets.

Embedded vision is creating huge growth opportunities in the markets in which it is deployed. For example, Microsoft sold two million units of its Kinect video game controller in the first two months after its launch, making it one of the fastest-selling consumer electronics products ever. Kinect, which takes advantage of embedded vision to track Xbox 360 users’ movements without the need for hand-held controllers, makes video games accessible to people who aren’t good at manipulating buttons and joysticks, and opens up possibilities for new kinds of video games, such as dancing and yoga, which don’t make sense with traditional hand-held controllers. Microsoft recently introduced its latest gaming console, the Xbox One, which includes the second-generation Kinect as a standard feature.

Figure 1: Microsoft’s Xbox Kinect

Consumer – Beyond Gaming
The market’s reaction to the Kinect bodes well for the deployment of embedded vision within the gaming industry. However, consumer applications are not limited to gaming, and the larger realm of consumer electronics may be the most exciting area for the technology’s future deployment.

For example, Samsung and LG, along with other electronics manufacturers, have developed televisions with built-in gesture controls. Facial recognition is being used to enhance security (for example, locking a computer screen if a different face appears in front of it), and eye tracking is being used to simplify user interfaces (for example, keeping the display backlight turned on and automatically scrolling while a user reads an article on a smartphone or tablet).

Another interesting consumer embedded vision application is video content search. Consumers generate massive amounts of video but then do little with it, because self-generated footage is difficult to organize and search. Sometimes called “video content analysis,” the application of embedded vision to stored video will soon address this challenge, enabling consumers to search their recordings for specific people, settings, and events, such as “soccer game in 2012” or “dinner with Sarah and Bob.”
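
As a simple illustration of one building block behind such search, the sketch below segments a home video into scenes by comparing color histograms of successive frames; a searchable index of people and events would be layered on top of a segmentation like this. This is a minimal sketch assuming OpenCV is available; the file name, sampling rate and similarity threshold are hypothetical.

```python
# Minimal sketch: split a home video into scenes by detecting abrupt
# changes in the color histograms of sampled frames. Illustrative only;
# real video content analysis adds face, object and event recognition.
import cv2

def find_scene_changes(path, threshold=0.6, sample_every=15):
    """Return timestamps (in seconds) where the content changes sharply."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    changes, prev_hist, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [50, 60],
                                [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                similarity = cv2.compareHist(prev_hist, hist,
                                             cv2.HISTCMP_CORREL)
                if similarity < threshold:
                    changes.append(frame_idx / fps)
            prev_hist = hist
        frame_idx += 1
    cap.release()
    return changes

# Example: list candidate scene boundaries in a (hypothetical) home video.
print(find_scene_changes("family_video.mp4"))
```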

Established Markets
While “embedded vision” is a relatively new term, the technology is already firmly established in a number of markets; the most successful applications to date are in factory automation.

Embedded vision is commonly used to guide robots and perform quality control in packaging plants, pharmaceutical manufacturing and automotive assembly, to name a few examples. Some of these systems use PCs for processing, while others push the processing into the camera itself, creating compact vision systems referred to as “smart cameras.”

One of the most interesting applications of embedded vision is in healthcare, where it has numerous uses. In hospitals, for example, vision systems monitor healthcare providers’ compliance with hand washing protocols to reduce the spread of infections, providing audio reminders when needed. Vision-guided surgical robots are already in routine use, and vision systems clearly have the potential to improve the accuracy of diagnostic imaging technologies. Embedded vision also enables more effective healthcare at home, for example through mobile phone apps that perform functions such as monitoring skin lesions for potential danger signs.

Figure 2: Embedded vision has great potential in healthcare, with numerous applications such as Philips’ Vital Signs Camera app.

Another promising application area for embedded vision is the security market. For example, a number of U.S. airports have deployed vision systems to watch for people entering secure areas through the exits. Improvements in processors, sensors and algorithms enable increasingly sophisticated surveillance applications, such as generating an alert when a person leaves an object behind or when an object is removed.

Figure 3: Security cameras using embedded vision can increase public safety at places like airports and malls by alerting authorities to potential security risks.
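
One common building block behind such surveillance analytics is background modeling: pixels that persistently differ from a learned background model suggest that something has been added to, or removed from, the scene. The sketch below is a minimal illustration using OpenCV's MOG2 background subtractor; the video file name and alert threshold are assumptions, and deployed systems add object tracking, classification and temporal rules before raising an alarm.

```python
# Minimal sketch: flag large changes in a monitored scene using background
# subtraction. Illustrative only; the input file and threshold are
# placeholders, not a real surveillance configuration.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                detectShadows=False)

cap = cv2.VideoCapture("security_feed.mp4")  # hypothetical video source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Foreground mask: pixels that differ from the learned background.
    mask = subtractor.apply(frame)
    changed = cv2.countNonZero(mask)
    if changed > 0.05 * mask.size:  # arbitrary threshold for illustration
        print("Scene change detected:", changed, "foreground pixels")
cap.release()
```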

Automatic reading of vehicle license plates is another common embedded vision function, increasingly deployed in police vehicles for law-enforcement purposes and in roadways for toll collection.

Driving New Applications
Speaking of vehicles, automotive safety may be today’s most exciting and rapidly maturing embedded vision application. Car manufacturers are beginning to deploy embedded vision in high-end production vehicles. By monitoring driver behavior and road conditions, these systems can enhance the driving experience and improve safety for all road users.

The first mainstream vision-based automotive safety systems – currently being deployed by luxury automakers – focus on functions such as lane departure warning, implemented using cameras and lane-detection algorithms that recognize lane markings and road edges. The system sounds a warning if the driver unintentionally drifts out of the designated lane.
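
To make the idea more concrete, here is a minimal sketch of the kind of lane-marking detection such a system might build on, using OpenCV edge detection and a Hough transform; the input image name and drift threshold are hypothetical, and production systems add camera calibration, temporal tracking and far more robust algorithms.

```python
# Minimal sketch: estimate how far detected lane markings have drifted from
# the image center. Illustrative only; not a production lane-keeping system.
import cv2
import numpy as np

def lane_offset(frame):
    """Return the horizontal offset (in pixels) of detected lane markings
    from the image center, or None if no line segments are found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Keep only the lower half of the image, where lane markings appear.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    mask[h // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)

    # Find line segments with the probabilistic Hough transform.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return None

    # Use the mean x-position of segment endpoints as a crude lane center.
    xs = [x for line in lines for x in (line[0][0], line[0][2])]
    return float(np.mean(xs)) - w / 2.0

# Example: warn if the lane center drifts too far from the image center.
frame = cv2.imread("dashcam_frame.jpg")  # hypothetical dashboard-camera frame
if frame is not None:
    offset = lane_offset(frame)
    if offset is not None and abs(offset) > 100:  # arbitrary threshold
        print("Lane departure warning: offset =", offset, "pixels")
```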

Another promising use of embedded vision to improve automotive safety is monitoring the driver. Embedded vision applications can monitor the driver’s condition to ensure that he or she remains alert and awake while driving. They do this by analyzing head and eye movement and body position. Should the system detect signs that the driver is falling asleep at the wheel, it can produce audio and visual alarms to alert the driver to take action. Similarly, eye tracking can be used to monitor where the driver is looking and to provide early warning of potential hazards that the driver does not see.
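
As a rough illustration, the sketch below flags possible drowsiness when no open eyes are detected for a stretch of consecutive frames, using OpenCV's stock face and eye detectors; the camera source and frame-count threshold are assumptions, and real driver-monitoring systems analyze head pose, blink dynamics and gaze direction in far greater depth.

```python
# Minimal sketch: alert when no open eyes are detected for about a second.
# Illustrative only; not a real driver-monitoring implementation.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eyes_visible(frame):
    """Return True if at least one open eye is detected inside a face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) > 0:
            return True
    return False

cap = cv2.VideoCapture(0)  # hypothetical in-cabin camera
closed_frames = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    closed_frames = 0 if eyes_visible(frame) else closed_frames + 1
    if closed_frames > 30:  # roughly one second at 30 fps; arbitrary choice
        print("Drowsiness alert: eyes not detected")
        closed_frames = 0
cap.release()
```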

As production costs fall, manufacturers will inevitably deploy embedded vision more widely across their vehicle ranges and extend the technology to further assist drivers and differentiate their vehicles. For example, algorithms can enable embedded vision systems to recognize and read roadside signs and advise the driver, for instance warning when the speed limit is reduced. Parking assistance is another feature that is beginning to emerge on high-end vehicles.

The “holy grail” for vehicle vision is the self-driving car. Google’s now-famous driverless cars, which can be seen daily on the highways and streets of Silicon Valley, rely on embedded vision in conjunction with other types of sensors to navigate safely through a very wide range of situations. In addition to being safer than conventional cars, driverless cars are expected to improve fuel efficiency and reduce road congestion.

Augmented Reality
Augmented reality uses embedded vision technology to enhance views of physical, real-world situations with computer-generated graphics. For many years, augmented reality has been an obscure niche technology, but today, it is finding widespread use in mobile phone applications for games, advertising and other uses, thanks to the ability of mobile devices to provide sufficient processing performance at low cost and power consumption.

Opportunities for Innovation
Embedded vision technology creates a wealth of opportunities for product innovation. In many cases, the addition of vision capabilities can transform existing products. For example, incorporating gesture recognition, face detection, facial recognition and eye tracking into devices such as televisions will make those systems more responsive and easier to use.

Design teams can also apply embedded vision technology to create completely new types of products, such as surgical robots and safety systems that monitor swimmers in a pool. Electronic systems companies that effectively harness embedded vision technology ahead of their competitors can reap big rewards in many markets.

Embedded vision technology has the potential to touch nearly every aspect of our daily lives, much as mobile phone technology has. Market research firm ABI Research forecasts that 600 million smartphones with vision-based gesture recognition will ship in 2017, while IMS Research predicts that worldwide revenue for special-purpose vision processors in under-the-hood automotive applications will grow 6-9% annually, reaching $187 million by 2016. Juniper Research predicts that augmented reality applications will generate close to $300 million in revenue in 2013. For product creators, it’s an exciting time to innovate with embedded vision.



About the Author
Jeff Bier is founder of the Embedded Vision Alliance, an industry partnership formed to inspire and empower design engineers to create more capable and responsive products through the integration of vision capabilities. The Alliance website, www.Embedded-Vision.com, provides a wealth of free resources for product developers interested in incorporating vision technology into their designs. Jeff is also co-founder and president of BDTI (Berkeley Design Technology, Inc.), a trusted resource for independent analysis and specialized engineering services in embedded digital signal processing technology. He oversees BDTI’s benchmarking and analysis of chips, tools, and other technology, and is a key contributor to BDTI’s engineering services, which focus on the development of optimized embedded vision software and systems.

