Engineering for the Exponential Complexity of Physical AI

Greg Sorber

Jan 28, 2026 / 4 min read

The growing sophistication of intelligent devices was on full display at the 2026 Consumer Electronics Show (CES). Once a showcase for tech gadgets and video games, the event this year was awash with self-driving cars, advanced robotics, and AI-equipped everything — from tiny wearables to massive industrial machines.

All of them were striking examples of how system-level engineering is reshaping the way products are conceptualized, designed, and manufactured.

“They say the ‘C’ [in CES] is for consumer,” quipped Daniel Newman, CEO and chief analyst of The Futurum Group and co-host of The Six Five Pod, a tech-focused podcast. “Should it be for chips?”

Synopsys CEO Sassine Ghazi, the featured guest during a live recording of the podcast from the CES show floor, didn’t disagree.

“If you go back about six years ago, CES was all about smart devices — smart car, smart phone, smart home, etcetera,” Ghazi said, noting the dramatic shift from “smart” to “intelligent” products. “You now have AI driving the device in terms of how it learns, interacts, adapts, reasons. The complexity of building these devices is exponential. And it requires multiple levels of optimization across engineering.”




Driving toward autonomy

The podcast was recorded from the Synopsys booth in the CES West Hall, the show’s hub for automotive technology and advanced mobility. It was an apt setting given recent advances in vehicular autonomy and the broader push for increasingly intelligent devices — all of which require optimization across silicon, software, and physics.

“As these designs get more complex, it’s not just chips and it’s not just a physical design,” said Patrick Moorhead, CEO and chief analyst at Moor Insights & Strategy and co-host of The Six Five Pod. “It’s everything together.”

This complexity is particularly acute in the automotive industry, where the convergence of silicon, software, and physics is redefining what it means to build a car.

“All these devices have a tremendous [amount] of silicon,” Ghazi noted. “What is an automobile today? It’s a software-defined system. A robot is a software-defined robot. So, the same technology is required to deliver that sophistication from silicon all the way up to the physical system.”

Validating a car’s performance and safety has always been a costly and time-consuming endeavor. Doing so for vehicles that must operate autonomously in dynamic and often unpredictable real-world environments represents a far greater challenge — one that cannot be realistically tackled with physical prototypes, crash test dummies, and thousands of miles on the road.

Ghazi said traditional test and verification methods are giving way to virtual simulations.

“Imagine a world where you can virtualize the car itself as well as the environment,” he said, “and do a lot of that development and validation virtually through a digital twin.”

Only a few years ago, digital twins were not practical because the simulations would take too long. With industry leaders like Synopsys and NVIDIA working to dramatically accelerate simulations using GPUs, these barriers are being torn down. And this will spark a new wave of innovation, Ghazi noted — not just in cars, but with countless intelligent systems that interact seamlessly with the physical world. 

Collaboration across silicon, software, and physics

Building these complex physical AI systems takes more than just powerful software and silicon, however. Engineers need a holistic approach to design viable products that perform as expected.

“You build a lot of margins between the software layer, the various physics, the mechanical, the structure,” Ghazi explained, noting the lack of optimization and missed opportunities that arise when components are engineered independently rather than holistically.

Those inefficiencies and costs add up. They can’t be undone late in the development cycle. And they can’t be solved with physical prototypes.

“You cannot do it just by validating through a physical prototype. It’s too expensive, it takes forever, and you’re going to miss many use cases,” said Ghazi. “You cannot have a robot or a car that’s going to cost a few million dollars, [bypass system-level optimization and virtual validation], and think it’s going to be deployed at scale.”


Left to right: Sassine Ghazi (Synopsys), Patrick Moorhead (Moor Insights & Strategy), and Daniel Newman (Futurum) at CES 2026

Accelerating innovation with AI

As engineers pursue the design of more intelligent robots, automobiles, and other products, Ghazi is confident the integration of simulation, AI, and system-level engineering will unlock new frontiers.

“How else do you [validate] the application without having to do physical prototyping, optimizing, and reducing the margins across every engineering discipline?” he asked rhetorically. “That’s where we’re seeing the opportunity.”

It’s a bold vision, but for companies at CES and elsewhere striving to make physical AI a reality beyond tradeshows, it’s a path to overcoming the enormous engineering challenges.

“I’m a strong believer that innovation is driven by constraints and complexity,” Ghazi said. “The more you constrain, the more complex the problem is, that’s where innovation shines.”
