Physical AI refers to applications in which digital artificial intelligence (AI) tools are connected to hardware that senses and acts in the physical world. This integration enables machines to act autonomously and adapt to real-world situations in real time. In the past, machinery either carried out a predefined set of actions or selected among possibilities using hard-coded logical rules. AI-powered physical systems instead use a variety of AI algorithms to interpret data and infer actions, going beyond simple if-then-else logic.
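The difference can be sketched in a few lines of Python. This is an illustrative toy, not any specific product's logic; the sensor names, thresholds, and weights are hypothetical:

```python
# Illustrative sketch: contrast a fixed rule-based controller with a
# simple learned model that infers an action from sensor data.
# All names, thresholds, and weights here are hypothetical.

def rule_based_action(temperature: float) -> str:
    # Traditional machine control: a predefined if-then-else decision.
    if temperature > 80.0:
        return "open_vent"
    elif temperature < 20.0:
        return "close_vent"
    return "hold"

def learned_action(features: list[float], weights: list[float]) -> str:
    # A toy "learned" model: a linear score replaces hand-written rules,
    # so the decision boundary comes from training data, not code.
    score = sum(w * x for w, x in zip(weights, features))
    return "open_vent" if score > 0 else "hold"

print(rule_based_action(85.0))                    # rule fires on one threshold
print(learned_action([85.0, 0.4], [0.01, -1.0]))  # weights learned offline
```

The rule-based controller can only ever do what its author anticipated; the learned model's behavior is shaped by data, which is what lets physical AI adapt to changing conditions.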
Recent advances in generative AI, improvements to machine learning, and practical solutions for edge computing enable the growing application of physical AI. AI-enabled physical systems are finding their way into an increasing list of applications, removing the need for a human in the loop to make decisions based on changing data.
Autonomous systems employ physical AI across many applications, gathering, interpreting, acting on, and learning from information while leveraging advanced AI technology and the latest hardware for sensing, computing, and physical action.
AI Models
The foundational step of any physical AI application is building and training its AI models. To do this, engineers first identify which AI technologies are best suited to predicting the behavior of the physical system the team is automating. Candidate technologies include large language models (LLMs), small language models, and more traditional AI tools such as machine learning (ML).
Foundation models are the most common type of model used in physical AI. When trained on data about the physical world, they are called world foundation models (WFMs). Because these neural networks are trained on large datasets, they can handle a wide range of use cases.
Hardware
Once model training is done, the next step is to assemble the hardware needed for the physical AI application. This hardware can be broken into four categories: training environment, sensors, computing resources, and actuators and output.
Training Environment
Teams responsible for training AI models build and train them in high-performance computing (HPC) data centers. A good example of this is NVIDIA Omniverse, a collection of libraries and microservices for developing physical AI that runs on GPU-enabled hardware on AWS, Azure, or self-hosted computing infrastructure.
Sensors
The input connection between the physical and digital worlds is the set of sensors that provide AI agents with information. Engineers use the term multimodal sensing to refer to the acquisition of high-fidelity data from multiple sensors. The most common types of sensors are:
| Type of Sensing | Usage | Examples |
| --- | --- | --- |
| Object | Determine whether objects exist, along with their size, location, and motion; provide enough sensor data to identify objects by labels or inference | Video cameras, infrared cameras, still cameras, lidar, radar, sonar, ultrasonic sensors |
| Environmental | Measure physical properties of the environment | Microphones, temperature sensors, humidity sensors, gas monitors, pressure sensors |
| Internal | Measure physical properties of the physical AI hardware | Flow sensors, accelerometers, gyroscopes, force and torque sensors, encoders, tactile sensors |
| Location | Measure the location of the physical AI hardware | GPS, proximity sensors, real-time location systems (RTLS) |
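As a rough sketch of multimodal sensing, the snippet below (in Python, with hypothetical sensor names) collects time-stamped readings from several modalities into one observation; real systems also synchronize and calibrate each sensor:

```python
# Hypothetical sketch of multimodal sensing: readings from several sensor
# types are time-stamped and merged into one observation for the AI model.
from dataclasses import dataclass, field

@dataclass
class Observation:
    timestamp: float
    readings: dict = field(default_factory=dict)  # sensor name -> value

def fuse(timestamp: float, **sensor_readings) -> Observation:
    # In a real system each modality (camera, lidar, IMU, GPS) would be
    # synchronized and calibrated; here we simply collect the values.
    return Observation(timestamp, dict(sensor_readings))

obs = fuse(12.5, lidar_range_m=3.2, temperature_c=21.0, gps=(47.6, -122.3))
print(obs.readings["lidar_range_m"])
```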
Computing Resources
Physical AI systems use a combination of remote, local, and edge computing resources to interpret sensor data and make decisions. Edge computing hardware, which benefits from advances in thermal management, multi-chip packaging, and power management, can be deployed alongside the rest of the system so that AI models can be accessed in real time.
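One way to picture the split between edge and remote computing is a latency-based dispatch policy. The figures and names below are hypothetical assumptions, not measurements:

```python
# Illustrative only: a simple policy for choosing where to run inference.
# The latency figures and function names are hypothetical assumptions.

EDGE_LATENCY_MS = 15      # on-device accelerator, small model
REMOTE_LATENCY_MS = 120   # round trip to a data-center GPU, large model

def choose_compute(deadline_ms: float) -> str:
    # Real-time control loops must meet their deadline, so fall back to
    # the local edge model when the remote round trip is too slow.
    if deadline_ms < REMOTE_LATENCY_MS:
        return "edge"
    return "remote"

print(choose_compute(50))    # tight deadline -> edge
print(choose_compute(500))   # relaxed deadline -> remote model allowed
```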
Actuators and Output
The interaction of a physical AI system with the real world occurs through actuators that convert commands into motion and output devices that provide information to humans and other digital and physical systems. Common examples of actuators and output devices are:
● Motors
● Linear actuators
● End effectors
● Hydraulic pistons
● Pneumatic pistons
● Pumps
● Solenoids
● Valves
● Voice coils
● Piezoelectric actuators
● Displays
● Speakers
● Safety lights
● Bluetooth, Wi-Fi, and other protocols
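To illustrate how a decision becomes actuator output, the sketch below maps a high-level action to per-device setpoints. The device names and scaling are hypothetical:

```python
# Hypothetical sketch: mapping a model's abstract action into concrete
# actuator commands. The device names and scaling are assumptions.

def action_to_commands(action: str, intensity: float) -> dict:
    """Translate one high-level action into per-actuator setpoints."""
    intensity = max(0.0, min(1.0, intensity))  # clamp to a safe range
    if action == "grip":
        return {"gripper_motor_pwm": int(255 * intensity), "status_led": "on"}
    if action == "release":
        return {"gripper_motor_pwm": 0, "status_led": "off"}
    return {}  # unknown actions produce no commands

print(action_to_commands("grip", 0.5))
```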
Process
The AI models and hardware are combined in a physical AI system to carry out its assigned tasks. The industry breaks down the process into four steps:
1. Perceive
The first step is to gather data about the physical world around the system. Various sensors in or near the system produce datasets that the physical AI system breaks down into information useful for understanding and making decisions about the physical environment.
2. Reason
The AI tools mentioned above then come into play in the second step. The datasets are consumed by the AI workflow to interpret what is happening and then decide how to react. This goes beyond the if-then-else logic of traditional machine control systems, enabling real-time decision-making.
3. Act
In the third step, the AI workflow takes the advice generated in the reasoning step and produces commands for the actuators. Actions can also be sent to output devices, such as speakers and displays.
4. Learn
The fourth step takes the results of the act step and incorporates them into the AI models to improve the system's performance. This step is optional and can be done remotely in a data center or lab. The learning step can also be done locally on the device, in which case the system is referred to as “embodied AI.”
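The four steps above can be sketched as a control loop, with stub functions standing in for real sensors, models, and actuators:

```python
# A minimal sketch of the perceive-reason-act-learn cycle described above.
# Stub functions stand in for real sensors, trained models, and actuators.

def perceive() -> dict:
    return {"object_seen": True, "distance_m": 0.4}   # stub sensor data

def reason(obs: dict) -> str:
    # Stand-in for the AI model's decision; real systems use trained models.
    return "grasp" if obs["object_seen"] and obs["distance_m"] < 0.5 else "wait"

def act(decision: str) -> bool:
    return decision == "grasp"   # stub: report whether the action succeeded

history = []   # the learn step can replay this to improve the model

for _ in range(3):                               # one iteration per control cycle
    obs = perceive()
    decision = reason(obs)
    success = act(decision)
    history.append((obs, decision, success))     # data for offline learning

print(len(history), history[0][1])
```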
A good way to better understand how AI-driven applications apply these steps is to look at a simple example, such as a robot arm that takes six different types of donuts from a tray and places one of each type in a box. In this scenario:
● Perceive: Cameras and proximity sensors capture the tray and the positions of the donuts on it.
● Reason: The AI model identifies each donut type and decides which donut to pick next and where to place it.
● Act: The system commands the arm's motors and end effector to pick up the donut and place it in the box.
● Learn: Records of successful and failed picks are fed back into the model to improve future accuracy.
Breakthroughs in sensor technology, GPU-driven high-performance computing, digital twin environments, and artificial intelligence tools have dramatically expanded the variety of applications for physical AI. Until recently, it was primarily used with computer vision for industrial robots and leading-edge self-driving cars. It is now finding its way into real-world applications across the aerospace, energy, healthcare, and consumer products industries.
Even though physical AI has moved from R&D into real-world applications, the technology still faces some significant challenges. Researchers and engineers are developing new technologies and processes at an ever-increasing rate to meet the industry's growing needs.
As noted earlier, most physical AI implementations rely on synthetic data generation to train their foundational models. Much of that data is generated with physics-based simulation models. In addition, simulation plays a critical role in the design of electronic and mechanical systems used to implement physical AI.
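As a toy illustration of physics-based synthetic data generation, the snippet below uses a closed-form projectile model to produce labeled training samples; real pipelines rely on full multiphysics solvers, and the ranges here are arbitrary:

```python
# Illustrative sketch of physics-based synthetic data generation: a simple
# analytic model (drag-free projectile motion) produces labeled samples.
# Real pipelines use full multiphysics solvers; this only shows the idea.
import math
import random

G = 9.81  # gravitational acceleration, m/s^2

def flight_range(speed: float, angle_deg: float) -> float:
    # Closed-form range of a projectile on flat ground (no drag).
    a = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * a) / G

random.seed(0)  # reproducible synthetic dataset
dataset = []
for _ in range(100):
    speed = random.uniform(5.0, 20.0)    # arbitrary input ranges
    angle = random.uniform(10.0, 80.0)
    dataset.append(((speed, angle), flight_range(speed, angle)))  # (input, label)

print(len(dataset))
```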
Training with Physics-Based Simulation
Training physical AI models, especially world foundation models, requires extensive, diverse datasets and, therefore, a robust and diverse simulation environment. A good example of this type of environment is the Synopsys multiphysics simulation toolset, which spans silicon to systems:
● Actuator performance can be modeled with Ansys Motor-CAD dedicated electric motor design tool for multiphysics simulation, Ansys Maxwell advanced electromagnetic field solver, Ansys Mechanical structural finite element analysis software, and Ansys Motion multibody dynamics simulation software.
● Complex optical sensors can be designed with Ansys Zemax OpticStudio optical system design and analysis software and Ansys Lumerical INTERCONNECT photonic integrated circuit design and simulation software.
● Radar can be simulated with Ansys HFSS high-frequency electromagnetic simulation software, and lidar with Ansys AVxcelerate Sensors autonomous vehicle sensor simulation software, which uses the industry-leading Ansys SPEOS CAD-integrated optical and lighting simulation software.
Engineers must first create accurate and flexible models, then plug them into environments like NVIDIA Omniverse using standard formats like Universal Scene Description (USD).
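For a sense of what USD looks like, here is a minimal hand-written .usda scene for a hypothetical robot arm; production workflows use the pxr USD libraries rather than raw strings, and the prim names below are invented for illustration:

```python
# Hypothetical illustration of the USD interchange idea: a minimal .usda
# scene description written as plain text. Production workflows use the
# pxr USD libraries rather than hand-written strings.

usda = """#usda 1.0
(
    defaultPrim = "RobotArm"
)

def Xform "RobotArm"
{
    def Cube "Gripper"
    {
        double size = 0.1
    }
}
"""

# Save the scene so another USD-aware tool could open it.
with open("robot_arm.usda", "w") as f:
    f.write(usda)

print("RobotArm" in usda)
```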
When creating a world foundation model, engineers use parametric physics models directly with an MBSE tool like Ansys ModelCenter model-based systems engineering software, a digital twin environment like Ansys Twin Builder simulation-based digital twin platform, or Ansys TwinAI AI-powered digital twin software. Alternatively, engineers can use an optimization tool such as Ansys optiSLang process integration and design optimization software to create reduced-order models for those tools.
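The idea behind a reduced-order model can be sketched simply: precompute a few samples from an expensive solver, then answer online queries with fast interpolation. The "physics" function below is a stand-in, not any real solver:

```python
# Illustrative sketch of a reduced-order model (ROM): replace an
# "expensive" physics evaluation with a cheap surrogate built from a few
# precomputed samples. The physics function here is a stand-in.
import math
from bisect import bisect_left

def expensive_physics(x: float) -> float:
    return math.sin(x) * math.exp(-0.1 * x)   # stand-in for a full solver

# Offline: sample the solver on a coarse grid.
xs = [i * 0.5 for i in range(21)]             # 0.0 .. 10.0
ys = [expensive_physics(x) for x in xs]

def rom(x: float) -> float:
    # Online: fast piecewise-linear interpolation between samples.
    i = min(max(bisect_left(xs, x), 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

err = abs(rom(2.3) - expensive_physics(2.3))
print(round(err, 3))
```

Real ROM tools build far more sophisticated surrogates, but the trade is the same: a small accuracy loss for orders-of-magnitude faster evaluation during training.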
Driving the Design of Physical AI Hardware with Simulation
Simulation plays an equally important role in helping engineers design the sensors, computers, actuators, and output devices used to make physical AI a reality. Electrical engineers rely heavily on simulation for chip design, especially for multichip packages used in modern CPUs and GPUs. A modern chip design pipeline will include tools like Synopsys Platform Architect, Synopsys ZeBu® EmPower, and Synopsys SpyGlass Power to explore and evaluate power optimization using RTL code. The process will then bring a package like Synopsys RedHawk-SC into the mix for digital power integrity sign-off.
At the electronic system level, engineers will use tools like Ansys Icepak electronics cooling simulation software and Ansys Sherlock electronics reliability prediction software to model chip, board, and enclosure-level packages for reliability and to optimize their design to meet power, thermal, and structural requirements.
The same simulation tools that mimic sensors for model training are also used in the design and development process. For example, a combination of Ansys Maxwell and Ansys Motion helps configure a sophisticated stepper motor and its application to a robot arm, while Ansys Fluent fluid simulation software can simulate the flow in a valve as it opens and closes. Control engineers will use an embedded software product, such as the Ansys SCADE Suite model-based development environment, to design the algorithms that execute the actions requested by the physical AI system. Ansys SCADE then automatically generates safety-certified code that meets the safety standards required for safety-critical systems, such as those in transportation.
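As a rough illustration of the deterministic control logic such model-based tools generate (this is hand-written Python, not SCADE output), consider a small state machine for a pick-and-place arm with invented state and event names:

```python
# Not SCADE output: a hand-written sketch of the kind of deterministic
# state-machine logic that model-based tools generate as certified code
# for safety-critical controllers. State and event names are hypothetical.

TRANSITIONS = {
    ("idle", "start"):    "moving",
    ("moving", "reach"):  "gripping",
    ("gripping", "done"): "idle",
    ("moving", "fault"):   "safe_stop",   # every state needs a safe exit
    ("gripping", "fault"): "safe_stop",
}

def step(state: str, event: str) -> str:
    # Unknown events leave the state unchanged -- deterministic by design.
    return TRANSITIONS.get((state, event), state)

s = "idle"
for e in ["start", "reach", "done"]:   # one nominal pick-and-place cycle
    s = step(s, e)
print(s)
```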
An example of an Ansys Motor-CAD simulation of an electric motor
Since physical AI uses almost every type of hardware and is trained on simulation-derived synthetic data, a comprehensive multiphysics and multiscale simulation toolset, such as those offered by Synopsys, is key to advancing the industry.