If you grew up in the 80s, there was no escaping Olivia Newton-John’s catchy hit tune: “Let’s Get Physical.” It was everywhere—and suddenly everyone was wearing headbands and leggings, whether this was a good fashion choice or not. Thankfully, this fashion craze did not survive into the AI era.
But the phrase stuck.
Sort of.
These days, the word "physical" has less to do with spandex and more to do with servos, sensors, and synthetic muscle.
Welcome to Physical AI—the branch of artificial intelligence focused on helping machines operate in the real world. If traditional AI lives in servers and chatbots, Physical AI lives in robots, drones, autonomous vehicles, and other machines that interact with the messy unpredictability of gravity, friction, and humans who occasionally walk right in front of them.
In this post, I’ll break down:
* What Physical AI actually means
* Why NVIDIA, Disney, and just about every robotics company are talking about it
* How simulation, reinforcement learning, and real-world robotics are connected
* And yes, why it's the hottest trend in tech that doesn’t involve Lululemon
I’ve written about Physical AI before in Deep Learning with the Wolf. Check it out.
I also made my own short explainer video about Physical AI.
Part 1: So What Is Physical AI?
When most people hear “AI,” they think of large language models, chatbots, or generative art tools. These systems live in the cloud and operate purely in the digital world.
Physical AI is different—it’s about helping machines function in the physical world.
That means understanding:
* Friction
* Balance
* Motion planning
* How to not fall down a flight of stairs or crush your coffee table
As NVIDIA puts it: “Physical AI combines perception, reasoning, and action in machines that operate in the real world.”
This is the domain of robots, self-driving cars, drones, and sim-to-real learning. These systems must not only think—but move.
Part 2: Why Does This Matter?
Because real-world robots aren’t just executing code. They’re reacting to environments that shift by the second. Gravel paths. Uneven sidewalks. People suddenly doing TikTok dances in front of them.
Physical AI gives machines the capacity to:
* See (through computer vision)
* Think (via onboard AI + edge computing)
* Act (via motion control, balance systems, etc.)
* And crucially: adapt on the fly
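The see-think-act loop above can be sketched as a tiny control loop. This is an illustrative toy, not any real robot's stack: the sensor reading, the policy thresholds, and the actuator command are all hypothetical.

```python
import random

def sense():
    # Hypothetical sensor read: distance to the nearest obstacle, in meters.
    return {"obstacle_distance": random.uniform(0.1, 5.0)}

def think(observation):
    # Toy onboard policy: stop when something is close, otherwise
    # scale speed with available clearance. This is the "adapt" part:
    # the command changes every tick as the world changes.
    distance = observation["obstacle_distance"]
    if distance < 0.5:
        return {"speed": 0.0}  # obstacle too close: stop
    return {"speed": min(1.0, distance / 5.0)}

def act(command):
    # Hypothetical actuator call; here we just report the command.
    return f"set wheel speed to {command['speed']:.2f} m/s"

# One pass through sense -> think -> act; a real robot runs this
# loop continuously, often hundreds of times per second.
for _ in range(3):
    print(act(think(sense())))
```

A real system replaces each stub with something far heavier (vision models in `sense`, a learned policy in `think`, motor controllers in `act`), but the loop shape stays the same.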
Part 3: From Sim to Sidewalk
Training these bots can be risky and expensive—so NVIDIA’s strategy starts in simulation.
Using tools like Isaac Sim, robots can learn to walk, navigate obstacles, or even show social behaviors—entirely in a virtual environment. This is called sim-to-real learning. And once they’ve learned in sim, those behaviors get transferred to a real-world bot.
🗣️ As Spencer Huang (NVIDIA Robotics) told me at GTC: “You want to go sim first... We simulate hundreds—sometimes thousands—of robots at once. It’s trial and error at scale.”
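One reason trial and error at scale works is domain randomization: every simulated run draws slightly different physics, so the behavior that wins must work across many possible worlds, not just one lucky one. Here is a minimal sketch of that idea. Everything in it is a stand-in: the "physics" is a one-line toy, random search stands in for reinforcement learning, and none of this is Isaac Sim's actual API.

```python
import random

def simulate_episode(policy_gain, friction):
    # Toy "physics": reward is high when the controller's gain
    # matches the surface friction it encounters.
    return max(0.0, 1.0 - abs(policy_gain - friction))

def train(candidates=50, trials_per_candidate=200, seed=0):
    rng = random.Random(seed)
    best_gain, best_avg = None, -1.0
    for _ in range(candidates):
        gain = rng.uniform(0.0, 1.0)  # random search stands in for RL
        # Domain randomization: score each candidate across many
        # simulated worlds with different friction, so the winner
        # has to work broadly rather than overfit one setting.
        avg = sum(
            simulate_episode(gain, rng.uniform(0.3, 0.9))
            for _ in range(trials_per_candidate)
        ) / trials_per_candidate
        if avg > best_avg:
            best_gain, best_avg = gain, avg
    return best_gain

print(f"policy gain learned in sim: {train():.2f}")
```

Because each candidate is judged on its average over randomized worlds, the surviving policy tends toward the middle of the friction range, which is exactly the robustness you want before transferring anything to a real-world bot.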
#robotics #droidsnewsletter #physicalAI
droidsnewsletter.com