I spent two minutes at the Humanoids Summit trying to unplug a cable.
I was standing at the Lightwheel booth, holding a pair of game controllers, operating a robotic arm. The task sounded trivial: grab the connector, pull, unplug.
I failed. Repeatedly. I don’t think I even came close.
As a human, unplugging a cable is something you do without thinking. You grab it, wiggle a little, pull — done. For a robot, every part of that interaction has to be learned: where to grab, how hard to pull, what to do when the connector resists, and how to recover when things don’t line up perfectly.
The two-minute video above shows exactly what that learning process looks like. It’s awkward. It’s slow. And it’s far harder than it appears.
That small, frustrating demo explains more about the state of robotics today than any polished keynote or glossy humanoid reveal. Training robots isn’t just about building better hardware or bigger models. It’s about teaching machines how to deal with the messy physical details humans take for granted.
Why Simple Tasks Break Robots (and, Apparently, Diana Too)
In the race to build general-purpose robots — from autonomous vehicles to humanoids — hardware keeps improving. Motors get stronger. Sensors get cheaper. Form factors get sleeker.
Software, however, is starving.
Physical AI systems need enormous amounts of experience to behave reliably in the real world. But gathering that experience physically is slow, expensive, and risky. You can’t crash cars endlessly to see what happens. You can’t let humanoid robots repeatedly fail in kitchens, warehouses, or factories.
This is why simulation has quietly become essential infrastructure for robotics.
Simulation Is Harder Than It Sounds
Simulation sounds straightforward in theory. Build a virtual world. Drop a robot into it. Let it practice.
In reality, it’s brutally difficult.
First, there’s asset discovery. Engineers spend huge amounts of time just finding the right objects to populate a simulation: not “a cable,” but this cable, with the right shape, stiffness, and friction.
Second, there’s physics fidelity. Most simulations assume a rigid world. But the real world is full of non-rigid, deformable objects: cables, cloth, food, wires, plants. These are exactly the things that cause robots to fail once they leave the lab.
And third, there’s evaluation. A robot can succeed endlessly in simulation and still fail the moment it touches reality. Simulation is a proxy, not the real thing, and without careful validation it can create false confidence.
Practicing against a tennis ball machine helps, but it won’t prepare you for wind, pressure, or a slippery court. At some point, you have to play the match.
Lightwheel: Building Worlds for Robots to Learn In
Lightwheel isn’t building robots. They’re building the worlds robots learn in.
Founded by Steve Xie, formerly a lead on autonomous driving simulation at NVIDIA and Cruise, Lightwheel focuses on reducing friction in simulation workflows rather than promising to “solve” sim-to-real outright.
I had the opportunity to interview members of the Lightwheel team. What stood out was how they frame simulation not as a visualization tool, but as a behavioral test environment.
Their framework separates two layers:
* a behavior layer, which captures what a robot does
* a world layer, which represents where it does it
The goal isn’t a perfect digital copy of reality. It’s a world realistic enough to stress-test behavior and measure how well it generalizes.
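Lightwheel’s actual tooling isn’t public, so the idea is easiest to see in a hypothetical sketch. Everything below — `World`, `Attempt`, `evaluate`, and the toy unplug policy — is invented for illustration: one fixed behavior is run across many varied worlds, and the fraction of successes becomes a rough measure of how well it generalizes.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class World:
    """The world layer: where the behavior runs.

    Only two physical properties are varied here, purely for
    illustration; a real simulator would vary many more.
    """
    name: str
    friction: float   # grip friction on the connector (unitless)
    stiffness: float  # cable stiffness (illustrative N/m)

@dataclass
class Attempt:
    world: World
    success: bool

def evaluate(behavior: Callable[[World], bool],
             worlds: List[World]) -> float:
    """Run one behavior across many worlds; return its success rate."""
    attempts = [Attempt(w, behavior(w)) for w in worlds]
    return sum(a.success for a in attempts) / len(attempts)

# The behavior layer: a toy "unplug the cable" policy that only
# works when the cable is grippable and not too stiff.
def toy_unplug_policy(world: World) -> bool:
    return world.friction >= 0.3 and world.stiffness <= 500.0

worlds = [
    World("lab bench", friction=0.6, stiffness=200.0),
    World("greasy connector", friction=0.1, stiffness=200.0),
    World("rigid cable", friction=0.6, stiffness=900.0),
]
rate = evaluate(toy_unplug_policy, worlds)  # succeeds in 1 of 3 worlds
```

The point of the separation is that `toy_unplug_policy` never changes between runs; only the worlds do. A behavior that scores well only on the “lab bench” world has memorized its environment, not learned the task.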
www.droidsnewsletter.com