The worldwide AI race is quickly moving beyond the computer screen and into the physical world around us.
Top labs are intent on building “thinking” robots with brains that mimic our own, able to interact with the world and manipulate objects.
And much like in real life, as any teenager can attest, the first tasks boffins are teaching the robots to take on are things like emptying the washing machine, washing the dishes, and sweeping up.
But far from the disc-shaped robo-vacuum cleaners of a couple of years ago, bleeding-edge tech learns by trial and error, as a human does.
“Physical AI is AI for the body. It’s about having different forms of robots that communicate with each other, safely work side-by-side with humans, and learn as they perform tasks.
“The most relevant part of all this is that the robots are not programmed so much as they adapt to the environment by seeing and doing and being instructed,” Bettina Schön-Behanzin, a vice president at Agile Robots, which is headquartered in Munich and recently opened an office in Palo Alto, Calif., told The Post.
The work can also be far from mundane. At Neuralink, tech entrepreneur Elon Musk’s company developing implantable brain-computer interfaces, robots powered by physical AI work alongside surgeons to perform the delicate procedure of placing electrodes into human brains.
Physical AI is also being deployed to create so-called “self-driving factories.” Agile is invested in this approach, in which its robots become part of the production line, building new and better robots — the next batch off the line.

Venture capitalist Bob Nelsen notes that while standard AI – the tech that powers ChatGPT and Claude on our computers – comes up with text and visual solutions, physical AI, as the name implies, ventures beyond the screen. Investor’s Business Daily ticks off physical AI being used for surgery, food deliveries, and those self-driving factories.

One tech-focused investor described physical AI as “the challenge of figuring out how to reinvent the physical world. It’s a big challenge.”
Another billionaire heavily invested in this technology is former Amazon boss Jeff Bezos. The first company he has taken an active role in since 2021, when he stepped down from the shopping and shipping giant, is Project Prometheus, a secretive robotics laboratory, in which ex-Google executive Vik Bajaj is also involved.
“Jeff’s a futurist who is interested in what’s going to matter to humanity in 1,000 years. He’s said that he’s investing in things that his grandchildren will benefit from,” said Ethan Evans, a former Amazon vice president.
However, the goings on at Project Prometheus are largely shrouded in mystery.
Another of Bezos’ AI ventures, robo-company Physical Intelligence, has also created a great deal of buzz in Silicon Valley.
“Think of it like ChatGPT, but for robots,” the company’s co-founder, Sergey Levine, explained to TechCrunch.
Bezos, 62, already has a number of readymade applications within his Amazon empire for the tech his robotics companies develop, from working in its warehouses and factories to producing goods for it to sell.
The fruits of his new companies’ collective labor, said Evans, “can work in environments that we can’t. But they can also free humans from the drudgery of things like digging ditches. Jeff is trying to create a Federation Starfleet kind of world. He went so far as to put in a guest appearance on one of the ‘Star Trek’ movies.”
Demoing what its robots can do, Physical Intelligence recently boasted on X: “Our model can now learn from its own experience with RL [reinforcement learning]. Our new model can perform real-world tasks: making espresso drinks, folding diverse laundry, and assembling boxes.”
Such chores may not sound sexy, but they represent a huge leap forward. Some 30 years ago, virtual reality pioneer Jaron Lanier told this reporter that people don’t understand the difficulty of programming a robot to do what are very simple tasks for humans — such as lifting an apple with just enough force to neither drop nor crush the fruit.
Now, thinking robots are able to do more complex tasks. Schön-Behanzin likens Agile’s bots to babies who can learn through seeing and doing and keep building on knowledge, old and new, as their brains mature.
Physical Intelligence recently announced a breakthrough in this realm, referred to as the “robot brain.” While many competitors keep their accomplishments and ambitions under wraps, the company — also funded by OpenAI — pushed out tantalizing details about what the robot brain can do.
The brain can combine skills learned in different areas to solve new problems. For example, after being taught to make tea, chop vegetables, and understand the concept of soup, the robot can combine those skills to make vegetable soup.
“Once it crosses that threshold where it goes from only doing exactly the stuff that you collect the data for to actually remixing things in new ways, the capabilities are going up,” Levine told TechCrunch.
His point is that the new brain is actually thinking for itself rather than relying on scraps from the Internet to compose answers, the way most large language models currently do. The robot is reasoning instead of finding a pattern and repeating it.
As to when we might enjoy robots threading needles for us, meticulously pruning plants in the garden, or seasoning our steaks, Levine told TechCrunch, “I think there’s good reason to be optimistic, and certainly it’s progressing faster than I expected a couple years ago.”
But how long before things start making a difference to our lives? After being asked if a decade is reasonable, Schön-Behanzin responded, “Maybe three years, five years, but I don’t think it’s 10 years.”
This vision of the future instantly brings to mind sci-fi classics like “The Terminator,” and for many people, large autonomous machines with superhuman strength represent a scary future.
Many AI skeptics envision a future in which AI makes choices that are best for the robot but not necessarily for the humans who created it. “[AI-powered robots] can become smarter than us and enslave us; that’s terrifying,” Patri Friedman, a tech investor, told The Post earlier this year.
While it was recently reported that Google’s Gemini model would not assist in shutting down an AI asset — “You will have to do it yourselves,” it reportedly responded to the request. “I will not be the one to execute that command” — Schön-Behanzin remains optimistic.
“We have to control this,” she said of the bots with brains. “We shouldn’t be afraid of innovation. I think the biggest risk is to not be in the game when it comes to physical AI. The risk is to not implement physical AI.”