Autonomous humanoid robots are edging closer to reality, but one of the UK's leading robotics experts warns that the hype masks some uncomfortable truths: we are still years from truly useful humanoids, the real danger lies in humans misusing them, and without strict regulation they may pose more risk than reward. Dr Carl Strathearn, Lecturer in Computer Science at Edinburgh Napier University and expert adviser on AI and robotics to the UK Government's Office for Science, will share his warnings and insights at New Scientist Live in October.
For all the glossy demonstrations of humanoid robots pouring drinks, folding clothes or mimicking facial expressions, the gulf between the videos and a truly reliable helper in everyday life remains huge. Dr Strathearn said: "The biggest problem is the lack of real-world data and the technological means of gathering it in large enough quantities to train our robots effectively." Current systems rely heavily on virtual simulations, reinforcement learning, or scraping YouTube videos of humans performing tasks.
The result is machines that cope well in labs but struggle in the messy, unpredictable world.
Dr Strathearn, who will showcase Euclid, his so-called "friendly robot", at London's ExCeL centre for the annual science showcase, said: "Think of a simple object like a cup. There are millions of variations in size, weight, shape, colour. Now extrapolate that to every object in a house, and you can see the scale of the challenge."
One possible solution is crowdsourcing real-world data on a massive scale, perhaps through video glasses such as Meta's Ray-Ban smart glasses. But Dr Strathearn admits this would require thousands, even millions, of people collecting and sharing data: an ambitious and ethically fraught task.
For Dr Strathearn, the real danger does not lie in science-fiction fears of machines turning against us.
He explained: "If you are talking Terminator, the answer is no. We are and always have been more of a danger to ourselves than anything else."
Instead, he argues, the real risk comes from humans controlling humanoid robots, often with little or no training. Strathearn is currently leading a petition to the UK Parliament calling for regulation of humanoid robots in public spaces after a series of near misses.
Dr Strathearn said: "Humans control them using handheld devices, which makes them very dangerous and unreliable. There are more and more instances of serious near misses between humans and robots - not because of AI, but because of humans."
That is why, he argues, strict regulation is essential before humanoids appear widely in our daily lives.
Another challenge is perception. Robots that are too lifelike risk sliding into the "uncanny valley," triggering discomfort or unease. Yet in some contexts, like dementia care, a familiar human-like face could be soothing and beneficial.
Dr Strathearn said: "People have different thresholds of perception when it comes to creepiness. That's why we have different types of robots - some very lifelike, some with just minimal facial features."
During his PhD, he even devised the "Multimodal Turing Test" to explore whether communication through lifelike robots made AI seem more human. Later, Japanese researchers tested the idea and found that people were indeed more likely to believe AI was human when it came through a realistic robot.
Strathearn insists that acceptance will come not by accident but through gradual, careful introduction and education, especially for children learning robotics and AI skills in schools.
Despite all these caveats, companies are racing ahead. Dr Strathearn said: "The hype is a major issue. We are far from humanoid robots that are good enough to do everyday tasks effectively, but that doesn't stop major companies wanting to mass produce them."
He points out that the skills shortage in robotics is already acute. Universities still divide students into siloed disciplines like computer science, engineering and design, when the future of robotics depends on interdisciplinary, cross-trained talent.
Dr Strathearn said: "Without a solid foundation in education, I worry about the sustainability of the humanoid robotics industry."
Ironically, he sees one frontier where humanoid robots may prove genuinely useful much sooner: space exploration.
He continued: "Space exploration for sure - we could use telemetric or AI-controlled humanoids to work in space for longer periods than humans, advancing us further into the unknown."
Future humanoids might even be deployed to help terraform planets or explore rugged terrain beyond the reach of current robotic rovers.
Dr Strathearn said: "They may be more useful much quicker for this type of exploration work, than down here on Earth ironically."
So while robots may one day help us colonise new worlds, Strathearn's warning is clear: here on Earth, the real challenge is ensuring they are safe, reliable, and properly regulated before being unleashed on society.
Dr Strathearn said: "Robots might terraform Mars one day. But on Earth, only strict regulation will keep us safe."
New Scientist Live runs from October 18-20 at ExCeL London.