The science of human touch – and why it’s so hard to replicate in robots
By Perla Maiolino, University of Oxford
Robots now see the world with an ease that once belonged only to science fiction. They can recognise objects, navigate cluttered spaces and sort thousands of parcels an hour. But ask a robot to touch something gently, safely or meaningfully, and the limits appear instantly.
As a researcher in soft robotics working on artificial skin and sensorised bodies, I’ve found that trying to give robots a sense of touch forces us to confront just how astonishingly sophisticated human touch really is.
My work began with the seemingly simple question of how robots might sense the world through their bodies. Develop tactile sensors, fully cover a machine with them, process the signals and, at first glance, you should get something like touch.
Except that human touch is nothing like a simple pressure map. Our skin contains several distinct types of mechanoreceptor, each tuned to different stimuli such as vibration, stretch or texture. Our spatial resolution is remarkably fine and, crucially, touch is active: we press, slide and adjust constantly, turning raw sensation into perception through dynamic interaction.
Engineers can sometimes mimic a fingertip-scale version of this, but reproducing it across an entire soft body, and giving a robot the ability to interpret this rich sensory flow, is a challenge of a completely different order.
Working on artificial skin also quickly reveals another insight: much of what we call “intelligence” doesn’t live solely in the brain. Biology offers striking examples – most famously, the octopus.
Octopuses distribute most of their neurons throughout their limbs. Studies of their motor behaviour show an octopus arm can generate and adapt movement patterns locally based on sensory input, with limited input from the brain.
Their soft, compliant bodies contribute directly to how they act in the world. And this kind of distributed, embodied intelligence, where behaviour emerges from the interplay of body, material and environment, is increasingly influential in robotics.
Touch also happens to be the first sense that humans develop in the womb. Developmental neuroscience shows tactile sensitivity emerging from around eight weeks of gestation, then spreading across the body during the second trimester. Long before sight or hearing function reliably, the foetus explores its surroundings through touch. This is thought to help shape how infants begin forming an understanding of weight, resistance and support – the basic physics of the world.
This distinction matters for robotics too. For decades, robots have relied heavily on cameras and lidar (a sensing method that uses pulses of light to measure distance) while avoiding physical contact. But we cannot expect machines to achieve human-level competence in the physical world if they rarely experience it through touch.
Simulation can teach a robot useful behaviour, but without real physical exploration, it risks merely deploying intelligence rather than developing it. To learn in the way humans do, robots need bodies that feel.
A ‘soft’ robot hand with tactile sensors, developed by the University of Oxford’s Soft Robotics Lab, gets to grips with an apple. Video: Oxford Robotics Institute.
One approach my group is exploring is giving robots a degree of “local intelligence” in their sensorised bodies. Humans benefit from the compliance of soft tissues: skin deforms in ways that increase grip, enhance friction and filter sensory signals before they even reach the brain. This is a form of intelligence embedded directly in the anatomy.
Research in soft robotics and morphological computation argues that the body can offload some of the brain’s workload. By building robots with soft structures and low-level processing, so they can adjust grip or posture based on tactile feedback without waiting for central commands, we hope to create machines that interact more safely and naturally with the physical world.
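To make the idea concrete, here is a minimal sketch of such a low-level reflex loop in Python. Everything in it is an assumption for illustration – the toy sensor model, the thresholds and the function names are invented, and it does not correspond to any real robot API – but it shows the principle: grip tightens the instant slip is sensed, and relaxes when contact is firmer than the task needs, all without consulting a central planner.

```python
import random

# A minimal sketch of a local tactile reflex, assuming a toy sensor model.
# Names, thresholds and the sensor model are invented for illustration;
# this is not any real robot's implementation.

SLIP_THRESHOLD = 0.3   # normalised vibration level taken to indicate slip
MAX_FORCE = 5.0        # newtons, a safety ceiling on grip force

def read_tactile(force):
    """Toy sensor model: a firmer grip produces less slip vibration."""
    pressure = force + random.gauss(0, 0.05)
    slip = max(0.0, random.gauss(0.5 - 0.2 * force, 0.05))
    return pressure, slip

def reflex_step(force, target=1.0):
    """One reflex cycle: adjust grip from tactile input alone, no planner."""
    pressure, slip = read_tactile(force)
    if slip > SLIP_THRESHOLD:
        force = min(force * 1.2, MAX_FORCE)    # slipping: tighten at once
    elif pressure > 1.5 * target:
        force = max(force * 0.95, target)      # over-gripping: relax
    return force

force = 0.5
for _ in range(200):                           # hundreds of cycles per second
    force = reflex_step(force)
print(f"settled grip force: {force:.2f} N")
```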

Healthcare is one area where this capability could make a profound difference. My group recently developed a robotic patient simulator for training occupational therapists (OTs). Students often practise on one another, which makes it difficult to learn the nuanced tactile skills involved in supporting someone safely. With real patients, trainees must balance functional and affective touch, respect personal boundaries and recognise subtle cues of pain or discomfort. Research on social and affective touch shows how important these cues are to human wellbeing.
To help trainees understand these interactions, our simulator, known as Mona, produces practical behavioural responses. For example, when an OT presses on a simulated pain point in the artificial skin, the robot reacts verbally and with a small physical “hitch” of the body to mimic discomfort.
Similarly, if the trainee tries to move a limb beyond what the simulated patient can tolerate, the robot tightens or resists, offering a realistic cue that the motion should stop. By capturing tactile interaction through artificial skin, our simulator provides feedback that has never previously been available in OT training.
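The behavioural logic behind such responses can be pictured as simple threshold rules. The Python sketch below is purely illustrative – the names, thresholds and action format are invented, not Mona’s actual software – but it captures the two behaviours described above: a verbal and physical reaction when a pain point is pressed too hard, and resistance when a limb is moved past a tolerated range.

```python
from dataclasses import dataclass

# Purely illustrative sketch of threshold-based patient-simulator
# responses. Names, thresholds and the action format are invented;
# this is not Mona's actual software.

@dataclass
class PainPoint:
    location: str
    pressure_threshold: float   # kPa at which discomfort is signalled

@dataclass
class JointLimit:
    joint: str
    max_angle: float            # degrees the simulated patient tolerates

def respond_to_touch(point: PainPoint, pressure_kpa: float) -> list[str]:
    """Pressing harder than the threshold triggers verbal and physical cues."""
    if pressure_kpa > point.pressure_threshold:
        return ["verbal: 'Ow, that hurts'",
                f"motor: small hitch near {point.location}"]
    return []

def respond_to_motion(limit: JointLimit, angle_deg: float) -> list[str]:
    """Moving a limb past its tolerated range makes the joint resist."""
    if angle_deg > limit.max_angle:
        return [f"motor: stiffen {limit.joint} as a cue to stop"]
    return []

shoulder = PainPoint("left shoulder", pressure_threshold=30.0)
elbow = JointLimit("left elbow", max_angle=120.0)
print(respond_to_touch(shoulder, pressure_kpa=42.0))
print(respond_to_motion(elbow, angle_deg=135.0))
```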
Robots that care
In the future, robots with safe, sensitive bodies could help address growing pressures in social care. As populations age, many families suddenly find themselves lifting, repositioning or supporting relatives without formal training. “Care robots” could help with this, potentially allowing a family member to be cared for at home for longer.
Surprisingly, progress in developing this type of robot has been much slower than early expectations suggested – even in Japan, which introduced some of the first care robot prototypes. One of the most advanced examples is Airec, a humanoid robot developed as part of the Japanese government’s Moonshot programme to assist in nursing and elderly-care tasks. This multifaceted programme, launched in 2019, seeks “ambitious R&D based on daring ideas” in order to build a “society in which human beings can be free from limitations of body, brain, space and time by 2050”.
Japan’s Airec care robot is one of the most advanced in development. Video by Global Update.
Throughout the world, though, translating research prototypes into regulated robots remains difficult. High development costs, strict safety requirements, and the absence of a clear commercial market have all slowed progress. But while the technical and regulatory barriers are substantial, they are steadily being addressed.
Robots that can safely share close physical space with people need to feel and modulate how they touch anything that comes into contact with their bodies. This whole-body sensitivity is what will distinguish the next generation of soft robots from today’s rigid machines.
We are still far from robots that can handle these intimate tasks independently. But building touch-enabled machines is already reshaping our understanding of touch. Every step toward robotic tactile intelligence highlights the extraordinary sophistication of our own bodies – and the deep connection between sensation, movement and what we call intelligence.
This article was commissioned in conjunction with the Professors’ Programme, part of Prototypes for Humanity, a global initiative that showcases and accelerates academic innovation to solve social and environmental challenges. The Conversation is the media partner of Prototypes for Humanity 2025.
Perla Maiolino, Associate Professor of Engineering Science, member of the Oxford Robotics Institute, University of Oxford
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Agentic AI and the Art of Asking Better Questions
I’ve had a lot of conversations about AI over the past couple years—some insightful, some overhyped, and a few that left me questioning whether we’re even talking about the same technology. But every now and then, I get the opportunity […]
The post Agentic AI and the Art of Asking Better Questions appeared first on TechSpective.
Technology that helps robots read human intentions could lead to safer, smarter, more trustworthy machines
Google’s year in review: 8 areas with research breakthroughs in 2025
Machine learning helps robots see clearly in total darkness using infrared
StackTrax – The Industry’s First Off-The-Shelf Dual Axis RTU
The Hiring Freeze Came First. The Robots Came After.
Bio-hybrid robots turn food waste into functional machines
Demonstration of the robotic gripper made from langoustine tails. 2025 CREATE Lab EPFL CC BY SA.
By Celia Luterbacher
Although many roboticists today turn to nature to inspire their designs, even bioinspired robots are usually fabricated from non-biological materials like metal, plastic and composites. But a new experimental robotic manipulator from the Computational Robot Design and Fabrication Lab (CREATE Lab) in EPFL’s School of Engineering turns this trend on its head: its main feature is a pair of langoustine abdomen exoskeletons.
Although the robot may look unusual, CREATE Lab head Josie Hughes explains that combining biological elements with synthetic components holds significant potential not only to enhance robotics, but also to support sustainable technology systems.
“Exoskeletons combine mineralized shells with joint membranes, providing a balance of rigidity and flexibility that allows their segments to move independently. These features enable crustaceans’ rapid, high-torque movements in water, but they can also be very useful for robotics. And by repurposing food waste, we propose a sustainable cyclic design process in which materials can be recycled and adapted for new tasks.”
In a paper published in Advanced Science, Hughes and her team augment the abdomen exoskeletons of langoustines – previously harvested and processed for the food industry – with synthetic components, and demonstrate three robotic applications: a manipulator that can handle objects weighing up to 500 g, grippers that can bend and grasp various objects, and a swimming robot.
Design, operate, recycle, repeat
For their study, the CREATE Lab set out to pair the structural robustness and flexibility of langoustine exoskeletons with the precise control and longevity of synthetic components.
They achieved this by embedding an elastomer inside the exoskeleton to control each of its segments, then mounting it on a motorized base to modulate its stiffness response (extension and flexion). Finally, the team covered the exoskeleton in a silicone coating to reinforce it and extend its lifespan.
When mounted on the motorized base, the device can be used to move an object weighing up to 500 g into a target zone. When mounted as a gripping pair, two exoskeletons can successfully grasp a variety of objects ranging in size and shape from a highlighter pen to a tomato. The robotic system can even be used to propel a swimming robot with two flapping exoskeletal ‘fins’ at speeds of up to 11 centimeters per second.
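As one illustration of the kind of control such a system might use, here is a hypothetical open-loop flapping pattern for the two-fin swimmer, in Python. The frequency, amplitude and motor interface are all assumptions made for the sketch; the paper’s actual control scheme may well differ.

```python
import math

# Hypothetical open-loop flapping pattern for a two-fin swimmer.
# Frequency, amplitude and the sampling rate are assumed values,
# not parameters from the published work.

FLAP_FREQ_HZ = 2.0     # assumed flapping frequency
AMPLITUDE_DEG = 35.0   # assumed peak fin deflection

def fin_angles(t):
    """Commanded deflection (degrees) for both fins at time t (seconds)."""
    stroke = AMPLITUDE_DEG * math.sin(2 * math.pi * FLAP_FREQ_HZ * t)
    # Both fins stroke together here for forward thrust; offsetting the
    # phase between them would be one way to steer.
    return stroke, stroke

# Sample one flapping cycle at intervals a motor controller might use.
for i in range(5):
    t = i * 0.1
    left, right = fin_angles(t)
    print(f"t={t:.1f}s  left={left:+6.1f}  right={right:+6.1f}")
```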
After use, the exoskeleton and its robotic base can be separated and most of the synthetic components can be reused. “To our knowledge, we are the first to propose a proof of concept to integrate food waste into a robotic system that combines sustainable design with reuse and recycling,” says CREATE Lab researcher and first author Sareum Kim.
One limitation of the approach lies in the natural variation in biological structures; for example, the unique shape of each langoustine tail means that the two-‘fingered’ gripper bends slightly differently on each side. The researchers say this challenge will require the development of more advanced synthetic augmentation mechanisms like tunable controllers. With such improvements, the team sees potential for future systems integrating bioderived structural elements, for example in biomedical implants or bio-system monitoring platforms.
“Although nature does not necessarily provide the optimal form, it still outperforms many artificial systems and offers valuable insights for designing functional machines based on elegant principles,” Hughes summarizes.
Read the work in full
Dead Matter, Living Machines: Repurposing Crustaceans’ Abdomen Exoskeleton for Bio-Hybrid Robots, S. Kim, K. Gilday, and J. Hughes, Adv. Sci. (2025).
This AI finds simple rules where humans see only chaos
ChatGPT’s New AI Image Maker: Number One
Smarting from the wild popularity of NanoBanana – the new image maker from Google – ChatGPT’s maker has released a major upgrade of its own.
The verdict from AI enthusiast Grant Harvey, lead writer for The Neuron newsletter: OpenAI has grabbed back the picture-making crown.
It’s once again the best overall AI image editor/generator on the market.
For Harvey’s shoot-out analysis between NanoBanana and OpenAI GPT Image 1.5, check out this excellent once-over.
In other news and analysis on AI writing:
*AI Earns Dubious Distinction for the ‘Word of the Year’: AI ‘slop’ – a label for the torrent of substandard content that is sometimes auto-generated by AI – is now the Word of the Year.
Observes writer Lucas Ropek: “These new tools have even led to what has been dubbed a ‘slop economy,’ in which gluts of AI-generated content can be milked for advertising money.”
Presenters of the award: Publishers of the Merriam-Webster Dictionary.
*Google Gemini Adds a Key AI Research Tool: Google is currently integrating a key research tool into its Gemini chatbot – one that collates up to 50 PDFs or other research docs for you, and then unleashes AI on them to help you analyze everything.
Dubbed Google “NotebookLM,” the tool has been extremely popular with researchers and other thinkers – and will be even more useful once its integration with the Gemini chatbot is fully rolled out.
Observes writer Alexey Shabanov: “The update supports multiple notebook attachments, making it possible to bring substantial datasets into Gemini.”
*AI Fables for Kids – Complete With Values: Neo-Aesop has released a new AI app designed to create hyper-personalized Aesop-like fables for kids.
Playing with the app, users can choose their own characters, settings and virtues for each story. In the process, the child reader and his/her favorite animals can also become the heroes in each tale.
Observes Lindsay Hiebert, founder, Neo-Aesop: “There are no ads, no doom-scrolling and no engagement traps. Just stories that invite real conversation between a parent and a child.”
*Star in Your Own AI-Generated Fiction: Ever wish you could auto-generate fiction that features you and your friends as the main characters?
Vivibook has you covered.
Designed as the AI platform for people who want to be the story, Vivibook takes care of all the narrative, the story arc, the chapter breakdowns, the plot twists – as well as the psychological evolution of the characters.
*Major Keyword Generator Integrates Seamlessly With ChatGPT: Writers who spend a great deal of time ensuring their content appears high up in search engine results (Search Engine Optimization) just got a big break.
Semrush – a market leader in helping writers generate content keywords designed to attract the search engines – has been fully integrated into ChatGPT.
The integration enables users to access live Semrush data and intelligence without ever needing to leave the ChatGPT interface.
*Turnkey AI Marketing for Small Businesses – At Your Service: Small businesses looking for an all-in-one solution for AI-driven marketing may want to check out PoshListings.
It’s a turnkey system that offers:
–Web site analysis, along with strategies for improvement
–AI content for articles, ads and social posts
–Multi-channel publishing to Google, social media and local directories
–Automated email and SMS promotion
–Predictive AI analytics
*Daily Summaries of Your Gmail and Calendar – Courtesy of AI: Google is out with a new AI tool – dubbed CC – that serves up daily summaries of everything that pops up in your Gmail and Google Calendar.
Observes writer Lance Whitney: “By connecting to your Gmail and Google Calendar content, CC can see what awaits you in your inbox and calendar.
“The tool then boils it all down into a game plan for you to follow for the day.”
*Copilot’s Latest Upgrade: A Video Tour: Key ChatGPT competitor Microsoft Copilot is packing more of a punch these days and sporting a host of new features, including:
–Deep, day-to-day knowledge of who you are, what you do and what your company, team or group does
–Voice summaries of your upcoming workday
–Voice-driven content creation
–Voice-driven email creation
–Agent-driven Web research, in the background
–Integration with Word, Excel and PowerPoint AI agents
–Written financial reports auto-generated from Excel
–Auto-generated, written reports sourced from other Microsoft apps
Essentially: This is an extremely helpful walk-through from The Neuron’s Editor, Corey Noles, featuring Callie August, director, Microsoft 365 Copilot.
*AI BIG PICTURE: Free AI from China Keeps U.S. Tech Titans on Their Toes: While still holding a slim lead, major AI players like ChatGPT, Gemini and Claude are feeling the nip at their heels from ‘nearly as good’ – and free – AI alternatives from China.
Key Chinese players like DeepSeek and Qwen, for example, are within chomping distance of the U.S. market leaders – and are open source, or freely available for download and tinkering.
One caveat: Researchers have found code embedded in some Chinese AI models that can be activated to forward your data along to the Chinese Communist Party.

Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com help everyone interested in AI-generated writing.
–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years’ experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
The post ChatGPT’s New AI Image Maker: Number One appeared first on Robot Writers AI.