
The science of human touch, and why it’s so hard to replicate in robots

Robots now see the world with an ease that once belonged only to science fiction. They can recognize objects, navigate cluttered spaces and sort thousands of parcels an hour. But ask a robot to touch something gently, safely or meaningfully, and the limits appear instantly.

Radboud chemists are working with companies and robots on the transition from oil-based to bio-based materials

[Image: paint brush and oil paint]

Chemical products such as medicines, plastics, soap, and paint are still often based on fossil raw materials. This is not sustainable, so there is an urgent need for ways to make a ‘materials transition’ to products made from bio-based raw materials. To achieve results more quickly and efficiently, researchers at Radboud University in the Big Chemistry programme are using robots and AI.

The materials transition from fossil-based to bio-based (where raw materials are of biological origin) is a major challenge. Raw materials for products must be replaced without changing the quality of those products. This requires knowledge of the properties and behaviour of those raw materials at the molecular level. Wilhelm Huck, professor of physical-organic chemistry at Radboud University: “Moreover, you don’t want to optimize the properties of a single molecule, but of a mixture. And we can greatly accelerate that search with our robots and models.”

Millions of unpredictable interactions

The difficulty, Huck explains, is that most chemistry is ‘non-additive’. “Whether you dissolve one sugar cube in water or ten, essentially the same thing happens. That is predictable. But if you know how one molecule behaves and you know how another molecule behaves, you might think: if I put them together, I’ll get the combination, or the average, of the two. And that’s almost never the case in chemistry. In many cases, putting molecules together leads to behaviour that you couldn’t have predicted.”

Because raw materials can interact with other raw materials in all kinds of ways, the number of possible interactions increases rapidly. Huck: “Suppliers of ingredients for cleaning products, cosmetics, paints and coatings, ink, perfume, medicines, you name it, can supply tens of thousands of components, and you can combine those in different ways. That quickly adds up to hundreds of millions of interactions, which you can’t possibly study in full. So you need a model that can predict the properties of mixtures. And to train that model, you need a lot of data, which you collect in experiments.”
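Huck’s estimate is easy to sanity-check: even counting only two- and three-component mixtures of a modest ingredient catalogue, the numbers explode. A back-of-the-envelope sketch (the catalogue sizes below are illustrative; the article cites no exact figures):

```python
from math import comb

# Illustrative catalogue sizes; suppliers may offer tens of thousands
# of components, per the article.
for n_components in (1_000, 10_000, 30_000):
    pairs = comb(n_components, 2)    # two-component mixtures
    triples = comb(n_components, 3)  # three-component mixtures
    print(f"{n_components:>6} components: "
          f"{pairs:>12,} pairs, {triples:>18,} triples")
```

With 10,000 components there are already roughly 50 million possible pairs, and at 30,000 components the pair count alone passes 400 million, before any three-way mixtures are considered.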

Three projects: paint, soap, and polymers

This fall, three grants were awarded to projects by Radboud researchers within the larger Big Chemistry program of the National Growth Fund. Led by chemists Wilhelm Huck, Mathijs Mabesoone, and Peter Korevaar, the programme involves collaboration with companies to conduct research into the properties of bio-based raw materials for paints and soaps, among other things.

Peter Korevaar will be conducting research into paints together with Van Wijhe Verf. These are often still (partly) based on oil, because they have to be waterproof. And that is just one of the requirements that paint has to meet: “Paint has to mix well. That mixture has to remain stable. It must not be too watery or too viscous. It has to be washable, but it shouldn’t wash off your house when it rains. It simply has to be good stuff. If you try to design that based on new, bio-based ingredients, you need a lot of experimental data.”

Mathijs Mabesoone will be conducting research into soaps together with the company Croda International. “If you have a pure soap solution, it has a certain cleaning capacity, for example. But in mixtures of soaps, that same property can suddenly occur at a hundred times lower concentration. That is also very difficult to predict, so we are going to take a lot of measurements. We will create a large database of informative measurement points, which we can then use to train a model to better predict the interactions.”

The third project that received funding this fall deals with polymers on a more fundamental level: large molecules that often occur in mixtures. Huck: “For most polymers, there is insufficient data for theoretical calculations. For the development of new, bio-based polymers, we will collect more data in collaboration with TNO and Van Loon Chemical Innovations (VLCI), so that we can train AI models to make better predictions.”

Robot lab: data-driven science

Generating unique data, and lots of it, is the goal of all three projects. And the scientists are doing this with the help of robots. The researchers are already working with robots the size of a small refrigerator that continuously take measurements; a large robot lab at Noviotech Campus in Nijmegen will follow in the fall of 2026. Mabesoone: “You supply such a robot with a few samples of basic solutions, and then you put it to work testing, mixing, and measuring. The robot decides which are the best samples to make, and you only need to supply a small amount to obtain a lot of data.”
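One simple way to approximate “deciding which are the best samples to make” is a space-filling heuristic: repeatedly measure the candidate mixture farthest, in composition space, from everything measured so far. The sketch below is purely illustrative; the article does not describe the robots’ actual decision logic, and the grid of mixtures is a toy example.

```python
import math

def farthest_point_selection(candidates, k):
    """Greedily pick k mixtures that spread out over composition space.

    candidates: list of composition vectors (ingredient fractions).
    Returns indices of the selected mixtures: each new pick is the
    candidate farthest from all previously selected ones, a crude
    stand-in for 'informative measurement points'.
    """
    selected = [0]  # seed with the first candidate
    while len(selected) < k:
        best_idx, best_gap = None, -1.0
        for i, c in enumerate(candidates):
            if i in selected:
                continue
            # Distance to the nearest already-measured mixture.
            gap = min(math.dist(c, candidates[j]) for j in selected)
            if gap > best_gap:
                best_idx, best_gap = i, gap
        selected.append(best_idx)
    return selected

# Toy example: all 3-ingredient mixtures in 25% steps (a simplex grid).
grid = [(a / 4, b / 4, (4 - a - b) / 4)
        for a in range(5) for b in range(5 - a)]
picks = farthest_point_selection(grid, 5)
print([grid[i] for i in picks])
```

On this grid the heuristic first grabs the pure-ingredient corners before filling in interior blends, which is the intuitively “informative” order for mapping non-additive mixture behaviour.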

What will consumers notice?

Will consumers notice anything from this research, and if so, when? Huck: “If we don’t do this, you may find that at some point you can no longer get certain products because they contain substances that are no longer permitted or available. But if we do it right, you won’t notice much. You had good stuff and you want to keep good stuff. Only, in the long run, those good products will more often be biodegradable. And we can probably make the good products even better: with robotics and AI, we can try out so many more combinations than we ever thought possible that we are sure to discover completely new properties.”

Scientists reveal a tiny brain chip that streams thoughts in real time

BISC is an ultra-thin neural implant that creates a high-bandwidth wireless link between the brain and computers. Its tiny single-chip design packs tens of thousands of electrodes and supports advanced AI models for decoding movement, perception, and intent. Initial clinical work shows it can be inserted through a small opening in the skull and remain stable while capturing detailed neural activity. The technology could reshape treatments for epilepsy, paralysis, and blindness.

Infant-inspired framework helps robots learn to interact with objects

Over the past decades, roboticists have introduced a wide range of advanced systems that can move around in their surroundings and complete various tasks. Most of these robots can effectively collect images and other sensor data from their surroundings, using computer vision algorithms to interpret that data and plan their future actions.

Speech-to-reality system creates objects on demand using AI and robotics

Generative AI and robotics are moving us ever closer to the day when we can ask for an object and have it created within a few minutes. In fact, MIT researchers have developed a speech-to-reality system, an AI-driven workflow that allows them to provide input to a robotic arm and "speak objects into existence," creating things like furniture in as little as five minutes.

SoftBank’s $5.4B ABB Robotics Deal: Why IT Service Providers Should Treat Robotics as a Core Practice

As autonomy and embodied intelligence mature, IT service providers may not need to participate in every layer, but those who develop focused capabilities—whether in advisory, integration, or managed operations—will be better placed as demand grows.

This tiny implant sends secret messages to the brain

Researchers have built a fully implantable device that sends light-based messages directly to the brain. Mice learned to interpret these artificial patterns as meaningful signals, even without touch, sight, or sound. The system uses up to 64 micro-LEDs to create complex neural patterns that resemble natural sensory activity. It could pave the way for next-generation prosthetics and new therapies.

Generations in Dialogue: Embodied AI, robotics, perception, and action with Professor Roberto Martín-Martín

Generations in Dialogue: Bridging Perspectives in AI is a podcast from AAAI featuring thought-provoking discussions between AI experts, practitioners, and enthusiasts from different age groups and backgrounds. Each episode delves into how generational experiences shape views on AI, exploring the challenges, opportunities, and ethical considerations that come with the advancement of this transformative technology.

Embodied AI, robotics, perception, and action with Professor Roberto Martín-Martín

In the third episode of this new series from AAAI, host Ella Lan chats to Professor Roberto Martín-Martín about taking a screwdriver to his toys as a child, how his research focus has evolved over time, how different generations interact with technology, making robots for everyone, being inspired by colleagues, advice for early-career researchers, and how machines can enhance human capabilities.

About Professor Roberto Martín-Martín:

Roberto Martín-Martín is an Assistant Professor of Computer Science at the University of Texas at Austin, where his research integrates robotics, computer vision, and machine learning to build autonomous agents capable of perceiving, learning, and acting in the real world. His work spans from low-level tasks like pick-and-place and navigation to complex activities such as cooking and mobile manipulation, often drawing inspiration from human cognition and integrating insights from psychology and cognitive science. He previously worked as an AI Researcher at Salesforce AI and as a Postdoctoral Scholar at the Stanford Vision and Learning Lab with Silvio Savarese and Fei-Fei Li, leading projects in visuomotor learning, mobile manipulation, and human-robot interaction. He earned his Ph.D. and M.S. from Technische Universität Berlin under Oliver Brock and a B.S. from Universidad Politécnica de Madrid. His work has been recognized with best paper awards at RSS and ICRA, and he serves as Chair of the IEEE/RAS Technical Committee on Mobile Manipulation.

About the host

Ella Lan, a member of the AAAI Student Committee, is the host of “Generations in Dialogue: Bridging Perspectives in AI.” She is passionate about bringing together voices across career stages to explore the evolving landscape of artificial intelligence. Ella is a student at Stanford University tentatively studying Computer Science and Psychology, and she enjoys creating spaces where technical innovation intersects with ethical reflection, human values, and societal impact. Her interests span education, healthcare, and AI ethics, with a focus on building inclusive, interdisciplinary conversations that shape the future of responsible AI.
