What is Agentic AI and is it here to stay?
Radboud chemists are working with companies and robots on the transition from oil-based to bio-based materials
Chemical products such as medicines, plastics, soap, and paint are still often based on fossil raw materials. This is not sustainable, so there is an urgent need for ways to make a ‘materials transition’ to products made from bio-based raw materials. To achieve results more quickly and efficiently, researchers at Radboud University in the Big Chemistry programme are using robots and AI.
The materials transition from fossil-based to bio-based raw materials (materials of biological origin) is a major challenge. Raw materials for products must be replaced without changing the quality of those products. This requires knowledge of the properties and behaviour of those raw materials at the molecular level. Wilhelm Huck, professor of physical-organic chemistry at Radboud University: “Moreover, you don’t want to optimize the properties of a single molecule, but of a mixture. And we can greatly accelerate that search with our robots and models.”
Millions of unpredictable interactions
The difficulty, Huck explains, is that most chemistry is ‘non-additive’. “Whether you dissolve one sugar cube in water or ten, essentially the same thing happens. That is predictable. But if you know how one molecule behaves and you know how another molecule behaves, you might think: if I put them together, I’ll get the combination or the average of the two. And that’s almost never the case in chemistry. In many cases, the interaction between molecules leads to behaviour that you couldn’t have predicted.”
Because raw materials can interact with one another in all kinds of ways, the number of possible interactions increases rapidly. Huck: “Consider that suppliers of ingredients for cleaning products, cosmetics, paints and coatings, ink, perfume, medicines, you name it, can supply tens of thousands of components, and that you can combine those in different ways. That quickly adds up to hundreds of millions of interactions, far too many to study them all. So you need a model that can predict the properties of mixtures. And to train that model, you need a lot of data, which you collect in experiments.”
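A back-of-envelope calculation shows how quickly the numbers Huck cites arise. The short Python sketch below uses a purely illustrative catalogue size (not a figure from the programme) to count the two- and three-component mixtures that could be drawn from tens of thousands of ingredients:

```python
from math import comb

# Illustrative assumption: a supplier catalogue of 20,000 components.
n_components = 20_000

pairs = comb(n_components, 2)    # possible two-component mixtures
triples = comb(n_components, 3)  # possible three-component mixtures

print(f"Two-component mixtures:   {pairs:,}")    # ~200 million
print(f"Three-component mixtures: {triples:,}")  # ~1.3 trillion
```

Even the pairwise count alone is far beyond what any lab could measure exhaustively, which is why the projects described below combine targeted experiments with predictive models.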
Three projects: paint, soap, and polymers
This fall, three grants were awarded to projects by Radboud researchers within the larger Big Chemistry programme of the National Growth Fund. Led by chemists Wilhelm Huck, Mathijs Mabesoone, and Peter Korevaar, the projects involve collaboration with companies to research the properties of bio-based raw materials for paints and soaps, among other things.
Peter Korevaar will be conducting research into paints together with Van Wijhe Verf. These are often still (partly) based on oil, because they have to be waterproof. And that is just one of the requirements that paint has to meet: “Paint has to mix well. That mixture has to remain stable. It must not be too watery or too viscous. It has to be washable, but it shouldn’t wash off your house when it rains. It simply has to be good stuff. If you try to design that based on new, bio-based ingredients, you need a lot of experimental data.”
Mathijs Mabesoone will be conducting research into soaps together with the company Croda International. “If you have a pure soap solution, it has a certain cleaning capacity, for example. But in mixtures of soaps, that same property can suddenly occur at a hundred times lower concentration. That is also very difficult to predict, so we are going to take a lot of measurements. We will create a large database of informative measurement points, which we can then use to train a model to better predict the interactions.”
The third project that received funding this fall deals with polymers on a more fundamental level: large molecules that often occur in mixtures. Huck: “For most polymers, there is insufficient data for theoretical calculations. For the development of new, bio-based polymers, we will collect more data in collaboration with TNO and Van Loon Chemical Innovations (VLCI), so that we can train AI models to make better predictions.”
Robot lab: data-driven science
Generating unique data, and lots of it, is the goal of all three projects. And the scientists are doing this with the help of robots. A large robot lab at Noviotech Campus in Nijmegen will follow in the fall of 2026. But the researchers are already working with robots the size of a small refrigerator that continuously take measurements. Mabesoone: “You supply such a robot with a few samples of basic solutions, and then you put it to work testing, mixing, and measuring. The robot decides which are the best samples to make, and you only need to supply a small amount to obtain a lot of data.”
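What Mabesoone describes is, in machine-learning terms, an active-learning loop: fit a model to the measurements so far, then let it pick the next mixture where it is least certain. The sketch below is a generic illustration of that idea using a Gaussian process surrogate from scikit-learn; the composition encoding and the stand-in “measurement” function are invented for the example and are not the software running on the Nijmegen robots.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Placeholder: candidate mixtures encoded as composition vectors (fractions of 4 stock solutions).
candidates = rng.dirichlet(np.ones(4), size=500)

def measure(mixture):
    """Stand-in for a robot measurement (e.g., viscosity or cleaning capacity),
    including a non-additive interaction term between the first two components."""
    return float(mixture @ [1.0, 2.0, -0.5, 0.3] + 3.0 * mixture[0] * mixture[1])

# Start from a handful of random experiments.
idx = list(rng.choice(len(candidates), size=5, replace=False))
X = candidates[idx]
y = np.array([measure(m) for m in X])

gp = GaussianProcessRegressor(normalize_y=True)

for _ in range(20):                          # 20 robot-chosen experiments
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    std[idx] = -np.inf                       # don't repeat experiments
    next_i = int(np.argmax(std))             # most uncertain mixture next
    idx.append(next_i)
    X = np.vstack([X, candidates[next_i]])
    y = np.append(y, measure(candidates[next_i]))

print(f"Model trained on {len(y)} targeted measurements")
```

The point of choosing experiments this way is that a modest number of well-targeted measurements can map a response surface that purely random sampling would need far more experiments to cover.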
What will consumers notice?
Will consumers notice anything from this research, and if so, when? Huck: “If we don’t do this, you may find that at some point you can no longer get certain products because they contain substances that are no longer permitted or available. But if we do it right, you won’t notice much. You had good stuff and you want to keep good stuff. Only, in the long run, those good products will be more often biodegradable. And we can probably make the good products even better: with robotics and AI, we can try out so many more combinations than we ever thought possible that we are sure to discover completely new properties.”
Scientists reveal a tiny brain chip that streams thoughts in real time
AI-powered robotic dog sees, remembers and responds with human-like precision in search-and-rescue missions
Engineers use AI to finetune robotic prosthesis to improve manual dexterity
Infant-inspired framework helps robots learn to interact with objects
Automation Solutions Mergers & Acquisitions Update
FACTS Benchmark Suite: Systematically evaluating the factuality of large language models
Speech-to-reality system creates objects on demand using AI and robotics
SoftBank’s $5.4B ABB Robotics Deal: Why IT Service Providers Should Treat Robotics as a Core Practice
This tiny implant sends secret messages to the brain
Generations in Dialogue: Embodied AI, robotics, perception, and action with Professor Roberto Martín-Martín
Generations in Dialogue: Bridging Perspectives in AI is a podcast from AAAI featuring thought-provoking discussions between AI experts, practitioners, and enthusiasts from different age groups and backgrounds. Each episode delves into how generational experiences shape views on AI, exploring the challenges, opportunities, and ethical considerations that come with the advancement of this transformative technology.
Embodied AI, robotics, perception, and action with Professor Roberto Martín-Martín
In the third episode of this new series from AAAI, host Ella Lan chats to Professor Roberto Martín-Martín about taking a screwdriver to his toys as a child, how his research focus has evolved over time, how different generations interact with technology, making robots for everyone, being inspired by colleagues, advice for early-career researchers, and how machines can enhance human capabilities.
About Professor Roberto Martín-Martín:
Roberto Martín-Martín is an Assistant Professor of Computer Science at the University of Texas at Austin, where his research integrates robotics, computer vision, and machine learning to build autonomous agents capable of perceiving, learning, and acting in the real world. His work ranges from low-level tasks like pick-and-place and navigation to complex activities such as cooking and mobile manipulation, often drawing inspiration from human cognition and integrating insights from psychology and cognitive science. He previously worked as an AI Researcher at Salesforce AI and as a Postdoctoral Scholar at the Stanford Vision and Learning Lab with Silvio Savarese and Fei-Fei Li, leading projects in visuomotor learning, mobile manipulation, and human-robot interaction. He earned his Ph.D. and M.S. from Technische Universität Berlin under Oliver Brock and a B.S. from Universidad Politécnica de Madrid. His work has been recognized with best paper awards at RSS and ICRA, and he serves as Chair of the IEEE/RAS Technical Committee on Mobile Manipulation.
About the host
Ella Lan, a member of the AAAI Student Committee, is the host of “Generations in Dialogue: Bridging Perspectives in AI.” She is passionate about bringing together voices across career stages to explore the evolving landscape of artificial intelligence. Ella is a student at Stanford University tentatively studying Computer Science and Psychology, and she enjoys creating spaces where technical innovation intersects with ethical reflection, human values, and societal impact. Her interests span education, healthcare, and AI ethics, with a focus on building inclusive, interdisciplinary conversations that shape the future of responsible AI.
AI Image Generation: On Genius
Google’s Nano Banana Pro Upgrade
While keeping pace with the seemingly endless parade of AI tools can be exhausting, getting crystal clear on the raw, new power embedded in Google’s new Nano Banana Pro image generator is well worth a huff-and-puff.
In a phrase, Nano Banana Pro (NBP), released a few weeks ago, is the new gold standard in AI imaging, capable of rendering virtually anything imaginable.
Essentially: Writers now have a tool that can auto-generate one or more supplemental images for their work with a precision and power that currently has no rival.
Plus, unlike other image generators, NBP has an incredible amount of firepower under the hood that is simply not available to the competition.
For example: NBP is an exquisite image generator in its own right.
But it is also powered by Google’s Gemini 3.0 Pro, now widely considered the gold standard in consumer AI.
And, NBP can also be easily combined with Google Search, the world’s number one search engine.
Like many things AI, the secret to mastering NBP is to sample how countless highly inspired human imaginations are already working with the tool, and then synthesize that rubber-meets-the-road knowledge to forge your own method of working with NBP.
Towards that end, here are ten excellent videos on NBP, complete with detailed demos, showing how imaginative folks are artfully using the AI and surfacing truly world-class, head-turning images:
*Quick Overview: NBP Key Features: This 15-minute video from AI Master offers a great intro to the key new capabilities of NBP, complete with captivating visual examples. Demos include:
–blending multiple images into one
–converting stick figures into an image-rich scene
–experimenting with visual style changes on the same image
–working with much more reliable text-on-images
*A Torrent of NBP Use Cases: This incredibly organized and informative 11-minute video from Digital Assets dives deep into the wide array of use cases you can tap into with NBP. Demos include:
–Historical event image generator, based on location, date and approximate time (example: conjure the Apollo moon landing)
–multi-angle product photography
–Alternate reality generator (example: depict architecture of ancient Rome as immersed in a futuristic setting)
–Hyper-realistic, 3D-diorama generation
*Another Torrent of NBP Use Cases: Click on this 27-minute video from Astrovah for a slew of more mind-bending use cases, including:
–Text-on-image analysis of any photo you upload, including its context and key facts to know about the image
–How to make an infographic in seconds
–How to inject season and weather changes into any image
–Making exploded-view images of any product
–Auto-generated blueprints of any image
*Generating Hyper-Realistic Photos With NBP: This great, 22-minute video from Tao Prompts offers an inside look at how to ensure any image you generate with NBP is hyper-photorealistic – right down to the brand of photo film you’re looking to emulate.
*Infinite Camera Angles on Tap: Getting just the right camera angle on any image is now child’s play with NBP. This 11-minute video from Chase AI serves up demos on how to be the director of any image you create with NBP. Included is a detailed prompt library featuring the same camera-angle descriptions used by pro photographers.
*Swapping a Face in Seconds: Short-and-sweet, this 4-minute video from AsapGuide offers a quick, down-and-dirty way to transplant any face onto any image you provide.
*Aging/De-Aging a Person in Seconds: Another great collection of use cases, this 16-minute video from Atomic Gains includes an easy-to-replicate demo on making a person look younger, or vice versa. Also included are demos on instantly changing the lighting in an image, changing the position of a character in an image, and surgically removing specific details from any image.
*NBP: Getting Technical: Once you’ve played with NBP informally, you can pick up some extremely helpful technical tips on how to manipulate NBP with this 29-minute video from AI Samson. Tricks include how to zoom in/out on an image, how to maintain character consistency, and how to use complex cinematic stylings.
*Amplifying NBP With Google AI Studio: This 58-minute video from David Ondrej recommends using NBP in the free Google AI Studio interface. The reason: Google AI Studio gives you much more granular control over your results, including precise image sizing, creating accurate slides with text, and using NBP with Google Search (a minimal API sketch follows this list of videos). Caveat: To use Google AI Studio, you need to switch to a special Google Gemini API subscription.
*Working with NBP in Photoshop: Adobe has already integrated NBP into its toolset, and this is the perfect video (8 minutes, from Sebastien Jefferies) to check out how to combine the power of NBP with the incredible precision of Photoshop. Included are lots of great demos that answer the question: what is the long-term impact of pairing NBP with Photoshop?
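For anyone who wants to try the API route mentioned in the Google AI Studio video above, here is a minimal sketch using Google’s google-genai Python SDK. The model identifier below is an assumption, a placeholder for whichever Nano Banana model ID Google currently lists in AI Studio, so check the model list (and your API key setup) before running it:

```python
from google import genai

# Assumes an API key created in Google AI Studio.
client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    # Assumption: placeholder model ID; substitute the current Nano Banana / Nano Banana Pro ID.
    model="gemini-2.5-flash-image-preview",
    contents="A photorealistic red bicycle leaning against a brick wall at golden hour",
)

# Image-capable Gemini models return a mix of text parts and inline image parts.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None) is not None:
        with open("output.png", "wb") as handle:
            handle.write(part.inline_data.data)
    elif getattr(part, "text", None):
        print(part.text)
```

Run as a script, this saves the generated image to output.png in the working directory and prints any accompanying text the model returns.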

–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.

