Women in robotics you need to know about 2025

October 1 was International Women in Robotics Day, and we’re delighted to introduce this year’s edition of “Women in Robotics You Need to Know About”! Robotics is no longer confined to factories or research labs. Today, it’s helping us explore space, care for people, grow food, and connect across the globe. Behind these breakthroughs are women who lead research groups, launch startups, set safety standards, and inspire the next generation. Too often, their contributions remain under-recognized, and this list is one way we can make that work visible.
This year’s list highlights 20 women in robotics you need to know about in 2025. They are professors, engineers, founders, communicators, and project leaders. Some are early in their careers, others have already shaped the field for decades. Their work ranges from tactile sensing that gives robots a human-like sense of touch, to swarm robotics for medicine and the environment, to embodied AI that may one day live in our homes. They come from across the world, including Australia, Brazil, Canada, China, Germany, Spain, Switzerland, the United Kingdom, and the United States. Together, they show us just how wide the world of robotics really is.
We publish this list not only to celebrate their achievements, but also to counter the persistent invisibility of women in robotics. Representation matters. When women’s contributions are overlooked, we risk reinforcing the false perception that robotics is not their domain. The honorees you’ll meet here prove the opposite. They are making discoveries, leading teams, starting companies, writing the standards, and pushing the boundaries of what robots can do.

The 2025 honorees

- Heba Khamis
Heba Khamis is a lecturer at UNSW and co-founder of Contactile, a start-up making tactile sensors that give robots a human-like sense of touch so they can perform difficult material-handling tasks.
- Kelen Teixeira Vivaldini
Kelen Teixeira Vivaldini is a professor at UFSCar, Brazil, researching autonomous robots, intelligent systems, and mission planning, with applications in environmental monitoring and inspection.
- Natalie Panek
Natalie Panek is a senior engineer in systems design in the robotics and automation division of the space technology company MDA. She was named to Forbes' 2015 30 Under 30 and was one of WXN's 2014 Top 100 award winners.
- Joelle Pineau
Joelle Pineau is a Canadian AI researcher and robotics leader and a professor at McGill. She also served as Meta AI's vice-president until 2025. She co-directs the Reasoning & Learning Lab, co-founded SmartWheeler and Nursebot, and champions reproducible research.
- Hallie Siegel
Hallie Siegel is a science communicator and former Robohub editor, building global networks that connect researchers and the public.

- Xiaorui Zhu
Xiaorui Zhu co-founded DJI and RoboSense and directs Galaxy AI & Robotics, with award-winning research in UAVs, autonomous driving, and mobile robotics.
- Lijin Aryananda
Lijin Aryananda is a robotics researcher with more than 15 years' experience in humanoids, automation, and medical devices. At ZHAW she develops AI methods for tomography, bridging academia and industry with inclusive leadership.
- Georgia Chalvatzaki
Georgia Chalvatzaki, professor at TU Darmstadt and head of the PEARL Lab, advances human-centric robot learning. Her work blends AI and robotics to give mobile manipulators the ability to collaborate safely and intelligently with people.
- Mar Masulli
Mar Masulli is CEO and co-founder of BitMetrics, using AI to give robots and machines vision and reasoning. She also serves on the board of the Spanish Robotics Association.
- Alona Kharchenko
Alona Kharchenko is co-founder and CTO of Devanthro, building embodied AI for homes, and was recognized in Forbes' 30 Under 30.

- Nicole Robinson
Nicole Robinson is the co-founder of Lyro Robotics, deploying AI pick-and-pack robots for industry.
- Dimitra Gkatzia
Dimitra Gkatzia is an associate professor at Edinburgh Napier, advancing natural language generation for human-robot interaction.
- Sabine Hauert
Sabine Hauert is a professor at the University of Bristol, co-founder of Robohub, and a pioneer in swarm robotics for nanomedicine and the environment.
- Monica Anderson
Monica Anderson is a professor at the University of Alabama, researching distributed autonomy and inclusive human-robot teaming.
- Shilpa Gulati
Shilpa Gulati is an engineering leader with over 15 years of experience building and scaling teams to solve complex robotics problems using state-of-the-art technologies.

- Shuran Song
Shuran Song is a Stanford professor and robotics researcher, building low-cost systems for robot perception and releasing influential open datasets.
- Kathryn Zealand
Kathryn Zealand is co-founder of Skip, building powered clothing, "e-bikes for walking." She spun the project out of X and has a background in theoretical physics.
- Ann Virts
Ann Virts is a NIST project leader developing test methods for mobile and wearable robots, recognized with a U.S. DOC Bronze Medal.
- Carole Franklin
Carole Franklin directs standards development for robotics at the Association for Advancing Automation (A3), leading ANSI and ISO robot safety work. With a background at Booz Allen and Ford, she champions the safe, effective deployment of robots.
- Meghan Daley
Meghan Daley is a NASA project manager who leads teams developing and integrating simulations of robotic operations to prepare astronauts on the ISS and beyond.
We’ll be spotlighting five honorees each week throughout October, so stay tuned for deeper profiles and stories of their work.
Why it matters
Robotics is not just about technology; it's about people. By showcasing these individuals, we hope to inspire the next generation, connect the community, and advance the values of diversity and inclusion in STEM.
Join the conversation on social media with #WomenInRobotics and help us celebrate the people making robotics better for everyone.
The article was cross-posted from Women in Robotics. Read the original here.
New Claude Sonnet 4.5: 61% Reliability In Agent Mode
Anthropic is out with an upgrade to its flagship AI that offers 61% reliability when used as an agent for everyday computing tasks.
Essentially, that means when you use Sonnet 4.5 as an agent to complete an assignment featuring multi-step tasks like opening apps, editing files, navigating Web pages and filling out forms, it will complete those assignments for you 61% of the time.
One caveat: That reliability figure comes from the OSWorld-Verified benchmark, which measures Sonnet 4.5's performance in a sandbox environment where researchers pit the AI against a set of pre-programmed, digital encounters that never change.
Out on the Web – where things can get unpredictable very quickly – performance could be worse.
Bottom line: If an AI agent that finishes three out of every five tasks turns your crank, this could be the AI you've been looking for.
In other news and analysis on AI writing:
*LinkedIn's CEO: 'I Write Virtually All My Emails With AI Now': Crediting AI for making him sound 'super smart' when it comes to emails, LinkedIn CEO Ryan Roslansky says he writes nearly all of his emails using AI now.
Observes writer Sherin Shibu: "Roslansky, who has led LinkedIn for the past five years, said that using AI is like tapping into 'a second brain' personalized just for him."
*Another 'AI Writing Humanizer' Tool Launches: JustDone has just rolled out an 'AI humanizer' tool that transforms the sometimes robotic writing of chatbots like ChatGPT into more human-sounding text.
Sounds good in theory.
But truth be told, you can do your own 'humanizing' with ChatGPT simply by including writing style directions in your prompt.
For example: Simply add phrases like, “write in a warm, witty, conversational style” or “write at the level of a college freshman, but be sure to inject plenty of deadpan humor in your writing.”
Essentially: Simply experiment with describing the precise kind of writing you’d like from ChatGPT, and you won’t need to pay for a ‘humanizer.’
That said, for best results, write — and humanize your writing — using ChatGPT-4.0.
The reason: ChatGPT-5 and other chatbots often resist or water down prompting that attempts to alter writing style.
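For those working through the API rather than the chat window, here's a minimal sketch of the same trick using OpenAI's Python client. The style wording, the example prompt and the model name are assumptions to adapt to your own setup:

```python
# Minimal sketch: "humanizing" output by baking style directions into the prompt.
# Uses the official openai Python package; the model name below is an assumption,
# swap in whichever model you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

style = (
    "Write in a warm, witty, conversational style at the level of a "
    "college freshman, with plenty of deadpan humor."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": style},
        {"role": "user", "content": "Draft a short email declining a meeting invite."},
    ],
)

print(response.choices[0].message.content)
```

The same style line pasted at the top of a ChatGPT prompt does the same job without any code.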
*New Microsoft 365 'Premium' Tier Promising Advanced AI: Microsoft has rolled out a 'luxury' version of its productivity suite, billed at $20/month, that offers:
–Higher usage limits with AI
–GPT-4 image generation from OpenAI
–Deep research, vision and actions
–Standard apps that have been with 365 for years, such as Word, Excel, PowerPoint and Outlook
*OpenAI Launches New Social Media Video App: Video fans just got another text-to-video tool from ChatGPT’s maker – which is designed to compete with the likes of TikTok, Instagram Reels and YouTube Shorts.
The feature setting users’ imaginations ablaze: The ability to drop an image of yourself – or anyone else – into any video the app creates.
Even better: The social media app uses Sora 2, OpenAI’s new video creator, which offers enhanced precision in the creation of complex movement, sound, dialogue and effects for short videos.
*AI Chat, Talking Avatar Style: If chatting with an AI-powered animated character is on your bucket list, Microsoft has the solution.
It’s just rolled out 40 experimental characters you can chat with under its $20/month, Copilot Pro subscription.
Observes writer Lance Whitney: “You can choose from among 40 portraits, all with different genders, races, and nationalities.”
*’Instant Checkout’ Opens for Business in ChatGPT: Now you can buy goods and services while remaining in the ChatGPT app, thanks to a new checkout service from the AI.
Just underway – currently, you can only shop at Etsy in ChatGPT – the AI's maker is promising to soon onboard Shopify and its million-plus merchants to the new feature.
Observes writer Chance Townsend: “OpenAI also revealed that the underlying technology will be open source to help bring agentic commerce to more merchants and developers.”
*Now AI Reports on Police Bodycam Footage, Too: While scores of police agencies have been using AI to write up standard reports, some have also begun using the tech to report on bodycam footage.
Observes DigWatch: “The tool, Draft One, analyzes Axon body-worn camera footage to generate draft reports for specific calls, including theft, trespassing and DUI incidents.”
*No Good at AI? Hasta La Vista, Baby: Early AI adopter Accenture, a consulting firm, has issued a stern warning to staff – get with the AI program, or get another job.
Observes writer Joe Wilkins: “If Accenture workers fail to appease their overlords, the CEO says they’ll be dumped like yesterday’s trash.
“In their place, the IT firm will hire people who already have the AI ‘skills’ necessary to appease stockholders.”
*AI BIG PICTURE: Trump To Taiwan: Produce 50% of Chips in U.S., or You're on Your Own: In a move bringing new definition to the phrase 'heavy-handed,' U.S. President Donald Trump has told Taiwan it needs to move half of its chip production to the U.S. if it wants U.S. help against a Chinese invasion.
Observes writer Ashley Belanger: “To close the deal with Taiwan, (U.S. Commerce Secretary Howard) Lutnick suggested that the U.S. would offer some kind of security guarantee so that they can expect that moving their supply chain into the U.S. won’t eliminate Taiwan’s so-called silicon shield where countries like the U.S. are willing to protect Taiwan because we need their silicon, their chips, so badly.”

Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com help everyone interested in AI-generated writing.
–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
The post New Claude Sonnet 4.5: 61% Reliability In Agent Mode appeared first on Robot Writers AI.
The Ambient Brain: Why Amazon’s Alexa+ Is the AI We’ve Been Waiting For
For the better part of a decade, digital assistants have been stuck in a state of arrested development. Devices like Alexa, Siri, and Google Assistant have become glorified voice-activated egg timers and music players, adept at simple, one-shot commands but […]
The post The Ambient Brain: Why Amazon’s Alexa+ Is the AI We’ve Been Waiting For appeared first on TechSpective.
These little robots literally walk on water
Humanoid robots in the home? Not so fast, says expert
Beyond the Humanoid Hype: Why Collaborative Mobile Robots Are the Real Factory Floor Revolution
Robot Talk Episode 127 – Robots exploring other planets, with Frances Zhu

Claire chatted to Frances Zhu from the Colorado School of Mines about intelligent robotic systems for space exploration.
Frances Zhu has a degree in Mechanical and Aerospace Engineering and a Ph.D. in Aerospace Engineering from Cornell University. She was previously a NASA Space Technology Research Fellow and an Assistant Research Professor in the Hawaii Institute of Geophysics and Planetology at the University of Hawaii, specialising in machine learning, dynamics, systems, and controls engineering. Since 2025, she has been an Assistant Professor at the Colorado School of Mines in the Department of Mechanical Engineering, affiliated with the Robotics program and Space Resources Program.
The importance of a leader–follower relationship for performing tasks
High-Precision RV Reducers Perfectly Matched for Walking Robots
Mars rovers serve as scientists’ eyes and ears from millions of miles away – here are the tools Perseverance used to spot a potential sign of ancient life
Scientists absorb data on monitors in mission control for NASA’s Perseverance Mars rover. NASA/Bill Ingalls, CC BY-NC-ND.
By Ari Koeppel, Dartmouth College
NASA’s search for evidence of past life on Mars just produced an exciting update. On Sept. 10, 2025, a team of scientists published a paper detailing the Perseverance rover’s investigation of a distinctive rock outcrop called Bright Angel on the edge of Mars’ Jezero Crater. This outcrop is notable for its light-toned rocks with striking mineral nodules and multicolored, leopard print-like splotches.
By combining data from five scientific instruments, the team determined that these nodules formed through processes that could have involved microorganisms. While this finding is not direct evidence of life, it’s a compelling discovery that planetary scientists hope to look into more closely.
Bright Angel rock surface at the Beaver Falls site on Mars shows nodules on the right and a leopard-like pattern at the center. NASA/JPL-Caltech/MSSS
To appreciate how discoveries like this one come about, it’s helpful to understand how scientists engage with rover data — that is, how planetary scientists like me use robots like Perseverance on Mars as extensions of our own senses.
Experiencing Mars through data
When you strap on a virtual reality headset, you suddenly lose your orientation to the immediate surroundings, and your awareness is transported by light and sound to a fabricated environment. For Mars scientists working on rover mission teams, something very similar occurs when rovers send back their daily downlinks of data.
Several developers, including MarsVR, Planetary Visor and Access Mars, have actually worked to build virtual Mars environments for viewing with a virtual reality headset. However, much of Mars scientists’ daily work instead involves analyzing numerical data visualized in graphs and plots. These datasets, produced by state-of-the-art sensors on Mars rovers, extend far beyond human vision and hearing.
A virtual Mars environment developed by Planetary Visor incorporates both 3D landscape data and rover instrument data as pop-up plots. Scientists typically access data without entering a virtual reality space. However, tools like this give the public a sense for how mission scientists experience their work.
Developing an intuition for interpreting these complex datasets takes years, if not entire careers. It is through this “mind-data connection” that scientists build mental models of Martian landscapes – models they then communicate to the world through scientific publications.
The robots’ tool kit: Sensors and instruments
Five primary instruments on Perseverance, aided by machine learning algorithms, helped describe the unusual rock formations at a site called Beaver Falls and the past they record.
Robotic hands: Mounted on the rover’s robotic arm are tools for blowing dust aside and abrading rock surfaces. These ensure the rover analyzes clean samples.
Cameras: Perseverance hosts 19 cameras for navigation, self-inspection and science. Five science-focused cameras played a key role in this study. These cameras captured details unseeable by human eyes, including magnified mineral textures and light in infrared wavelengths. Their images revealed that Bright Angel is a mudstone, a type of sedimentary rock formed from fine sediments deposited in water.
Spectrometers: Instruments such as SuperCam and SHERLOC – scanning habitable environments with Raman and luminescence for organics and chemicals – analyze how rocks reflect or emit light across a range of wavelengths. Think of this as taking hundreds of flash photographs of the same tiny spot, all in different “colors.” These datasets, called spectra, revealed signs of water integrated into mineral structures in the rock and traces of organic molecules: the basic building blocks of life.
Subsurface radar: RIMFAX, the radar imager for Mars subsurface experiment, uses radio waves to peer beneath Mars’ surface and map rock layers. At Beaver Falls, this showed the rocks were layered over other ancient terrains, likely due to the activity of a flowing river. Areas with persistently present water are better habitats for microbes than dry or intermittently wet locations.
X-ray chemistry: PIXL, the planetary instrument for X-ray lithochemistry, bombards rock surfaces with X-rays and observes how the rock glows or reflects them. This technique can tell researchers which elements and minerals the rock contains at a fine scale. PIXL revealed that the leopard-like spots found at Beaver Falls differed chemically from the surrounding rock. The spots resembled patterns on Earth formed by chemical reactions that are mediated by microbes underwater.
Key Perseverance Mars Rover instruments used in this analysis. NASA
Together, these instruments produce a multifaceted picture of the Martian environment. Some datasets require significant processing, and refined machine learning algorithms help the mission teams turn that information into a more intuitive description of the Jezero Crater’s setting, past and present.
The challenge of uncertainty
Despite Perseverance’s remarkable tools and processing software, uncertainty remains in the results. Science, especially when conducted remotely on another planet, is rarely black and white. In this case, the chemical signatures and mineral formations at Beaver Falls are suggestive – but not conclusive – of past life on Mars.
There actually are tools, such as mass spectrometers, that can show definitively whether a rock sample contains evidence of biological activity. However, these instruments are currently too fragile, heavy and power-intensive for Mars missions.
Fortunately, Perseverance has collected and sealed rock core samples from Beaver Falls and other promising sites in Jezero Crater with the goal of sending them back to Earth. If the current Mars sample return plan can retrieve these samples, laboratories on Earth can scrutinize them far more thoroughly than the rover was able to.
Perseverance selfie at Cheyava Falls sampling site in the Beaver Falls location. NASA/JPL-Caltech/MSSS
Investing in our robotic senses
This discovery is a testament to decades of NASA’s sustained investment in Mars exploration and the work of engineering teams that developed these instruments. Yet these investments face an uncertain future.
The White House’s budget office recently proposed cutting 47% of NASA’s science funding. Such reductions could curtail ongoing missions, including Perseverance’s continued operations, which are targeted for a 23% cut, and jeopardize future plans such as the Mars sample return campaign, among many other missions.
Perseverance represents more than a machine. It is a proxy extending humanity’s senses across millions of miles to an alien world. These robotic explorers and the NASA science programs behind them are a key part of the United States’ collective quest to answer profound questions about the universe and life beyond Earth.
Ari Koeppel, Earth Sciences Postdoctoral Scientist and Adjunct Associate, Dartmouth College
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Unstructured document prep for agentic workflows
If you’ve ever burned hours wrangling PDFs, screenshots, or Word files into something an agent can use, you know how brittle OCR and one-off scripts can be. They break on layout changes, lose tables, and slow launches.
This isn't just an occasional nuisance. Analysts estimate that ~80% of enterprise data is unstructured. And as retrieval-augmented generation (RAG) pipelines mature, they're becoming "structure-aware," because flat OCR collapses under the weight of real-world documents.
Unstructured data is the bottleneck. Most agent workflows stall because documents are messy and inconsistent, and parsing quickly turns into a side project that expands scope.
But there’s a better option: Aryn DocParse, now integrated into DataRobot, lets agents turn messy documents into structured fields reliably and at scale, without custom parsing code.
What used to take days of scripting and troubleshooting can now take minutes: connect a source — even scanned PDFs — and feed structured outputs straight into RAG or tools. Preserving structure (headings, sections, tables, figures) reduces silent errors that cause rework, and answers improve because agents retain the hierarchy and table context needed for accurate retrieval and grounded reasoning.
Why this integration matters
For developers and practitioners, this isn’t just about convenience. It’s about whether your agent workflows make it to production without breaking under the chaos of real-world document formats.
The impact shows up in three key ways:
Easy document prep
What used to take days of scripting and cleanup now happens in a single step. Teams can add a new source — even scanned PDFs — and feed it into RAG pipelines the same day, with fewer scripts to maintain and faster time to production.
Structured, context-rich outputs
DocParse preserves hierarchy and semantics, so agents can tell the difference between an executive summary and a body paragraph, or a table cell and surrounding text. The result: simpler prompts, clearer citations, and more accurate answers.
More reliable pipelines at scale
A standardized output schema reduces breakage when document layouts change. Built-in OCR and table extraction handle scans without hand-tuned regex, lowering maintenance overhead and cutting down on incident noise.
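To make the "structured, context-rich outputs" point above concrete, here's an illustrative sketch of the kind of element list a structure-aware parser hands back. The field names are hypothetical placeholders, not the actual DocParse schema:

```python
# Illustrative only: hypothetical field names, not the actual DocParse schema.
# The point is that every element keeps its type and its place in the document,
# so an agent can distinguish a heading from body text from a table cell.
parsed_elements = [
    {"type": "Section-header", "text": "Executive Summary", "page": 1},
    {"type": "Text", "text": "Revenue grew 12% year over year...", "page": 1,
     "parent": "Executive Summary"},
    {"type": "Table", "page": 2, "parent": "Q3 Results",
     "cells": [["Region", "Revenue"], ["EMEA", "$4.2M"], ["APAC", "$3.1M"]]},
]

# Because structure survives parsing, a citation can point at "the EMEA row of
# the Q3 Results table" rather than an offset into a blob of flattened text.
tables = [e for e in parsed_elements if e["type"] == "Table"]
print(tables[0]["cells"][1])  # ['EMEA', '$4.2M']
```

The exact schema will differ, but any representation that keeps type, hierarchy, and table cells intact gives downstream prompts far more to work with.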
What you can do with it
Under the hood, the integration brings together three capabilities practitioners have been asking for:
Broad format coverage
From PDFs and Word docs to PowerPoint slides and common image formats, DocParse handles the formats that usually trip up pipelines — so you don’t need separate parsers for every file type.
Layout preservation for precise retrieval
Document hierarchy and tables are retained, so answers reference the right sections and cells instead of collapsing into flat text. Retrieval stays grounded, and citations actually point to the right spot.
Seamless downstream use
Outputs flow directly into DataRobot workflows for retrieval, prompting, or function tools. No glue code, no brittle handoffs — just structured inputs ready for agents.
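Here's a hedged sketch of how those structured elements flow into a retrieval pipeline. The parse_document stub and VectorIndex class below are placeholders standing in for the real parser and retriever calls, not the DataRobot or Aryn DocParse API:

```python
# Hedged sketch of the parse -> chunk -> index flow. parse_document and
# VectorIndex are placeholders for whatever parser and retriever you actually
# use; they are not the real DataRobot or Aryn DocParse API.

def parse_document(path: str) -> list[dict]:
    """Placeholder parser: pretend `path` was parsed into structured elements."""
    return [
        {"type": "Section-header", "text": "Q3 Results"},
        {"type": "Text", "text": "EMEA revenue grew 12% quarter over quarter."},
        {"type": "Table", "cells": [["Region", "Revenue"], ["EMEA", "$4.2M"]]},
    ]

class VectorIndex:
    """Placeholder retriever store: keeps (text, metadata) pairs for RAG."""
    def __init__(self) -> None:
        self.items: list[tuple[str, dict]] = []

    def add(self, text: str, metadata: dict) -> None:
        self.items.append((text, metadata))

def ingest(path: str, index: VectorIndex) -> None:
    """Walk parsed elements, carrying section context into each chunk's metadata."""
    section = None
    for element in parse_document(path):
        if element["type"] == "Section-header":
            section = element["text"]
        else:
            text = element.get("text") or str(element["cells"])
            index.add(text, {"source": path, "section": section, "type": element["type"]})

index = VectorIndex()
ingest("quarterly_report.pdf", index)
print(len(index.items))  # 2 chunks, each tagged with its section and element type
```

The design point is the metadata: because each chunk carries its section and element type, retrieval can ground answers in the right part of the document instead of a flat text dump.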
One place to build, operate, and govern AI agents
This integration isn’t just about cleaner document parsing. It closes a critical gap in the agent workflow. Most point tools or DIY scripts stall at the handoffs, breaking when layouts shift or pipelines expand.
This integration is part of a bigger shift: moving from toy demos to agents that can reason over real enterprise knowledge, with governance and reliability built in so they can stand up in production.
That means you can build, operate, and govern agentic applications in one place, without juggling separate parsers, glue code, or fragile pipelines. It’s a foundational step in enabling agents that can reason over real enterprise knowledge with confidence.
From bottleneck to building block
Unstructured data doesn’t have to be the step that stalls your agent workflows. With Aryn now integrated into DataRobot, agents can treat PDFs, Word files, slides, and scans like clean, structured inputs — no brittle parsing required.
Connect a source, parse to structured JSON, and feed it into RAG or tools the same day. It’s a simple change that removes one of the biggest blockers to production-ready agents.
The best way to understand the difference is to try it on your own messy PDFs, slides, or scans, and see how much smoother your workflows run when structure is preserved end to end.
Start a free trial and experience how quickly you can turn unstructured documents into structured, agent-ready inputs. Questions? Reach out to our team.
The post Unstructured document prep for agentic workflows appeared first on DataRobot.