
Robot Talk Episode 127 – Robots exploring other planets, with Frances Zhu

Claire chatted to Frances Zhu from the Colorado School of Mines about intelligent robotic systems for space exploration.

Frances Zhu has a degree in Mechanical and Aerospace Engineering and a Ph.D. in Aerospace Engineering from Cornell University. She was previously a NASA Space Technology Research Fellow and an Assistant Research Professor in the Hawaii Institute of Geophysics and Planetology at the University of Hawaii, specialising in machine learning, dynamics, systems, and controls engineering. Since 2025, she has been an Assistant Professor at the Colorado School of Mines in the Department of Mechanical Engineering, affiliated with the Robotics program and Space Resources Program.

Mars rovers serve as scientists’ eyes and ears from millions of miles away – here are the tools Perseverance used to spot a potential sign of ancient life

Scientists absorb data on monitors in mission control for NASA’s Perseverance Mars rover. NASA/Bill Ingalls, CC BY-NC-ND.

By Ari Koeppel, Dartmouth College

NASA’s search for evidence of past life on Mars just produced an exciting update. On Sept. 10, 2025, a team of scientists published a paper detailing the Perseverance rover’s investigation of a distinctive rock outcrop called Bright Angel on the edge of Mars’ Jezero Crater. This outcrop is notable for its light-toned rocks with striking mineral nodules and multicolored, leopard print-like splotches.

By combining data from five scientific instruments, the team determined that these nodules formed through processes that could have involved microorganisms. While this finding is not direct evidence of life, it’s a compelling discovery that planetary scientists hope to look into more closely.

Bright Angel rock surface at the Beaver Falls site on Mars shows nodules on the right and a leopard-like pattern at the center. NASA/JPL-Caltech/MSSS

To appreciate how discoveries like this one come about, it’s helpful to understand how scientists engage with rover data — that is, how planetary scientists like me use robots like Perseverance on Mars as extensions of our own senses.

Experiencing Mars through data

When you strap on a virtual reality headset, you suddenly lose your orientation to the immediate surroundings, and your awareness is transported by light and sound to a fabricated environment. For Mars scientists working on rover mission teams, something very similar occurs when rovers send back their daily downlinks of data.

Several developers, including MarsVR, Planetary Visor and Access Mars, have actually worked to build virtual Mars environments for viewing with a virtual reality headset. However, much of Mars scientists’ daily work instead involves analyzing numerical data visualized in graphs and plots. These datasets, produced by state-of-the-art sensors on Mars rovers, extend far beyond human vision and hearing.

A virtual Mars environment developed by Planetary Visor incorporates both 3D landscape data and rover instrument data as pop-up plots. Scientists typically access data without entering a virtual reality space. However, tools like this give the public a sense for how mission scientists experience their work.

Developing an intuition for interpreting these complex datasets takes years, if not entire careers. It is through this “mind-data connection” that scientists build mental models of Martian landscapes – models they then communicate to the world through scientific publications.

The robots’ tool kit: Sensors and instruments

Five primary instruments on Perseverance, aided by machine learning algorithms, helped describe the unusual rock formations at a site called Beaver Falls and the past they record.

Robotic hands: Mounted on the rover’s robotic arm are tools for blowing dust aside and abrading rock surfaces. These ensure the rover analyzes clean samples.

Cameras: Perseverance hosts 19 cameras for navigation, self-inspection and science. Five science-focused cameras played a key role in this study. These cameras captured details invisible to the human eye, including magnified mineral textures and light at infrared wavelengths. Their images revealed that Bright Angel is a mudstone, a type of sedimentary rock formed from fine sediments deposited in water.

Spectrometers: Instruments such as SuperCam and SHERLOC – scanning habitable environments with Raman and luminescence for organics and chemicals – analyze how rocks reflect or emit light across a range of wavelengths. Think of this as taking hundreds of flash photographs of the same tiny spot, all in different “colors.” These datasets, called spectra, revealed signs of water integrated into mineral structures in the rock and traces of organic molecules: the basic building blocks of life. (A toy example of how a feature such as a hydration band is read out of a spectrum appears after this list of instruments.)

Subsurface radar: RIMFAX, the radar imager for Mars subsurface experiment, uses radio waves to peer beneath Mars’ surface and map rock layers. At Beaver Falls, this showed the rocks were layered over other ancient terrains, likely due to the activity of a flowing river. Areas with persistently present water are better habitats for microbes than dry or intermittently wet locations.

X-ray chemistry: PIXL, the planetary instrument for X-ray lithochemistry, bombards rock surfaces with X-rays and observes how the rock glows or reflects them. This technique can tell researchers which elements and minerals the rock contains at a fine scale. PIXL revealed that the leopard-like spots found at Beaver Falls differed chemically from the surrounding rock. The spots resembled patterns on Earth formed by chemical reactions that are mediated by microbes underwater.
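
To make the spectrometer entry above concrete, here is a minimal Python sketch built entirely on synthetic numbers (not Perseverance data). It constructs a toy reflectance spectrum with an absorption dip near 1.9 micrometers, a wavelength region commonly associated with water bound in minerals, and computes the band-depth metric spectroscopists use to flag such features.

```python
# Illustrative only: a toy reflectance spectrum with a hydration-like
# absorption band near 1.9 micrometers, and the band-depth metric used
# to flag water bound in minerals. All numbers are synthetic.
import numpy as np

# Synthetic spectrum: wavelengths in micrometers, reflectance between 0 and 1.
wavelengths = np.linspace(1.3, 2.6, 200)
reflectance = 0.45 - 0.02 * (wavelengths - 1.3)                    # gentle slope
reflectance -= 0.08 * np.exp(-((wavelengths - 1.9) / 0.05) ** 2)   # absorption dip

def band_depth(wl, refl, left, center, right):
    """Band depth = 1 - R_center / R_continuum, where the continuum is a
    straight line drawn between the two shoulder wavelengths."""
    r_left = np.interp(left, wl, refl)
    r_right = np.interp(right, wl, refl)
    r_center = np.interp(center, wl, refl)
    # Linear continuum evaluated at the band center.
    r_continuum = r_left + (r_right - r_left) * (center - left) / (right - left)
    return 1.0 - r_center / r_continuum

depth = band_depth(wavelengths, reflectance, left=1.8, center=1.9, right=2.1)
print(f"1.9 um band depth: {depth:.3f}")  # a clearly positive value flags hydration
```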

Key Perseverance Mars Rover instruments used in this analysis. NASA

Together, these instruments produce a multifaceted picture of the Martian environment. Some datasets require significant processing, and refined machine learning algorithms help the mission teams turn that information into a more intuitive description of the Jezero Crater’s setting, past and present.

The challenge of uncertainty

Despite Perseverance’s remarkable tools and processing software, uncertainty remains in the results. Science, especially when conducted remotely on another planet, is rarely black and white. In this case, the chemical signatures and mineral formations at Beaver Falls are suggestive – but not conclusive – of past life on Mars.

There actually are tools, such as mass spectrometers, that can show definitively whether a rock sample contains evidence of biological activity. However, these instruments are currently too fragile, heavy and power-intensive for Mars missions.

Fortunately, Perseverance has collected and sealed rock core samples from Beaver Falls and other promising sites in Jezero Crater with the goal of sending them back to Earth. If the current Mars sample return plan can retrieve these samples, laboratories on Earth can scrutinize them far more thoroughly than the rover was able to.

Perseverance selfie at the Cheyava Falls sampling site in the Beaver Falls location. NASA/JPL-Caltech/MSSS

Investing in our robotic senses

This discovery is a testament to decades of NASA’s sustained investment in Mars exploration and the work of engineering teams that developed these instruments. Yet these investments face an uncertain future.

The White House’s budget office recently proposed cutting 47% of NASA’s science funding. Such reductions could curtail ongoing missions, including Perseverance’s continued operations, which are targeted for a 23% cut, and jeopardize future plans such as the Mars sample return campaign, among many other missions.

Perseverance represents more than a machine. It is a proxy extending humanity’s senses across millions of miles to an alien world. These robotic explorers and the NASA science programs behind them are a key part of the United States’ collective quest to answer profound questions about the universe and life beyond Earth.

Ari Koeppel, Earth Sciences Postdoctoral Scientist and Adjunct Associate, Dartmouth College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Unstructured document prep for agentic workflows

If you’ve ever burned hours wrangling PDFs, screenshots, or Word files into something an agent can use, you know how brittle OCR and one-off scripts can be. They break on layout changes, lose tables, and slow launches.

This isn’t just an occasional nuisance. Analysts estimate that ~80% of enterprise data is unstructured. And as retrieval-augmented generation (RAG) pipelines mature, they’re becoming “structure-aware,” because flat OCR output collapses under the weight of real-world documents.

Unstructured data is the bottleneck. Most agent workflows stall because documents are messy and inconsistent, and parsing quickly turns into a side project that expands scope. 

But there’s a better option: Aryn DocParse, now integrated into DataRobot, lets agents turn messy documents into structured fields reliably and at scale, without custom parsing code.

What used to take days of scripting and troubleshooting can now take minutes: connect a source — even scanned PDFs — and feed structured outputs straight into RAG or tools. Preserving structure (headings, sections, tables, figures) reduces silent errors that cause rework, and answers improve because agents retain the hierarchy and table context needed for accurate retrieval and grounded reasoning.
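
As a purely illustrative example (this is a sketch of the idea, not the actual DocParse output schema), structure-preserving parsing returns typed elements rather than one flat string, so headings, body paragraphs, and table cells stay distinguishable downstream:

```python
# Purely illustrative: NOT the actual DocParse schema, just the kind of
# structure-preserving, typed output described above. Every source (PDF,
# slide deck, or scan) maps onto the same element types, so downstream
# code does not have to change when the layout does.
parsed_page = [
    {"type": "section_header", "text": "Q3 Financial Summary", "page": 4},
    {"type": "paragraph",
     "text": "Revenue grew 12% quarter over quarter, driven by new contracts.",
     "page": 4},
    {"type": "table",
     "rows": [["Region", "Revenue", "Change"],
              ["EMEA", "$4.1M", "+9%"],
              ["APAC", "$2.7M", "+15%"]],
     "page": 4},
]

# Because elements are typed, an agent can treat headers, body text, and
# table cells differently instead of flattening everything into one string.
tables = [el for el in parsed_page if el["type"] == "table"]
body_text = " ".join(el["text"] for el in parsed_page if el["type"] == "paragraph")
```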

Why this integration matters

For developers and practitioners, this isn’t just about convenience. It’s about whether your agent workflows make it to production without breaking under the chaos of real-world document formats.

The impact shows up in three key ways:

Easy document prep
What used to take days of scripting and cleanup now happens in a single step. Teams can add a new source — even scanned PDFs — and feed it into RAG pipelines the same day, with fewer scripts to maintain and faster time to production.

Structured, context-rich outputs
DocParse preserves hierarchy and semantics, so agents can tell the difference between an executive summary and a body paragraph, or a table cell and surrounding text. The result: simpler prompts, clearer citations, and more accurate answers.

More reliable pipelines at scale
A standardized output schema reduces breakage when document layouts change. Built-in OCR and table extraction handle scans without hand-tuned regex, lowering maintenance overhead and cutting down on incident noise.

What you can do with it

Under the hood, the integration brings together three capabilities practitioners have been asking for:

Broad format coverage
From PDFs and Word docs to PowerPoint slides and common image formats, DocParse handles the formats that usually trip up pipelines — so you don’t need separate parsers for every file type.

Layout preservation for precise retrieval
Document hierarchy and tables are retained, so answers reference the right sections and cells instead of collapsing into flat text. Retrieval stays grounded, and citations actually point to the right spot.

Seamless downstream use
Outputs flow directly into DataRobot workflows for retrieval, prompting, or function tools. No glue code, no brittle handoffs — just structured inputs ready for agents.
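
To show what that downstream handoff can look like, here is a minimal sketch that reuses the illustrative element shape from the example above. The parse_to_elements() call is a hypothetical placeholder for your parser integration, not an actual DataRobot or Aryn SDK function; the chunking logic simply carries the most recent section header along with each retrieval chunk.

```python
# Minimal sketch of the downstream handoff, reusing the illustrative element
# shape from the earlier example. parse_to_elements() is a hypothetical
# placeholder for your parser integration, NOT an actual SDK call.

def parse_to_elements(path: str) -> list[dict]:
    """Placeholder: a real pipeline would return the parser's typed elements
    (section headers, paragraphs, tables) for the given document."""
    raise NotImplementedError("swap in your document parser here")

def build_chunks(elements: list[dict], max_chars: int = 1200) -> list[dict]:
    """Group typed elements into retrieval-ready chunks, prefixing each chunk
    with the most recent section header so answers and citations keep context."""
    chunks, buffer, pages, section = [], [], set(), ""
    for el in elements:
        if el["type"] == "section_header":
            section = el["text"]
            continue
        if el["type"] == "table":
            text = "\n".join(", ".join(row) for row in el["rows"])
        else:
            text = el["text"]
        if buffer and sum(len(t) for t in buffer) + len(text) > max_chars:
            chunks.append({"text": f"{section}\n" + "\n".join(buffer),
                           "metadata": {"pages": sorted(pages)}})
            buffer, pages = [], set()
        buffer.append(text)
        pages.add(el["page"])
    if buffer:
        chunks.append({"text": f"{section}\n" + "\n".join(buffer),
                       "metadata": {"pages": sorted(pages)}})
    return chunks

# Usage sketch: chunks = build_chunks(parse_to_elements("report.pdf")), then
# hand the chunks to your retrieval or vector-indexing step.
```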

One place to build, operate, and govern AI agents

This integration isn’t just about cleaner document parsing. It closes a critical gap in the agent workflow. Most point tools or DIY scripts stall at the handoffs, breaking when layouts shift or pipelines expand. 

This integration is part of a bigger shift: moving from toy demos to agents that can reason over real enterprise knowledge, with governance and reliability built in so they can stand up in production.

That means you can build, operate, and govern agentic applications in one place, without juggling separate parsers, glue code, or fragile pipelines. It’s a foundational step in enabling agents that can reason over real enterprise knowledge with confidence.

From bottleneck to building block

Unstructured data doesn’t have to be the step that stalls your agent workflows. With Aryn now integrated into DataRobot, agents can treat PDFs, Word files, slides, and scans like clean, structured inputs — no brittle parsing required.

Connect a source, parse to structured JSON, and feed it into RAG or tools the same day. It’s a simple change that removes one of the biggest blockers to production-ready agents.

The best way to understand the difference is to try it on your own messy PDFs, slides, or scans and see how much smoother your workflows run when structure is preserved end to end.

Start a free trial and experience how quickly you can turn unstructured documents into structured, agent-ready inputs. Questions? Reach out to our team.

The post Unstructured document prep for agentic workflows appeared first on DataRobot.

Shape-changing robots: New AI-driven design tool optimizes performance and functionality

Like octopuses squeezing through a tiny sea cave, metatruss robots can adapt to demanding environments by changing their shape. These mighty morphing robots are made of trusses composed of hundreds of beams and joints that rotate and twist, enabling astonishing volumetric transformations.

Princeton’s AI reveals what fusion sensors can’t see

A powerful new AI tool called Diag2Diag is revolutionizing fusion research by filling in missing plasma data with synthetic yet highly detailed information. Developed by Princeton scientists and international collaborators, this system uses sensor input to predict readings other diagnostics can’t capture, especially in the crucial plasma edge region where stability determines performance. By reducing reliance on bulky hardware, it promises to make future fusion reactors more compact, affordable, and reliable.

Rethinking how robots move: Light and AI drive precise motion in soft robotic arm

Photo credit: Jeff Fitlow/Rice University

By Silvia Cernea Clark

Researchers at Rice University have developed a soft robotic arm capable of performing complex tasks such as navigating around an obstacle or hitting a ball, guided and powered remotely by laser beams without any onboard electronics or wiring. The research could inform new ways to control implantable surgical devices or industrial machines that need to handle delicate objects.

In a proof-of-concept study that integrates smart materials, machine learning and an optical control system, a team of Rice researchers led by materials scientist Hanyu Zhu used a light-patterning device to precisely induce motion in a robotic arm made from azobenzene liquid crystal elastomer – a type of polymer that responds to light.

According to the study published in Advanced Intelligent Systems, the new robotic system incorporates a neural network trained to predict the exact light pattern needed to create specific arm movements. This makes it easier for the robot to execute complex tasks without needing similarly complex input from an operator.

“This was the first demonstration of real-time, reconfigurable, automated control over a light-responsive material for a soft robotic arm,” said Elizabeth Blackert, a Rice doctoral alumna who is the first author on the study.

Elizabeth Blackert and Hanyu Zhu (Photo credit: Jeff Fitlow/Rice University).

Conventional robots typically involve rigid structures with mobile elements like hinges, wheels or grippers to enable a predefined, relatively constrained range of motion. Soft robots have opened up new areas of application in contexts like medicine, where safely interacting with delicate objects is required. So-called continuum robots are a type of soft robot that forgoes mobility constraints, enabling adaptive motion with a vastly expanded degree of freedom.

“A major challenge in using soft materials for robots is they are either tethered or have very simple, predetermined functionality,” said Zhu, assistant professor of materials science and nanoengineering. “Building remotely and arbitrarily programmable soft robots requires a unique blend of expertise involving materials development, optical system design and machine learning capabilities. Our research team was uniquely suited to take on this interdisciplinary work.”

The team created a new variation of an elastomer that shrinks under blue laser light, then relaxes and regrows in the dark – a feature known as fast relaxation time that makes real-time control possible. Unlike other light-sensitive materials that require harmful ultraviolet light or take minutes to reset, this one works with safer, longer wavelengths and responds within seconds.

“When we shine a laser on one side of the material, the shrinking causes the material to bend in that direction,” Blackert said. “Our material bends toward laser light like a flower stem does toward sunlight.”

To control the material, the researchers used a spatial light modulator to split a single laser beam into multiple beamlets, each directed to a different part of the robotic arm. The beamlets can be turned on or off and adjusted in intensity, allowing the arm to bend or contract at any given point, much like the tentacles of an octopus. This technique can in principle create a robot with virtually infinite degrees of freedom – far beyond the capabilities of traditional robots with fixed joints.

“What is new here is using the light pattern to achieve complex changes in shape,” said Rafael Verduzco, professor and associate chair of chemical and biomolecular engineering and professor of materials science and nanoengineering. “In prior work, the material itself was patterned or programmed to change shape in one way, but here the material can change in multiple ways, depending on the laser beamlet pattern.”

To train such a multiparameter arm, the team ran a small number of combinations of light settings and recorded how the robot arm deformed in each case, using the data to train a convolutional neural network – a type of artificial intelligence used in image recognition. The model was then able to output the exact light pattern needed to create a desired shape such as flexing or a reach-around motion.
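
The inverse model described above (desired shape in, light pattern out) can be sketched as a small convolutional network. The code below is a hypothetical PyTorch illustration, not the architecture from the paper: it assumes the arm's shape is represented as a 64x64 grayscale image and that eight independently controlled beamlets form the output, both of which are stand-in choices.

```python
# Hypothetical sketch (not the published model): a small CNN that maps a
# desired arm shape, given as a 64x64 grayscale image, to intensities for
# 8 laser beamlets in [0, 1]. Both numbers are illustrative assumptions.
import torch
import torch.nn as nn

NUM_BEAMLETS = 8  # assumed number of independently controlled beamlets

class ShapeToLightCNN(nn.Module):
    """Predicts the beamlet intensity pattern expected to produce a target shape."""
    def __init__(self, num_beamlets: int = NUM_BEAMLETS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_beamlets),
            nn.Sigmoid(),  # constrain intensities to [0, 1]
        )

    def forward(self, shape_img: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(shape_img))

model = ShapeToLightCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(deformation_images, light_settings):
    """One step on calibration data: images of observed deformations paired
    with the beamlet settings that actually produced them."""
    optimizer.zero_grad()
    predicted = model(deformation_images)
    loss = loss_fn(predicted, light_settings)
    loss.backward()
    optimizer.step()
    return loss.item()

# At run time, feed the *desired* shape to get the light pattern to project.
with torch.no_grad():
    desired_shape = torch.zeros(1, 1, 64, 64)   # placeholder target silhouette
    beamlet_pattern = model(desired_shape)      # tensor of shape (1, 8)
```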

The current prototype is flat and moves in 2D, but future versions could bend in three dimensions with additional sensors and cameras.

Photo credit: Jeff Fitlow/Rice University

“This is a step towards having safer, more capable robotics for various applications ranging from implantable biomedical devices to industrial robots that handle soft goods,” Blackert said.

3MP HDR IP69K Camera for Robotics & Autonomous Vehicles

STURDeCAM31 from e-con Systems® is designed to make robotics and autonomous vehicles safer and more reliable. Powered by the Sony® ISX031 sensor and featuring a GMSL2 interface, this compact 3MP camera delivers 120dB HDR + LFM imaging with zero motion blur — even in the most challenging outdoor conditions. Engineered to automotive-grade standards, STURDeCAM31 is IP69K certified, making it resistant to dust, water, vibration, and extreme temperatures. With support for up to 8 synchronized cameras, it enables powerful surround-view and bird's-eye-view systems on NVIDIA® Jetson AGX Orin™.

Security researchers say G1 humanoid robots are secretly sending information to China and can easily be hacked

Researchers have uncovered serious security flaws with the Unitree G1 humanoid robot, a machine that is already being used in laboratories and some police departments. They discovered that G1 can be used for covert surveillance and could potentially launch a full-scale cyberattack on networks.
