Meet the AI-powered robotic dog ready to help with emergency response
Prototype robotic dogs built by Texas A&M University engineering students and powered by artificial intelligence demonstrate their advanced navigation capabilities. Photo credit: Logan Jinks/Texas A&M University College of Engineering.
By Jennifer Nichols
Meet the robotic dog with a memory like an elephant and the instincts of a seasoned first responder.
Developed by Texas A&M University engineering students, this AI-powered robotic dog doesn't just follow commands: it plans its own routes. Designed to navigate chaos with precision, the robot could help revolutionize search-and-rescue missions, disaster response and many other emergency operations.
Sandun Vitharana, an engineering technology master’s student, and Sanjaya Mallikarachchi, an interdisciplinary engineering doctoral student, spearheaded the invention of the robotic dog. It can process voice commands and uses AI and camera input to perform path planning and identify objects.
A roboticist would describe it as a terrestrial robot that uses a memory-driven navigation system powered by a multimodal large language model (MLLM). The system interprets visual inputs and generates routing decisions, integrating environmental image capture, high-level reasoning and path optimization. A hybrid control architecture combines strategic planning with real-time adjustments.
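As a rough illustration of that hybrid design, the sketch below pairs a slow, deliberative MLLM planner with a fast low-level tracking loop. It is a minimal sketch under assumed interfaces: the function names, the waypoint format and the stub planner are hypothetical placeholders, not the team's actual code.

```python
"""Hypothetical sketch of the hybrid architecture described above: a slow
MLLM planner turns camera images into waypoints, while a faster low-level
loop tracks them. Every name here is an assumed placeholder."""

from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float
    y: float

def mllm_plan(image: bytes, goal: Waypoint) -> list[Waypoint]:
    """Stand-in for the multimodal LLM call: image + goal in, route out.
    A real system would send the image to the model and parse its reply."""
    return [Waypoint(0.5, 0.0), Waypoint(1.0, 0.5), goal]

def track(wp: Waypoint) -> bool:
    """Stand-in low-level controller step; returns False if an obstacle
    forces the robot to abandon the current plan."""
    print(f"tracking ({wp.x:.1f}, {wp.y:.1f})")
    return True

def navigate(goal: Waypoint, max_replans: int = 5) -> None:
    for _ in range(max_replans):
        image = b"camera frame"            # environmental image capture
        plan = mllm_plan(image, goal)      # high-level reasoning, path choice
        if all(track(wp) for wp in plan):  # real-time tracking and adjustment
            return                         # goal reached
    raise RuntimeError("navigation failed")

navigate(Waypoint(2.0, 1.0))
```

The design point is the split in rates: the expensive MLLM is consulted only when a new plan is needed, while the tracking loop handles moment-to-moment adjustments.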
A pair of robotic dogs that navigate using artificial intelligence climb concrete obstacles during a demonstration of their capabilities. Photo credit: Logan Jinks/Texas A&M University College of Engineering.
Robot navigation has evolved from simple landmark-based methods to complex computational systems that integrate multiple sensory sources. Even so, autonomous exploration in unpredictable, unstructured environments such as disaster zones or remote areas remains difficult, because efficiency and adaptability are critical there.
While robot dogs and large language model-based navigation each exist in other contexts, combining a custom MLLM with a visual memory-based system is a novel concept, especially in a general-purpose, modular framework.
“Some academic and commercial systems have integrated language or vision models into robotics,” said Vitharana. “However, we haven’t seen an approach that leverages MLLM-based memory navigation in the structured way we describe, especially with custom pseudocode guiding decision logic.”
Mallikarachchi and Vitharana began by exploring how an MLLM could interpret visual data from a camera in a robotic system. With support from the National Science Foundation, they combined this idea with voice commands to build a natural and intuitive system to show how vision, memory and language can come together interactively. The robot can respond quickly to avoid a collision, and it handles high-level planning by using the custom MLLM to analyze its current view and decide how best to proceed.
“Moving forward, this kind of control structure will likely become a common standard for human-like robots,” Mallikarachchi explained.
The robot’s memory-based system allows it to recall and reuse previously traveled paths, making navigation more efficient by reducing repeated exploration. This ability is critical in search-and-rescue missions, especially in unmapped areas and GPS-denied environments.
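The paper's exact memory representation isn't described here, so the following is only a plausible sketch: traveled routes are stored keyed by an embedding of the view where they began, and a sufficiently similar new view retrieves the stored route instead of triggering fresh exploration. The embed, cosine and PathMemory names are all hypothetical.

```python
"""Illustrative sketch (assumed, not the authors' implementation) of a
visual path memory: previously traveled routes are keyed by image
embeddings so the robot can reuse a known path instead of re-exploring."""

import math

def embed(image: bytes) -> list[float]:
    """Stand-in for a visual encoder; a real system would use the MLLM
    or a vision model to embed the camera frame."""
    return [float(b) / 255.0 for b in image[:4]]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-9)

class PathMemory:
    def __init__(self, threshold: float = 0.95):
        self.entries: list[tuple[list[float], list[str]]] = []
        self.threshold = threshold

    def remember(self, image: bytes, path: list[str]) -> None:
        self.entries.append((embed(image), path))

    def recall(self, image: bytes) -> list[str] | None:
        """Return a stored path whose starting view matches, if any."""
        query = embed(image)
        best = max(self.entries, key=lambda e: cosine(e[0], query), default=None)
        if best and cosine(best[0], query) >= self.threshold:
            return best[1]
        return None

memory = PathMemory()
memory.remember(b"\x10\x80\x20\xff", ["forward 2m", "left", "forward 1m"])
print(memory.recall(b"\x10\x80\x20\xff"))   # reuses the known route
```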
Potential applications extend well beyond emergency response. Hospitals, warehouses and other large facilities could use the robots to improve efficiency. The advanced navigation system might also assist people with visual impairments, explore minefields or perform reconnaissance in hazardous areas.
Nuralem Abizov, Amanzhol Bektemessov and Aidos Ibrayev from Kazakhstan’s International Engineering and Technological University developed the ROS2 infrastructure for the project. HG Chamika Wijayagrahi from the UK’s Coventry University supported the map design and the analysis of experimental results.
Vitharana and Mallikarachchi presented the robot and demonstrated its capabilities at the recent 22nd International Conference on Ubiquitous Robots. The research was published as “A Walk to Remember: MLLM Memory-Driven Visual Navigation.”
Top Ten Stories of the Year in AI Writing: 2025
In future years, 2025 will most often be remembered in AI writing as the year Google grew tired of being an also-ran and decisively grabbed the crown as the ‘Titan to Beat’ in AI-automated thinking, writing and imaging.
That bold move by Google has been a great boon to writers, who can look forward to ever-more-fierce competition among AI’s key players in coming years – and ever more sophisticated AI writing tools.
Meanwhile, 2025 also decisively etched in the minds of business leaders that AI was more than simply a stunning wonder: It also became one of the world’s most formidable new competitive tools that tech has to offer.
Specifically: Studies emerged showing that general use of ChatGPT at businesses was resulting in major productivity gains.
And still other studies found that ChatGPT and similar AI tools were logging significant gains in productivity and writing quality when used specifically to auto-generate emails at businesses.
Meanwhile, AI grew significantly more intelligent, with ChatGPT releasing an AI engine deemed smarter than 98% of all humans.
Plus, a dark-horse research team from China shocked the world by releasing an AI engine nearly as good as ChatGPT that was built for pennies on the dollar.
Bottom line: Given all the breakneck advances in AI during 2025, even the most skeptical can no longer claim that AI is a fanciful creation of the AI hype machine.
Instead, even the most skeptical must come to realize AI is the real deal.
And even the most skeptical must come to agree that AI and all its permutations will change the world as we know it.
Here’s detail on the top stories of the year that helped shape that takeaway:
*Gemini 3.0: The New Gold Standard In AI: After years of watching glumly from the sidelines as a nimble new start-up – ChatGPT – ate its lunch and soared to record-breaking, worldwide popularity, Google has finally declared “enough is enough” and released a new chatbot that’s in a league of its own.
Dubbed Gemini 3.0, the new AI definitively dusts its nearest overall competitor – ChatGPT-5.1 – across a wide array of critical benchmark tests.
(A few weeks after this story broke, ChatGPT-5.2 was released, significantly reducing Gemini 3.0’s new lead in AI.)
*ChatGPT’s Top Use at Work: Writing: A new study by ChatGPT’s maker finds that writing is the number one use for the tool at work.
Observes the study’s lead researcher Aaron Chatterji: “Work usage is more common from educated users in highly paid professional occupations.”
Another major study finding: Once mostly embraced by men, ChatGPT is now popular with women.
Specifically, researchers found that by July 2025, 52% of ChatGPT users had names that could be classified as feminine.
*Bringing in ChatGPT for Email: The Business Case: While AI coders push the tech to ever-loftier heights, one thing we already know for sure is that AI can write emails at a world-class level — in a flash.
True, over the long term, AI may one day usher in a world resplendent with abundance, in which AI-powered machines do all the work.
But in the here and now, AI is already saving businesses and organizations serious coin by slashing time spent on email, synthesizing ideas in new ways, ending email drudgery as we know it and boosting staff morale.
Essentially: There are all sorts of reasons for businesses and organizations to bring in bleeding-edge AI tools like ChatGPT, Gemini, Anthropic’s Claude and similar to take over the heavy lifting when it comes to email.
This piece offers up the Top Ten.
*AI Users: ‘AI Has Tripled My Productivity’: A new survey of U.S. workers finds they’re reducing the time it takes to complete some tasks by as much as two-thirds.
Moreover, 40% of U.S. workers reported they were using AI in some way in April 2025 – as compared to 30% of workers just four months prior.
Even so, more gains would be possible if more of these early adopters leveraged relatively sophisticated applications of AI, such as AI-powered deep research, AI agents and similar advanced AI systems, according to Ethan Mollick, a business technology professor at the University of Pennsylvania.
*New ChatGPT AI Engine Smarter than 98% of Humans: Stick a fork in it: Apparently, the battle of wits between humans and AI is so yesterday — and we flesh-bags have lost.
New test results from Mensa — the global group of reputedly the smartest people in the world — show that one of ChatGPT’s newest AI engines, o3, has an IQ of 136.
Observes writer Liam Wright: “The score, calculated from a seven-run rolling average, places the model above approximately 98% of the human population.”
Currently, ChatGPT runs on a number of specialized AI engines — including ChatGPT-4o, which is rated best overall for writing.
ChatGPT-o3 was designed to excel in reasoning, math and other hard-science applications.
*’Tweaked’ AI Writing Can Now Be Copyrighted: In a far-reaching decision, the U.S. Copyright Office has ruled that AI-generated content — modified by humans — can now be copyrighted.
The move has incredibly positive ramifications for writers who polish output from ChatGPT and similar AI to create blog posts, articles, books, poetry and more.
Observes writer Jacqueline So: “The U.S. Copyright Office processes approximately 500,000 copyright applications each year, with an increasing number being requests to copyright AI-generated works.”
“Most copyright decisions are made on a case-by-case basis.”
*ChatGPT-Maker Brings Back ChatGPT-4o, Other Legacy AI Engines: Responding to significant consumer backlash, OpenAI has restored access to GPT-4o and other legacy models that were popular before the release of GPT-5.
Essentially, many users were turned off by GPT-5’s initial personality, which was perceived as cold, distant and terse.
Observes writer Will Knight: “The backlash has sparked a fresh debate over the psychological attachments some users form with chatbots trained to push their emotional buttons.”
*How DeepSeek Outsmarted the Market and Built a Highly Competitive AI Writer/Chatbot: In this piece, New York Times writer Cade Metz offers an insightful look at how newcomer DeepSeek built its AI for pennies on the dollar.
The chatbot stunned AI researchers — and roiled the stock market in February — after showing the world it could develop advanced AI for six million dollars.
DeepSeek’s secret: Moxie. Facing severely restricted access to the bleeding-edge chips needed to develop advanced AI, DeepSeek made up for that deficiency by writing code that was much smarter and much more efficient than that of many competitors.
The bonus for consumers: “Because the Chinese start-up has shared its methods with other AI researchers, its technological tricks are poised to significantly reduce the cost of building AI.”
*Use AI or You’re Fired: In another sign that the days of ‘AI is Your Buddy’ are fading fast, increasing numbers of businesses have turned to strong-arming employees when it comes to AI.
Observes Wall Street Journal writer Lindsay Ellis: “Rank-and-file employees across corporate America have grown worried over the past few years about being replaced by AI.
“Something else is happening now: AI is costing workers their jobs if their bosses believe they aren’t embracing the technology fast enough.”
*Solution to AI Bubble Fears: U.S. Government?: The Wall Street Journal reports that AI is now considered so essential to U.S. defense, the U.S. government may step in to save the AI industry — should it implode from the irrational exuberance of investors.
Observes lead writer Sarah Myers West: “The federal government is already bailing out the AI industry with regulatory changes and public funds that will protect companies in the event of a private sector pullback.
“Despite the lukewarm market signals, the U.S. government seems intent on backstopping American AI — no matter what.”

–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
MIT engineers design an aerial microrobot that can fly as fast as a bumblebee
A time-lapse photo shows a flying microrobot performing a flip. Credit: Courtesy of the Soft and Micro Robotics Laboratory.
By Adam Zewe
In the future, tiny flying robots could be deployed to aid in the search for survivors trapped beneath the rubble after a devastating earthquake. Like real insects, these robots could flit through tight spaces larger robots can’t reach, while simultaneously dodging stationary obstacles and pieces of falling rubble.
So far, aerial microrobots have only been able to fly slowly along smooth trajectories, far from the swift, agile flight of real insects — until now.
MIT researchers have demonstrated aerial microrobots that can fly with speed and agility comparable to those of their biological counterparts. A collaborative team designed a new AI-based controller for the robotic bug that enabled it to follow gymnastic flight paths, such as executing continuous body flips.
With a two-part control scheme that combines high performance with computational efficiency, the robot’s speed and acceleration increased by about 450 percent and 250 percent, respectively, compared to the researchers’ best previous demonstrations.
The speedy robot was agile enough to complete 10 consecutive somersaults in 11 seconds, even when wind disturbances threatened to push it off course.

“We want to be able to use these robots in scenarios that more traditional quadcopter robots would have trouble flying into, but that insects could navigate. Now, with our bioinspired control framework, the flight performance of our robot is comparable to insects in terms of speed, acceleration, and the pitching angle. This is quite an exciting step toward that future goal,” says Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), head of the Soft and Micro Robotics Laboratory within the Research Laboratory of Electronics (RLE), and co-senior author of a paper on the robot.
Chen is joined on the paper by co-lead authors Yi-Hsuan Hsiao, an MIT graduate student in EECS; Andrea Tagliabue PhD ’24; and Owen Matteson, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro); as well as EECS graduate student Suhan Kim; Tong Zhao MEng ’23; and co-senior author Jonathan P. How, the Ford Professor of Engineering in the Department of Aeronautics and Astronautics and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research appears today in Science Advances.
An AI controller
Chen’s group has been building robotic insects for more than five years.
They recently developed a more durable version of their tiny robot, a microcassette-sized device that weighs less than a paperclip. The new version uses larger flapping wings that enable more agile movements; the wings are driven by a set of squishy artificial muscles that flap them at an extremely fast rate.
But the controller — the “brain” of the robot that determines its position and tells it where to fly — was hand-tuned by a human, limiting the robot’s performance.
For the robot to fly quickly and aggressively like a real insect, it needed a more robust controller that could account for uncertainty and perform complex optimizations quickly.
But such a controller is typically too computationally intensive to deploy in real time, especially given the complicated aerodynamics of the lightweight robot.
To overcome this challenge, Chen’s group joined forces with How’s team and, together, they crafted a two-step, AI-driven control scheme that provides the robustness necessary for complex, rapid maneuvers, and the computational efficiency needed for real-time deployment.
“The hardware advances pushed the controller so there was more we could do on the software side, but at the same time, as the controller developed, there was more they could do with the hardware. As Kevin’s team demonstrates new capabilities, we demonstrate that we can utilize them,” How says.
For the first step, the team built what is known as a model-predictive controller. This type of powerful controller uses a dynamic, mathematical model to predict the behavior of the robot and plan the optimal series of actions to safely follow a trajectory.
While computationally intensive, it can plan challenging maneuvers like aerial somersaults, rapid turns, and aggressive body tilting. This high-performance planner is also designed to consider constraints on the force and torque the robot could apply, which is essential for avoiding collisions.
For instance, to perform multiple flips in a row, the robot would need to decelerate in such a way that its initial conditions are exactly right for doing the flip again.
“If small errors creep in, and you try to repeat that flip 10 times with those small errors, the robot will just crash. We need to have robust flight control,” How says.
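As a concrete (and deliberately toy) illustration of model-predictive control, the sketch below steers a one-dimensional point mass to a target: it rolls a dynamics model forward over a short horizon, scores candidate thrust sequences against a tracking cost, respects a hard force limit, and applies only the first action of the best sequence before replanning. The real flapping-wing MPC is far more involved; the numbers and cost terms here are arbitrary choices for illustration.

```python
"""Toy model-predictive controller for a 1-D point mass (illustrative
stand-in, not the paper's controller). Each control step searches short
thrust sequences, picks the cheapest rollout, and applies only its first
action: the receding-horizon idea at the heart of MPC."""

import itertools

DT, MASS, F_MAX = 0.05, 1.0, 2.0   # time step (s), mass (kg), force limit (N)
TARGET = 1.0                       # desired position (m)

def rollout(pos: float, vel: float, forces: tuple[float, ...]) -> float:
    """Simulate the model forward and return the accumulated tracking cost."""
    cost = 0.0
    for f in forces:
        vel += (f / MASS) * DT
        pos += vel * DT
        cost += (pos - TARGET) ** 2 + 0.5 * vel ** 2  # position error + speed penalty
    return cost

def mpc_step(pos: float, vel: float, horizon: int = 5) -> float:
    """Pick the best constrained thrust sequence; return its first action."""
    candidates = (-F_MAX, 0.0, F_MAX)                 # respects the force constraint
    best = min(itertools.product(candidates, repeat=horizon),
               key=lambda seq: rollout(pos, vel, seq))
    return best[0]

pos, vel = 0.0, 0.0
for _ in range(200):                                  # closed-loop simulation
    f = mpc_step(pos, vel)
    vel += (f / MASS) * DT
    pos += vel * DT
print(f"final position: {pos:.2f} m, velocity: {vel:.2f} m/s")  # settles near the target
```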
They use this expert planner to train a “policy,” a deep-learning model that controls the robot in real time, through a process called imitation learning. A policy is the robot’s decision-making engine, telling the robot where and how to fly.
Essentially, the imitation-learning process compresses the powerful controller into a computationally efficient AI model that can run very fast.
The key was having a smart way to create just enough training data, which would teach the policy everything it needs to know for aggressive maneuvers.
“The robust training method is the secret sauce of this technique,” How explains.
The AI-driven policy takes robot positions as inputs and outputs control commands in real time, such as thrust force and torques.
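Here is a minimal sketch of that compression step, under strong simplifying assumptions: a hand-written proportional-derivative expert stands in for the expert MPC, and a linear policy fit by gradient descent stands in for the team's deep network. What carries over is the shape of the pipeline: log (state, action) pairs from the expensive expert, fit a cheap policy to imitate them, then run only the cheap policy in the real-time loop.

```python
"""Imitation-learning sketch (assumed, simplified): distill an expensive
expert controller into a policy that is cheap to evaluate in real time.
A PD expert and a linear policy stand in for the paper's MPC and deep net."""

import random

def expert(pos: float, vel: float) -> float:
    """Stand-in expert (the role the model-predictive controller plays)."""
    return 4.0 * (1.0 - pos) - 2.0 * vel   # drive toward pos = 1.0

# 1) Log (state, action) training pairs from the expert.
random.seed(0)
data = [((p, v), expert(p, v))
        for p, v in ((random.uniform(-1, 2), random.uniform(-2, 2))
                     for _ in range(200))]

# 2) Fit a linear policy u = w_pos*pos + w_vel*vel + b by gradient descent.
w_pos = w_vel = b = 0.0
lr, n = 0.05, len(data)
for _ in range(2000):
    gp = gv = gb = 0.0
    for (pos, vel), u in data:
        err = (w_pos * pos + w_vel * vel + b) - u
        gp, gv, gb = gp + err * pos, gv + err * vel, gb + err
    w_pos -= lr * gp / n
    w_vel -= lr * gv / n
    b -= lr * gb / n

def policy(pos: float, vel: float) -> float:
    """Learned policy: state in, control command out, constant-time."""
    return w_pos * pos + w_vel * vel + b

print(policy(0.0, 0.0))   # about 4.0, closely matching the expert's command
```

In the paper's setting, the same idea lets a compact network approximate the MPC's outputs at a small fraction of its compute cost.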
Insect-like performance
In their experiments, this two-step approach enabled the insect-scale robot to fly 447 percent faster while exhibiting a 255 percent increase in acceleration. The robot was able to complete 10 somersaults in 11 seconds, and the tiny robot never strayed more than 4 or 5 centimeters off its planned trajectory.
“This work demonstrates that soft and microrobots, traditionally limited in speed, can now leverage advanced control algorithms to achieve agility approaching that of natural insects and larger robots, opening up new opportunities for multimodal locomotion,” says Hsiao.
The researchers were also able to demonstrate saccade movement, which occurs when insects pitch very aggressively, fly rapidly to a certain position, and then pitch the other way to stop. This rapid acceleration and deceleration help insects localize themselves and see clearly.
“This bio-mimicking flight behavior could help us in the future when we start putting cameras and sensors on board the robot,” Chen says.
Adding sensors and cameras so the microrobots can fly outdoors, without being attached to a complex motion capture system, will be a major area of future work.
The researchers also want to study how onboard sensors could help the robots avoid colliding with one another or coordinate navigation.
“For the micro-robotics community, I hope this paper signals a paradigm shift by showing that we can develop a new control architecture that is high-performing and efficient at the same time,” says Chen.
“This work is especially impressive because these robots still perform precise flips and fast turns despite the large uncertainties that come from relatively large fabrication tolerances in small-scale manufacturing, wind gusts of more than 1 meter per second, and even its power tether wrapping around the robot as it performs repeated flips,” says Sarah Bergbreiter, a professor of mechanical engineering at Carnegie Mellon University, who was not involved with this work.
“Although the controller currently runs on an external computer rather than onboard the robot, the authors demonstrate that similar, but less precise, control policies may be feasible even with the more limited computation available on an insect-scale robot. This is exciting because it points toward future insect-scale robots with agility approaching that of their biological counterparts,” she adds.
This research is funded, in part, by the National Science Foundation (NSF), the Office of Naval Research, Air Force Office of Scientific Research, MathWorks, and the Zakhartchenko Fellowship.

