Teaching robots to map large environments
The artificial intelligence-driven system incrementally creates and aligns smaller submaps of the scene, stitching them together to reconstruct a full 3D map of a space such as an office cubicle while estimating the robot’s position in real time. Image courtesy of the researchers.
By Adam Zewe
A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain.
Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot’s onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue robot would need to quickly traverse large areas and process thousands of images to complete its mission.
To overcome this problem, MIT researchers drew on ideas from both recent artificial intelligence vision models and classical computer vision to develop a new system that can process an arbitrary number of images. Their system accurately generates 3D maps of complicated scenes like a crowded office corridor in a matter of seconds.
The AI-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map while estimating the robot’s position in real time.
Unlike many other approaches, their technique does not require calibrated cameras or an expert to tune a complex system implementation. The simpler nature of their approach, coupled with the speed and quality of the 3D reconstructions, would make it easier to scale up for real-world applications.
Beyond helping search-and-rescue robots navigate, this method could be used to make extended reality applications for wearable devices like VR headsets or enable industrial robots to quickly find and move goods inside a warehouse.
“For robots to accomplish increasingly complex tasks, they need much more complex map representations of the world around them. But at the same time, we don’t want to make it harder to implement these maps in practice. We’ve shown that it is possible to generate an accurate 3D reconstruction in a matter of seconds with a tool that works out of the box,” says Dominic Maggio, an MIT graduate student and lead author of a paper on this method.
Maggio is joined on the paper by postdoc Hyungtae Lim and senior author Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. The research will be presented at the Conference on Neural Information Processing Systems.
Mapping out a solution
For years, researchers have been grappling with an essential element of robotic navigation called simultaneous localization and mapping (SLAM). In SLAM, a robot recreates a map of its environment while orienting itself within the space.
Traditional optimization methods for this task tend to fail in challenging scenes, or they require the robot’s onboard cameras to be calibrated beforehand. To avoid these pitfalls, researchers train machine-learning models to learn this task from data.
While they are simpler to implement, even the best models can only process about 60 camera images at a time, making them infeasible for applications where a robot needs to move quickly through a varied environment while processing thousands of images.
To solve this problem, the MIT researchers designed a system that generates smaller submaps of the scene instead of the entire map. Their method “glues” these submaps together into one overall 3D reconstruction. The model is still only processing a few images at a time, but the system can recreate larger scenes much faster by stitching smaller submaps together.
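The article doesn’t include code, but the chunk-and-stitch loop is easy to sketch. Below is a minimal outline in Python, where `reconstruct_submap` (standing in for the learned model) and `align` (standing in for the submap-to-submap transform estimator) are hypothetical placeholders, not the authors’ actual interfaces:

```python
import numpy as np

CHUNK = 60    # roughly the most images current models handle at once
OVERLAP = 10  # consecutive chunks share frames so submaps can be matched

def build_map(images, reconstruct_submap, align):
    """Reconstruct a large scene by stitching fixed-size submaps.

    `reconstruct_submap` turns a short image sequence into an (N, 3)
    point cloud; `align` returns a 4x4 transform taking one submap into
    the frame of the previous one. Both are placeholders.
    """
    global_map = []
    prev = None
    pose = np.eye(4)  # accumulated transform into the global frame
    for start in range(0, len(images), CHUNK - OVERLAP):
        sub = reconstruct_submap(images[start:start + CHUNK])
        if prev is not None:
            # Chain this submap's relative transform into the global frame.
            pose = pose @ align(prev, sub)
        # Move the submap's points (N, 3) into the global frame.
        pts_h = np.hstack([sub, np.ones((len(sub), 1))])
        global_map.append((pose @ pts_h.T).T[:, :3])
        prev = sub
    return np.vstack(global_map)
```

The model still sees only one chunk at a time; the speedup comes from never holding the whole image stream in the reconstruction step.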
“This seemed like a very simple solution, but when I first tried it, I was surprised that it didn’t work that well,” Maggio says.
Searching for an explanation, he dug into computer vision research papers from the 1980s and 1990s. Through this analysis, Maggio realized that errors in the way the machine-learning models process images made aligning submaps a more complex problem.
Traditional methods align submaps by applying rotations and translations until they line up. But these new models can introduce some ambiguity into the submaps, which makes them harder to align. For instance, a 3D submap of one side of a room might have walls that are slightly bent or stretched. Simply rotating and translating these deformed submaps to align them doesn’t work.
“We need to make sure all the submaps are deformed in a consistent way so we can align them well with each other,” Carlone explains.
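The “rotations and translations” baseline is classical rigid registration. For two point sets with known correspondences it has a closed-form solution (the Kabsch/Procrustes method); a minimal sketch, to make the rigid assumption concrete:

```python
import numpy as np

def rigid_align(A, B):
    """Rotation R and translation t minimizing ||R @ a + t - b||.

    A, B: (N, 3) arrays of corresponding points from two submaps.
    This classical Kabsch/Procrustes fit assumes the submaps differ
    only by a rigid motion, which is exactly what fails when learned
    submaps come out bent or stretched.
    """
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```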
A more flexible approach
Borrowing ideas from classical computer vision, the researchers developed a more flexible, mathematical technique that can represent all the deformations in these submaps. By applying mathematical transformations to each submap, this more flexible method can align them in a way that addresses the ambiguity.
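The article doesn’t name the exact family of transformations the team uses, but the idea of trading rigidity for expressiveness can be illustrated with an affine fit, which can absorb a consistent stretch or shear that a rigid fit cannot:

```python
import numpy as np

def affine_align(A, B):
    """Least-squares affine map taking points A onto points B.

    Unlike a rigid fit, the 3x3 matrix M may scale, shear, and skew,
    so a consistently deformed submap can be warped into agreement
    with its neighbor. Affine is an illustrative choice here; the
    article does not specify the authors' transformation family.
    """
    A_h = np.hstack([A, np.ones((len(A), 1))])   # (N, 4) homogeneous points
    X, *_ = np.linalg.lstsq(A_h, B, rcond=None)  # (4, 3) least-squares solution
    M, t = X[:3].T, X[3]                         # b approx. M @ a + t
    return M, t
```

With a fit like this applied per submap, a consistently “bent” wall can be straightened into agreement with its neighbor before stitching.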
Based on input images, the system outputs a 3D reconstruction of the scene and estimates of the camera locations, which the robot would use to localize itself in the space.
“Once Dominic had the intuition to bridge these two worlds — learning-based approaches and traditional optimization methods — the implementation was fairly straightforward,” Carlone says. “Coming up with something this effective and simple has potential for a lot of applications.”
Their system performed faster with less reconstruction error than other methods, without requiring special cameras or additional tools to process data. The researchers generated close-to-real-time 3D reconstructions of complex scenes like the inside of the MIT Chapel using only short videos captured on a cell phone.
The average error in these 3D reconstructions was less than 5 centimeters.
In the future, the researchers want to make their method more reliable for especially complicated scenes and work toward implementing it on real robots in challenging settings.
“Knowing about traditional geometry pays off. If you understand deeply what is going on in the model, you can get much better results and make things much more scalable,” Carlone says.
This work is supported, in part, by the U.S. National Science Foundation, U.S. Office of Naval Research, and the National Research Foundation of Korea. Carlone, currently on sabbatical as an Amazon Scholar, completed this work before he joined Amazon.
Digital coworkers: How AI agents are reshaping enterprise teams
Across industries, a new type of employee is emerging: the digital coworker.
AI agents that collaborate, learn, and make decisions are changing how enterprise teams operate and grow.
These aren’t static chatbots or RPA scripts running in the background. They’re autonomous agents that act as colleagues — not code — helping teams move faster, make smarter decisions, and scale institutional knowledge.
Managers are now learning to hire, onboard, and supervise AI agents like human employees, while teams are redefining trust, learning how to share context, and reshaping collaboration around intelligent systems that can act independently.
For leaders, this shift isn’t just about adopting new technology. It’s about transforming how organizations work and scale, and building more adaptive, resilient teams for the age of human-AI collaboration.
This post explores how AI leaders can guide trust, collaboration, and performance as digital coworkers become part of the workforce.
How AI agents are shifting from tools to digital coworkers
AI agents acting as digital coworkers can reason through problems, coordinate across departments, and make decisions that directly influence outcomes.
Unlike traditional rule-based automation tools, these digital colleagues have the autonomy and awareness to carry out complex tasks without constant human supervision.
Consider supply chain operations, for instance. In a “self-fulfilling” supply chain, an agent might:
- Monitor market conditions
- Detect disruptions
- Evaluate alternatives
- Negotiate vendor adjustments
And it can do it all without a human even glancing at their dashboard. Instead of chasing updates and keeping an eye on constant market fluctuations, the human role shifts to strategy.
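As a caricature of that loop, here is a toy sketch in Python. Every object and method name is hypothetical, since the post describes behavior rather than any concrete API:

```python
import time

def supply_chain_agent(market, inventory, vendors, escalate):
    """Toy monitoring loop for a 'self-fulfilling' supply chain agent.

    All four callables are hypothetical stand-ins; the point is the
    shape of the loop: observe, detect, evaluate, act, and pull a
    human in only when confidence is low.
    """
    while True:
        conditions = market.snapshot()                  # monitor market conditions
        disruption = inventory.detect_risk(conditions)  # detect disruptions
        if disruption:
            options = vendors.alternatives(disruption)  # evaluate alternatives
            best = max(options, key=lambda o: o.score)
            if best.score > 0.8:
                vendors.negotiate(best)                 # act autonomously
            else:
                escalate(disruption, options)           # hand off to a human
        time.sleep(60)
```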
For leaders, this shift redefines process efficiency and management itself. It completely changes what it means to assign responsibility, ensure accountability, and measure performance in a workforce that now includes intelligent systems.
Why enterprises are embracing AI employees
The rise of AI employees isn’t about chasing the latest technology trend — it’s about building a more resilient, adaptable workforce.
Enterprises are under constant pressure to sustain performance, manage risk, and respond faster to change. Digital coworkers are emerging as a way to extend capacity and improve consistency in how teams operate.
AI agents can already take on analytical workloads, process monitoring, and repeatable decisions that slow teams down. In doing so, they help human employees focus on the work that requires creativity, strategy, and sound judgment.
For leadership teams, value shows up in measurable outcomes:
- Greater productivity: Agents handle repeatable tasks autonomously, 24/7, compounding efficiency across departments.
- Operational resilience: Continuous execution reduces bottlenecks and helps teams sustain performance through change.
- Faster, data-driven decisions: Agents analyze, simulate, and recommend actions in real time, giving leaders an information edge with less downtime.
- Higher human impact: Teams redirect their time toward creativity, strategy, and innovation.
Forward-looking organizations are already redesigning workflows around this partnership. In finance, agents handle “lights-out lending” processes around the clock while human analysts refine models and validate results. In operations, they monitor supply chains and surface insights before risks escalate.
The result: a more responsive, data-driven enterprise where people and AI each focus on what they do best.
Inside the partnership between humans and AI coworkers
Think about the process of onboarding a new team member: You introduce processes, show how systems connect, and gradually increase responsibility. Agent onboarding follows that same pattern, except the learning curve is measured in hours — not months.
Over time, the agent + employee partnership evolves. Agents handle the repeatable and time-sensitive (monitoring data flows, coordinating across systems, keeping decisions moving), while humans focus on creative, strategic, and relationship-driven work that requires context and judgment.
Let’s go back to the supply chain example above. In supply chain management, AI agents monitor demand signals, adjust inventory, and coordinate vendors automatically, while human leaders focus on long-term resilience and supplier strategy. That division of work turns human oversight into orchestration and gives teams the freedom (and time) to operate proactively instead of reactively.
This collaboration model is redefining how teams communicate, assign responsibility, and measure success, setting the stage for deeper cultural shifts.
The culture shift: Working with digital teammates
Cultural adaptation to digital coworkers follows a predictable pattern, but the timeline varies depending on how teams manage the change. Skepticism is normal early on as employees question how much they should trust automated decisions or delegate responsibility to agents. But over time, as AI coworkers prove reliable and transparent in their actions, teams feel more confident in them and collaboration starts to feel natural.
The initial hurdle often centers on trust and control. Human teams are used to knowing who’s responsible for what, how decisions get made, and where to go when problems arise. Digital agents introduce a new and unfamiliar element where some decisions happen automatically, processes run without human oversight, and coordination occurs between systems instead of people.
This “trust curve” typically:
- Starts with skepticism: “Can this agent really handle complex tasks and decisions?”
- Moves through cautious testing: “Let’s see how it performs on lower-risk processes.”
- Reaches collaborative confidence: “This agent consistently makes good decisions faster than we could.”
But what happens when agents disagree with human decisions, or when their recommendations go against “the way we’ve always done it”?
These moments are actually a blessing in disguise: they’re opportunities for humans to weigh competing agent recommendations.
It’s in these moments that hidden assumptions in your processes might surface, revealing potentially better approaches that neither humans nor agents would have discovered on their own. And the final solution might involve human expertise, agent automation, or a healthy combination of both.
Preparing for the next phase of human + AI collaboration
Moving from traditional teams to human-agent collaboration offers operational improvement and competitive differentiation that compounds over time. Early adopters are already building organizational capabilities that competitors will struggle to replicate as they play catch-up.
AI agents are the digital employees that can learn your business context, maintain governance, streamline your processes, and develop institutional knowledge that stays in-house.
With agents handling more operational duties, human teams can focus on innovation, strategy, and relationship building. This gives you breathing room on growth, using the resources you already have. Organizations that embrace digital coworkers are building adaptive capacity for future challenges we can’t even anticipate (yet).
Discover how AI leaders are preparing their organizations for the agent workforce future.
The post Digital coworkers: How AI agents are reshaping enterprise teams appeared first on DataRobot.
Lenovo’s Secret Weapon: Solving AI’s Failure-to-Launch Crisis
For the past two years, the corporate world has been gripped by a singular obsession: Artificial Intelligence (AI). C-suites and boardrooms have mandated “AI-first” strategies, terrified of being left behind in a gold rush not seen since the dawn of […]
The post Lenovo’s Secret Weapon: Solving AI’s Failure-to-Launch Crisis appeared first on TechSpective.
Nike Unveils Project Amplify, the World’s First Powered Footwear System for Running and Walking
Gone Fishin’
RobotWritersAI.com is playing hooky.
We’ll be back Nov. 3, 2025 with fresh news and analysis on the latest in AI-generated writing.
The post Gone Fishin’ appeared first on Robot Writers AI.