
Turning a flaw into a superpower: Researchers redefine how robots move

A research team led by Dr. Lin Cao from the University of Sheffield's School of Electrical and Electronic Engineering has reimagined one of robotics' long-standing flaws as a breakthrough feature—unveiling a new way for soft robots to move, morph, and even "grow" with unprecedented dexterity.

Robot Talk Episode 132 – Collaborating with industrial robots, with Anthony Jules

Claire chatted to Anthony Jules from Robust.AI about their autonomous warehouse robots that work alongside humans.

Anthony Jules is the CEO and co-founder of Robust.AI, a leader in AI-driven warehouse automation. The company’s flagship product, Carter™, is built to work with people in their existing environments, without disrupting their workflows. Anthony has a career spanning over 30 years at the intersection of robotics, AI, and business. An MIT-trained roboticist, he was part of the founding team at Sapient, held leadership roles at Activision, and has built multiple startups, bringing a unique blend of technical depth and operational scale to human-centered automation.

Linear Actuators vs Rotary Actuators: The Core Choice for Humanoid Robot Joints

The robot joint module is the core hardware of humanoid robots, currently mainly divided into two major categories: rotary and linear. In humanoid robot designs, the choice often involves trade-offs based on the application scenario and manufacturing cost.

‘Brain-free’ robots that move in sync are powered entirely by air

A team led by the University of Oxford has developed a new class of soft robots that operate without electronics, motors, or computers—using only air pressure. The study, published in Advanced Materials, shows that these "fluidic robots" can generate complex, rhythmic movements and even automatically synchronize their actions.

Artificial neurons that behave like real brain cells

USC researchers built artificial neurons that replicate real brain processes using ion-based diffusive memristors. These devices emulate how neurons use chemicals to transmit and process signals, offering massive energy and size advantages. The technology may enable brain-like, hardware-based learning systems. It could transform AI into something closer to natural intelligence.

Teaching robots to map large environments

The artificial intelligence-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map, such as of an office cubicle, while estimating the robot’s position in real time. Image courtesy of the researchers.

By Adam Zewe

A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain.

Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot’s onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue robot would need to quickly traverse large areas and process thousands of images to complete its mission.

To overcome this problem, MIT researchers drew on ideas from both recent artificial intelligence vision models and classical computer vision to develop a new system that can process an arbitrary number of images. Their system accurately generates 3D maps of complicated scenes like a crowded office corridor in a matter of seconds. 

The AI-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map while estimating the robot’s position in real-time.

Unlike many other approaches, their technique does not require calibrated cameras or an expert to tune a complex system implementation. The simpler nature of their approach, coupled with the speed and quality of the 3D reconstructions, would make it easier to scale up for real-world applications.

Beyond helping search-and-rescue robots navigate, this method could be used to make extended reality applications for wearable devices like VR headsets or enable industrial robots to quickly find and move goods inside a warehouse.

“For robots to accomplish increasingly complex tasks, they need much more complex map representations of the world around them. But at the same time, we don’t want to make it harder to implement these maps in practice. We’ve shown that it is possible to generate an accurate 3D reconstruction in a matter of seconds with a tool that works out of the box,” says Dominic Maggio, an MIT graduate student and lead author of a paper on this method.

Maggio is joined on the paper by postdoc Hyungtae Lim and senior author Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. The research will be presented at the Conference on Neural Information Processing Systems.

Mapping out a solution

For years, researchers have been grappling with an essential element of robotic navigation called simultaneous localization and mapping (SLAM). In SLAM, a robot recreates a map of its environment while orienting itself within the space.

Traditional optimization methods for this task tend to fail in challenging scenes, or they require the robot’s onboard cameras to be calibrated beforehand. To avoid these pitfalls, researchers train machine-learning models to learn this task from data.

While they are simpler to implement, even the best models can only process about 60 camera images at a time, making them infeasible for applications where a robot needs to move quickly through a varied environment while processing thousands of images.

To solve this problem, the MIT researchers designed a system that generates smaller submaps of the scene instead of the entire map. Their method “glues” these submaps together into one overall 3D reconstruction. The model is still only processing a few images at a time, but the system can recreate larger scenes much faster by stitching smaller submaps together.
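The chunk-and-stitch idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: `make_submap` and `align` are hypothetical stand-ins for the learned reconstruction model and the pairwise alignment step, and the points are fabricated so the sketch runs. The core of the approach is visible in `stitch`: process the image stream in fixed-size chunks, estimate only local, pairwise transforms, and compose them to place every submap in a single global frame.

```python
import numpy as np

def make_submap(chunk):
    # Hypothetical stand-in for a learned model that turns a small
    # batch of images into a local 3D point cloud. We fabricate
    # points here so the sketch is runnable end to end.
    rng = np.random.default_rng(len(chunk))
    return rng.normal(size=(100, 3))

def align(prev_pts, curr_pts):
    # Placeholder pairwise alignment: returns a rigid transform
    # (R, t) mapping the current submap into the previous one's
    # frame. A real system estimates this from overlapping points.
    return np.eye(3), np.zeros(3)

def stitch(images, chunk_size=60):
    """Incrementally build one global map from fixed-size chunks."""
    global_map = []
    pose_R, pose_t = np.eye(3), np.zeros(3)  # current submap's global pose
    prev = None
    for i in range(0, len(images), chunk_size):
        sub = make_submap(images[i:i + chunk_size])
        if prev is not None:
            R, t = align(prev, sub)           # local, pairwise estimate
            pose_t = pose_R @ t + pose_t      # compose along the chain
            pose_R = pose_R @ R
        global_map.append(sub @ pose_R.T + pose_t)  # express globally
        prev = sub
    return np.vstack(global_map)

world = stitch([f"img_{k}" for k in range(300)])
print(world.shape)  # (500, 3): five 100-point submaps stitched together
```

Because each call to the model still sees only one chunk of roughly 60 images, the per-chunk cost stays fixed while the total map can grow arbitrarily large.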

“This seemed like a very simple solution, but when I first tried it, I was surprised that it didn’t work that well,” Maggio says.

Searching for an explanation, he dug into computer vision research papers from the 1980s and 1990s. Through this analysis, Maggio realized that errors in the way the machine-learning models process images made aligning submaps a more complex problem.

Traditional methods align submaps by applying rotations and translations until they line up. But these new models can introduce some ambiguity into the submaps, which makes them harder to align. For instance, a 3D submap of one side of a room might have walls that are slightly bent or stretched. Simply rotating and translating these deformed submaps to align them doesn’t work.

“We need to make sure all the submaps are deformed in a consistent way so we can align them well with each other,” Carlone explains.

A more flexible approach

Borrowing ideas from classical computer vision, the researchers developed a more flexible, mathematical technique that can represent all the deformations in these submaps. By applying mathematical transformations to each submap, this more flexible method can align them in a way that addresses the ambiguity.
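To see why rigid alignment breaks down and how a richer transform helps, consider a submap distorted by a uniform stretch. A classical closed-form similarity alignment (Umeyama, 1991), which estimates a scale factor alongside the rotation and translation, can absorb that stretch, whereas a purely rigid fit cannot. This is only an illustration of the principle: the paper's technique handles more general deformations than a single scale factor.

```python
import numpy as np

def umeyama(src, dst, with_scale=True):
    """Least-squares transform s, R, t minimizing ||dst - (s R src + t)||.

    Classical closed-form similarity alignment (Umeyama, 1991).
    With with_scale=False it reduces to rigid (rotation + translation).
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                          # avoid reflections
    R = U @ S @ Vt
    s = (D * S.diagonal()).sum() / src_c.var(0).sum() if with_scale else 1.0
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
stretched = 1.3 * pts  # submap deformed by a uniform stretch

# Rigid alignment (scale fixed at 1) cannot undo the stretch...
s1, R1, t1 = umeyama(stretched, pts, with_scale=False)
err_rigid = np.abs(s1 * stretched @ R1.T + t1 - pts).max()

# ...while the similarity fit recovers the deformation exactly.
s2, R2, t2 = umeyama(stretched, pts, with_scale=True)
err_sim = np.abs(s2 * stretched @ R2.T + t2 - pts).max()
```

The rigid fit leaves a residual proportional to the stretch, while the similarity fit drives the error to numerical zero; applying a consistent family of such transformations to every submap is what lets them line up cleanly.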

Based on input images, the system outputs a 3D reconstruction of the scene and estimates of the camera locations, which the robot would use to localize itself in the space.

“Once Dominic had the intuition to bridge these two worlds — learning-based approaches and traditional optimization methods — the implementation was fairly straightforward,” Carlone says. “Coming up with something this effective and simple has potential for a lot of applications.”

Their system was faster and produced lower reconstruction error than other methods, without requiring special cameras or additional tools to process data. The researchers generated close-to-real-time 3D reconstructions of complex scenes like the inside of the MIT Chapel using only short videos captured on a cell phone.

The average error in these 3D reconstructions was less than 5 centimeters.

In the future, the researchers want to make their method more reliable for especially complicated scenes and work toward implementing it on real robots in challenging settings.

“Knowing about traditional geometry pays off. If you understand deeply what is going on in the model, you can get much better results and make things much more scalable,” Carlone says.

This work is supported, in part, by the U.S. National Science Foundation, U.S. Office of Naval Research, and the National Research Foundation of Korea. Carlone, currently on sabbatical as an Amazon Scholar, completed this work before he joined Amazon.

The value of physical intelligence: How researchers are working to safely advance capabilities of humanoid robots

You may not remember it, but odds are you took a few tumbles during your toddler era. You weren't alone. Falling, after all, is a natural consequence of learning to crawl, walk, climb and jump. Our balance, coordination and motor skills are developing throughout early childhood.

Generations in Dialogue: Bridging Perspectives in AI – a new podcast from AAAI

The Association for the Advancement of Artificial Intelligence (AAAI) is thrilled to announce the launch of a new podcast series, “Generations in Dialogue: Bridging Perspectives in AI.” Through this series, we aim to represent diverse perspectives across generations, fostering rich conversations about the ever-changing landscape of AI. “Generations in Dialogue” aims to feature unique outlooks from AI experts, practitioners, and enthusiasts of all backgrounds. Each episode will cover topics ranging from the history of AI and the latest research to the future ethical implications of its newest technologies. We will delve into how generational experiences shape views on AI, exploring the challenges, opportunities, and ethical considerations that come with the advancement of this transformative technology.

We welcome you to join us on this exciting journey. Whether you are an AI professional, researcher, or student, we hope this platform will be invaluable in discovering unique perspectives on AI with a global audience, bridging existing generational gaps in the understanding of AI and expanding your network beyond your local communities.

About the host

Ella Lan, a member of the AAAI Student Committee, is the host of “Generations in Dialogue: Bridging Perspectives in AI.” She is passionate about bringing together voices across career stages to explore the evolving landscape of artificial intelligence. Ella is a student at Stanford University tentatively studying Computer Science and Psychology, and she enjoys creating spaces where technical innovation intersects with ethical reflection, human values, and societal impact. Her interests span education, healthcare, and AI ethics, with a focus on building inclusive, interdisciplinary conversations that shape the future of responsible AI.

You can find out more about the series here, and you can also watch the first episodes on the AAAI YouTube channel.
