
Do you trust me? A framework for making networks of robots and vehicles safer

From birds flying in formation to students working on a group project, the functioning of a group requires not only coordination and communication but also trust—each member must be confident in the others. The same is true for networks of connected machines, which are rapidly gaining momentum in our modern world, from self-driving rideshare fleets to smart power grids.

Air-powered artificial muscles could help robots lift 100 times their weight

Researchers at Arizona State University are developing bio-inspired robotic "muscles" that will enable robots to operate in boiling water, survive abrasive surfaces, bypass impediments that keep their motorized counterparts benched, and still lift up to 100 times their own weight. The new heavyweight champions of robotics will be lighter, smaller, and disconnected from a power source.

How to design and run an agent in rehearsal – before building it

Most AI agents fail because of a gap between design intent and production reality. Developers often spend days building only to find that escalation logic or tool calls fail in the wild, forcing a total restart. DataRobot Agent Assist closes this gap. It is a natural language CLI tool that lets you design, simulate, and validate your agent’s behavior in “rehearsal mode” before you write any implementation code. This blog will show you how to execute the full agent lifecycle from logic design to deployment within a single terminal session, saving you extra steps, rework, and time.

How to quickly develop and ship an agent from a CLI

DataRobot’s Agent Assist is a CLI tool built for designing, building, simulating, and shipping production AI agents. You run it from your terminal, describe in natural language what you want to build, and it guides the full journey from idea to deployed agent, without switching contexts, tools, or environments.

It works standalone and integrates with the DataRobot Agent Workforce Platform for deployment, governance, and monitoring. Whether you’re a solo developer prototyping a new agent or an enterprise team shipping to production, the workflow is the same: design, simulate, build, deploy.

Users are going from idea to a running agent quickly, reducing the scaffolding and setup time from days to minutes.

Why not just use a general-purpose coding agent?

General AI coding agents are built for breadth. That breadth is their strength, but it is exactly why they fall short for production AI agents.

Agent Assist was built for one thing: AI agents. That focus shapes every part of the tool. The design conversation, the spec format, the rehearsal system, the scaffolding, and the deployment are all purpose-built for how agents actually work. It understands tool definitions natively. It knows what a production-grade agent needs structurally before you tell it. It can simulate behavior because it was designed to think about agents end to end.

A comparison of DataRobot’s Agent Assist to generic AI coding tools

The agent building journey: from conversation to production

Step 1: Start designing your agent with a conversation

You open your terminal and run dr assist. No project setup, no config files, no templates to fill out. You’ll immediately get a prompt asking what you want to build.

Agent Assist asks follow-up questions, not only technical ones, but business ones too. What systems does it need access to? What does a good escalation look like versus an unnecessary one? How should it handle a frustrated customer differently from someone with a simple question?

Guided questions and prompts help build a complete picture of the logic, not just collect a list of requirements. You can keep refining the agent’s logic and behavior in the same conversation: add a capability, change the escalation rules, adjust the tone. The context carries forward and everything updates automatically.

For developers who want fine-grained control, Agent Assist also provides configuration options for model selection, tool definitions, authentication setup, and integration configuration, all generated directly from the design conversation.

When the picture is complete, Agent Assist generates a full specification: system prompt, model selection, tool definitions, authentication setup, and integration configuration. It is something a developer can build from and a business stakeholder can actually review before any code exists. From there, that spec becomes the input to the next step: running your agent in rehearsal mode, before a single line of implementation code is written.

Step 2: Watch your agent run before you build it

This is where Agent Assist does something no other tool does.

Before writing any implementation, it runs your agent in rehearsal mode. You describe a scenario and it executes tool calls against your actual requirements, showing you exactly how the agent would behave. You see every tool that fires, every API call that gets made, every decision the agent takes.

If the escalation logic is wrong, you catch it here. If a tool returns data in an unexpected format, you see it now instead of in production. You fix it in the conversation and run it again.

You validate the logic, the integrations, and the business rules all at once, and only move to code when the behavior is exactly what you want.
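The rehearsal idea generalizes beyond this one tool. A minimal sketch of the pattern, in Python with invented tool and rule names (this is not Agent Assist's implementation), runs the agent's decision logic against mocked tools so behavior can be checked before any real integration exists:

```python
# Illustrative sketch of "rehearsal mode": exercise agent logic against
# mocked tools before any real integration exists. All names here
# (lookup_order, agent_turn, the escalation rule) are hypothetical.

def lookup_order(order_id):
    # Mocked tool: returns a canned response instead of calling a real API.
    return {"order_id": order_id, "status": "delayed", "days_late": 6}

def agent_turn(message, tools):
    """Toy decision logic: escalate only if the order is badly delayed."""
    order = tools["lookup_order"](message["order_id"])
    if order["days_late"] > 5:
        return {"action": "escalate", "reason": f"{order['days_late']} days late"}
    return {"action": "reply", "text": f"Status: {order['status']}"}

# Rehearse a scenario and inspect every decision before writing real code.
result = agent_turn({"order_id": "A-100"}, {"lookup_order": lookup_order})
print(result)  # {'action': 'escalate', 'reason': '6 days late'}
```

Once the escalation rule behaves correctly against the mocks, the mocked tools can be swapped for real integrations with the decision logic unchanged.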

Step 3: The code that comes out is already production-ready

When you move to code generation, Agent Assist does not hand you a starting point. It hands you a foundation.

The agent you designed and simulated comes scaffolded with everything it needs to run in production, including OAuth authentication (no shared API keys), modular MCP server components, deployment configuration, monitoring, and testing frameworks. Out of the box, Agent Assist handles infrastructure that normally takes days to piece together.

The code is clean, documented, and follows standard patterns. You can take it and continue building in your preferred environment. But from the very first file, it is something you could show to a security team or hand off to ops without a disclaimer.

Step 4: Deploy from the same terminal you built in

When you are ready to ship, you stay in the same workflow. Agent Assist knows your environment, the models available to you, and what a valid deployment requires. It validates the configuration before touching anything.

One command. Any environment: on-prem, edge, cloud, or hybrid. Validated against your target environment’s security and model constraints. The same agent that helped you design and simulate also knows how to ship it.

What teams are saying about Agent Assist

“The hardest part of AI agent development is requirement definition, specifically bridging the gap between technical teams and domain experts. Agent Assist solves this interactively. A domain user can input a rough idea, and the tool actively guides them to flesh out the missing details. Because domain experts can immediately test and validate the outputs themselves, Agent Assist dramatically shortens the time from requirement scoping to actual agent implementation.”

The road ahead for Agent Assist

AI agents are becoming core business infrastructure, not experiments, and the tooling around them needs to catch up. The next phase of Agent Assist goes deeper on the parts that matter most once agents are running in production: richer tracing and evaluation so you can understand what your agent is actually doing, local experimentation so you can test changes without touching a live environment, and tighter integration with the broader ecosystem of tools your agents work with. The goal stays the same: less time debugging, more time shipping.

The hard part was never writing the code. It was everything around it: knowing what to build, validating it before it touched production, and trusting that what shipped would keep working. Agent Assist is built around that reality, and that is the direction it will keep moving in.

Get started with Agent Assist in 3 steps

Ready to ship your first production agent? Here’s all you need:

1.  Install the toolchain:

brew install datarobot-oss/taps/dr-cli uv pulumi/tap/pulumi go-task node git python

2.  Install Agent Assist:

dr plugin install assist

3.  Launch:

dr assist

Full documentation, examples, and advanced configuration are in the Agent Assist documentation.

The post How to design and run an agent in rehearsal – before building it appeared first on DataRobot.

Back to school: robots learn from factory workers

By Anthony King

What if training a robot to handle dirty, dangerous work on the factory floor was as simple as showing it how? Czech startup RoboTwin is doing exactly that, helping factory workers teach robots new skills by demonstration.

Instead of writing complex code, workers perform the job once and RoboTwin’s technology turns those movements into a robot programme – opening the door to automation for smaller manufacturers.

Founded in Prague in 2021, RoboTwin builds handheld devices and no-code software that capture human movements and translate them into instructions for industrial robots. The aim is to make automation faster, simpler and more accessible to manufacturers that do not have specialist robotics programmers.

“The robot basically copies the human demonstration,” said Megi Mejdrechová, RoboTwin’s co-founder and chief technology officer. “People with no coding skills can transfer their know-how and experience to robots.”

Mejdrechová, a mechanical engineer trained at the Czech Technical University in Prague, developed the core technology behind RoboTwin during her work in robotics research and industry. Her experience in robot control using AI and computer vision inspired her to create something practical for European manufacturers.

“Czech engineering is quite traditional and focused on scientific papers,” said Mejdrechová. “Visits to Singapore and Canada and other work experiences led me to focus on making a product that people could use.”

Getting started

In 2021, Mejdrechová entered a jump‑starter programme and won first prize in the manufacturing category. “We saw then that there was potential for the technology,” she said.

This encouraged her to start RoboTwin with colleagues Ladislav Dvořák and David Polák, who shared her enthusiasm for human‑robot partnerships. Mejdrechová received backing from Women TechEU, an EU scheme supporting women founders of deep‑tech startups. 

The RoboTwin team shared their results on the Horizon Results Platform, an online showcase for EU‑funded innovations, which led to an invitation to the EU’s Empowering Start‑ups and SMEs initiative. 

This helped fund their trip to Hannover Messe 2025, a major global manufacturing trade fair, and opened doors to new business contacts and deals.

Through a mix of public and private investment, RoboTwin has secured funding to refine its technology and expand to manufacturers in Central Europe, the Netherlands, Mexico and Canada.

In 2025, Mejdrechová was named in Forbes Czechia’s 30 Under 30 list for her work in making the training of robots accessible to more manufacturers.

Schooling robots

At the heart of RoboTwin’s system is a handheld device equipped with sensors. When a worker performs a task, for example spray painting a metal component, the system records the movement and converts it into a robot programme that can be reused in production.

Instead of requiring a specialist engineer to manually code every movement, the system captures the worker’s natural technique and translates it into precise instructions a robot can follow.
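The demonstration-to-program idea can be sketched generically. The snippet below is an illustration only, with invented names, and is not RoboTwin's software: timestamped poses captured by a handheld device are downsampled into waypoints a robot controller could replay.

```python
# Generic demonstration-to-program sketch (illustrative, not RoboTwin's
# actual pipeline): record timestamped tool poses, then downsample them
# into a list of waypoints for a robot controller.

def record_demo(samples):
    """samples: list of (t, x, y, z) tuples from a handheld tracking device."""
    return sorted(samples)  # order by timestamp

def to_program(demo, step=2):
    """Keep every `step`-th pose as a waypoint the robot can move through."""
    return [{"move_to": (x, y, z)} for (t, x, y, z) in demo[::step]]

# A straight-line demonstration of 10 poses sampled at 10 Hz.
demo = record_demo([(i * 0.1, i * 1.0, 0.0, 0.5) for i in range(10)])
program = to_program(demo)
print(len(program), "waypoints")  # 5 waypoints
```

A production system would additionally smooth the trajectory and respect the robot's joint limits; the point here is only that the program is derived from the recorded motion rather than written by hand.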

“We started with jobs that are ugly, dirty and unhealthy for workers to do manually,” said Mejdrechová.

Thanks to the no‑code system, the process can be completed in a few steps and typically takes about a minute. For factories producing small batches or frequently changing products, this speed can make automation far more practical than traditional robot programming.

Making automation easy for all

Robotics in manufacturing is not new. The automotive industry already leads the way with about 23 000 new robots added to production lines in 2024. But while large companies can invest heavily, automation remains challenging and expensive for many SMEs.

This is where RoboTwin lends a hand. It has assisted firms in the surface‑treatment industry – companies that powder coat, paint or polish metal or plastic parts for car factories.

“Even if the batch of products you are producing is small, with our approach you can create a robot programme fast and easily,” said Mejdrechová.

For example, RoboTwin has assisted RobPainting, a Dutch company that robotises painting for SMEs to improve quality, reduce costs and minimise rework. 

“With our device we can teach the robot precise trajectories that are needed for a product and about its surroundings,” said David Vobr, a robotics specialist at RoboTwin who often assists customers.

Dangerous jobs

RoboTwin’s system can work with a wide range of industrial robots, including collaborative robots designed to operate safely alongside humans.

“We can have manipulators or painting robots and also collaborative robots, which can work alongside humans because they have sensors that tell them when to stop moving if someone could get hurt,” said Vobr.

RoboTwin initially focused on surface treatment in manufacturing, where tasks such as spray painting require workers to wear protective clothing and perform repetitive movements.

“These jobs are difficult to automate because there is often a lot of hidden movement involved,” said Mejdrechová, referring to small adjustments and gestures that workers make instinctively.

The sector also faces labour shortages.

“People are often not happy doing these things and there is a lack of workers willing to take these jobs. So there is a high demand for automation.”

Customers report that many robot programmes can now be created without shutting down a production line.

RoboTwin has already worked with a number of companies, including Surfin Technology, a Czech company specialising in robotic coating solutions, and Innovative Finishing Solutions in Canada, which brings its technology to North American customers.

Scaling up

EU support for RoboTwin is ongoing. A €2.3 million grant from the European Innovation Council secured in 2025 will help accelerate product development and market expansion.

The funding will support the next generation of RoboTwin’s technology. Instead of relying solely on manual demonstrations, the system will increasingly use stored experience and data to generate robot programmes automatically based on the shape of an object.

The company says this could make automation viable for many manufacturing tasks that were previously too complex or costly to automate.

For Europe, technologies like RoboTwin could play an important role in strengthening digital sovereignty and smart industrial innovation. They can help smaller manufacturers adopt advanced robotics without needing specialised programming expertise.

As factories become more flexible and data-driven, the ability to quickly teach robots new tasks may prove increasingly valuable.

Mejdrechová believes this shift will help bring automation within reach of a much wider range of companies.

“Our goal is to make robot training something that workers can do themselves,” she said. “If we succeed, automation will no longer be limited to large factories with specialised engineers. It will become a tool that any manufacturer can use.”

Research in this article was funded by the EU’s Horizon Programme. The views of the interviewees don’t necessarily reflect those of the European Commission. If you liked this article, please consider sharing it on social media.


This article was originally published in Horizon, the EU Research and Innovation magazine.

Researchers build a robotic swarm with no electronics, no batteries and no brains

A LEGO brick is not smart. It doesn't compute. It doesn't plug in. It just fits. A team of Georgia Tech researchers has applied that logic to robotics. Bolei Deng, an assistant professor in Georgia Tech's Daniel Guggenheim School of Aerospace Engineering, and Xinyi Yang, an aerospace engineering Ph.D. student, build swarms of tiny robotic particles that latch, release, and reorganize without a single electronic component. No sensors, no processors, and no code.

Combining the robot operating system with LLMs for natural-language control

Over the past few decades, robotics researchers have developed a wide range of increasingly advanced robots that can autonomously complete various real-world tasks. To be successfully deployed in real-world settings, such as in public spaces, homes and office environments, these robots should be able to make sense of instructions provided by human users and adapt their actions accordingly.

SAP NLP Search Solutions

SAP NLP Search Solutions: Adding Intelligent Search to Your SAP Environment

The Data Access Problem Most SAP Shops Have Stopped Talking About

The data is in SAP. Everyone knows it is there. But getting to it requires knowing which transaction code to use, which fields to filter, and often which table names to query — knowledge that lives in a small group of power users and SAP consultants, not in the operations team, the supply chain planner, or the plant manager who actually needs it.

The result is a predictable pattern: analysts spend hours pulling reports. Decisions wait for data. The people closest to the operational problem rely on spreadsheet exports that are already 24 hours stale by the time they reach the right desk.

SAP NLP search solves this at the access layer. It lets users ask questions in plain language and get answers drawn from live SAP data — without transaction codes, without filter configurations, and without a power user in the loop.

USM Business Systems is a CMMi Level 3, Oracle Gold Partner Artificial Intelligence (AI) and IT services firm based in Ashburn, VA. We design and deploy SAP NLP search solutions for manufacturers, pharma companies, logistics operators, and other enterprises where the gap between SAP data and operational decision-making is costing time and accuracy.

What SAP NLP Search Actually Is

SAP NLP search is a natural language interface layered on top of SAP data. A user types or speaks a question — ‘Which suppliers are running more than 5 days late on open POs this week?’ or ‘What is the current inventory for material X across all plants?’ — and the system retrieves the relevant SAP data and returns a plain-language answer or a structured result.

The technical architecture underneath involves three components working together:

  • A retrieval layer that connects to SAP Datasphere views, HANA models, or structured data extracts and fetches the records relevant to the query
  • An LLM (large language model) that interprets the natural language question, reasons about the retrieved data, and formulates a response the user can act on
  • A user interface layer, typically embedded in SAP Fiori or a standalone web application, that surfaces the interaction in a format the team already uses

This architecture is known as retrieval-augmented generation (RAG). It is the standard pattern for enterprise AI search because it grounds the LLM’s responses in your actual data rather than its training knowledge — which means the answers are accurate to your environment, not generic.
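As a rough illustration of the RAG pattern (not any vendor's implementation; the retriever and `call_llm` below are toy placeholders), the flow is: retrieve the records relevant to the question, refuse if nothing is found, and otherwise answer from the retrieved context:

```python
# Minimal retrieval-augmented generation (RAG) loop. The retriever and
# call_llm are placeholders: in a real SAP deployment the retriever would
# query a Datasphere view or HANA model, and call_llm would hit your
# chosen model endpoint.

def retrieve(question, records):
    """Toy retrieval: keep records that mention any query term."""
    terms = question.lower().split()
    return [r for r in records if any(t in str(r).lower() for t in terms)]

def call_llm(prompt):
    """Placeholder for the model call; echoes the grounded prompt."""
    return f"[answer grounded in]\n{prompt}"

def answer(question, records):
    context = retrieve(question, records)
    if not context:  # grounding guard: no retrieved data, no answer
        return "data not available"
    return call_llm(f"Context: {context}\nQuestion: {question}")

records = [{"material": "X", "plant": "0001", "stock": 120}]
print(answer("stock of material X", records))
```

The guard is what grounds the system: the model only ever sees retrieved records, and a question outside the configured data scope gets an explicit refusal instead of a fabricated answer.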

Where SAP NLP Search Delivers Measurable Value

  • Supply Chain and Procurement

Supply chain teams field constant questions about supplier performance, open purchase order status, inventory positions, and demand deviations. In a typical SAP environment without NLP search, each of these questions requires a different transaction, a different filter configuration, and often a trip to the analyst team.

With NLP search on SAP Ariba and S/4HANA data, a supply chain planner asks the question directly and gets the answer in under 30 seconds. Forrester research found that enterprises deploying AI-assisted data access in supply chain operations reduced average data retrieval time by 68% within 90 days of deployment.

  • Manufacturing Operations

Plant managers and production supervisors need fast access to quality data, work order status, equipment maintenance history, and production schedule adherence. In SAP PP and SAP PM, this data exists but requires navigation through multiple transaction codes.

NLP search allows a plant manager to ask ‘What is the current first-pass yield for line 3 this week compared to last week?’ and get an answer pulled from SAP QM data — in the moment, on a tablet on the shop floor. The decision that used to wait for an end-of-day report happens in real time.

  • Finance and Compliance

Finance teams use SAP NLP search to answer variance questions, retrieve specific transaction histories, and surface exceptions without constructing custom reports. Compliance teams in regulated environments use it to pull audit-relevant data on demand — a capability that previously required either a SAP power user or a scheduled report.

  • Procurement and Sourcing

Buyers and category managers use NLP search to surface contract terms, pricing history, and supplier qualification status from SAP Ariba without navigating the full Ariba interface. A buyer preparing for a supplier negotiation asks what the last five purchase prices were for a given material category and gets the answer directly from SAP contract and PO data.

How does NLP search on SAP handle questions the system cannot answer?

A well-designed SAP NLP search system will indicate when a query falls outside its data coverage rather than generating a fabricated answer. This is controlled by the retrieval layer — if the relevant data is not in the configured Datasphere view or HANA model, the system returns a ‘data not available’ response. Configuration of the retrieval layer’s scope is a key design decision during deployment.

Can SAP NLP search be used by non-technical users without SAP training?

Yes — that is the primary value proposition. Users who have never navigated an SAP transaction code can access operational data through plain language questions. The system requires user management and access controls, but the operational interface requires no SAP knowledge. Teams report adoption rates of 80%+ within 30 days when the deployment covers data that users actively need.

What an SAP NLP Search Deployment Involves

  • Phase 1: Data Domain Scoping (Weeks 1-2)

Define which SAP data the search system will cover. This is not ‘all of SAP’ — it is a specific set of data domains aligned to the team or use case being served first. Supply chain planner access to procurement and inventory data is a typical first domain. Finance team access to transaction history and variance data is another common starting point.

  • Phase 2: Data Readiness (Weeks 2-4)

Build or validate the Datasphere views or HANA models that the retrieval layer will query. This phase surfaces master data quality issues that need resolution before the NLP layer can produce reliable answers. Budget 2-4 weeks depending on the cleanliness of the target data domain.

  • Phase 3: Retrieval Layer Build (Weeks 4-6)

Configure the retrieval system that connects user queries to the relevant SAP data. This includes the embedding model that converts queries and data into a format the LLM can reason about, the vector search or structured retrieval logic, and the data access controls that ensure users only see data they are authorized to access.
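A minimal sketch of that design point, using an invented role and domain model rather than SAP's actual authorization objects, filters records by role before scoring them against the query:

```python
# Sketch of a retrieval layer with role-based access control applied
# before search, so users can only match records they are authorized to
# see. Roles, domains, and records are illustrative, not an SAP model.

RECORDS = [
    {"doc": "PO 4500001 supplier ACME 5 days late", "domain": "procurement"},
    {"doc": "GL variance account 400100 +12%",      "domain": "finance"},
]
ROLE_DOMAINS = {"planner": {"procurement"}, "controller": {"finance"}}

def retrieve(query, role, k=1):
    # Authorization filter runs first; unauthorized records never enter search.
    allowed = [r for r in RECORDS if r["domain"] in ROLE_DOMAINS[role]]
    # Toy relevance score: count of query terms present in the document.
    scored = sorted(allowed,
                    key=lambda r: -sum(t in r["doc"].lower()
                                       for t in query.lower().split()))
    return [r["doc"] for r in scored[:k]]

print(retrieve("supplier late", "planner"))     # procurement record only
print(retrieve("supplier late", "controller"))  # finance only, no PO data
```

Filtering before retrieval, rather than filtering the generated answer afterwards, is what prevents unauthorized data from ever reaching the LLM's context.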

  • Phase 4: LLM Integration and Response Configuration (Weeks 6-8)

Connect the retrieval layer to the LLM, configure the response format, and build the prompt structure that guides the model to produce useful, accurate answers rather than general responses. Test on 50-100 representative queries across the target data domain. Tune accuracy.

  • Phase 5: UI Integration and Rollout (Weeks 8-10)

Deploy the interface — typically a Fiori tile, a Teams integration, or a standalone web application — and roll out to the target user group. Collect feedback on query coverage gaps and expand the data domain in the next iteration.

A first-domain deployment typically reaches productive use in 10-12 weeks. Enterprises that have invested in SAP Datasphere can move faster because the data layer is already structured.

What Separates Good SAP NLP Search From Poor Implementations?

  • Scoped retrieval, not open-ended LLM access. The model must be grounded in your SAP data, not relying on its training knowledge. RAG architecture is the standard. Implementations without a proper retrieval layer produce hallucinated data.
  • SAP data structure knowledge. The engineers building the retrieval layer need to understand SAP table relationships, master data objects, and SAP Datasphere modeling — not just LLM APIs. The two skill sets are both required.
  • Access control from the start. SAP data carries access restrictions for good reasons. An NLP search system that allows any user to query any data field is a governance problem. Role-based data access needs to be designed into the retrieval layer from the beginning.
  • Iteration planning. No first deployment covers every query the users will try. The difference between a successful deployment and an abandoned one is whether the team has a process for expanding data coverage based on user feedback.

Why USM Business Systems?

USM Business Systems is a CMMi Level 3, Oracle Gold Partner AI and IT services firm headquartered in Ashburn, VA. With 1,000+ engineers, 2,000+ delivered applications, and 27 years of enterprise delivery experience, USM specializes in AI implementation for supply chain, pharma, manufacturing, and SAP environments. Our SAP AI practice places specialized engineers inside enterprise programs within days — on contract, as dedicated delivery pods, or on a project basis.

Ready to put SAP AI into production? Book a 30-minute scoping call with our SAP AI team.


FAQ

  • Does SAP NLP search require SAP Datasphere, or can it work with HANA directly?

Both work. SAP Datasphere is preferred for new deployments because it provides a governed, semantically structured data layer that is well-suited to retrieval-augmented generation. HANA views and OData APIs can serve as the retrieval source for organizations that have not yet adopted Datasphere, though more custom engineering is required.

  • Which LLM works best for SAP NLP search?

The answer depends on your governance requirements. Azure OpenAI (GPT-4) is the most common choice for enterprises with existing Microsoft agreements and data residency requirements. Anthropic Claude and AWS Bedrock models are increasingly common in regulated industries that require stronger content controls. The LLM selection is less important than the retrieval layer architecture.

  • How is accuracy measured for SAP NLP search?

The primary accuracy metric is the rate at which the system returns a correct answer to queries tested against known SAP data. A second metric is the rate of ‘I cannot answer this’ responses versus hallucinated answers — the former is acceptable; the latter is not. Measure both during the testing phase and set minimum thresholds before production rollout.
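These two metrics are straightforward to compute from a labeled test run. The sketch below assumes each test case has been graded as correct, refused, or hallucinated:

```python
# Compute the two evaluation metrics described above over a graded test
# set. Grading labels ("correct", "refused", "hallucinated") and the
# thresholds are illustrative choices, not a standard.

def score(results):
    n = len(results)
    return {
        "accuracy": sum(r == "correct" for r in results) / n,
        "refusal_rate": sum(r == "refused" for r in results) / n,
        "hallucination_rate": sum(r == "hallucinated" for r in results) / n,
    }

results = ["correct"] * 90 + ["refused"] * 8 + ["hallucinated"] * 2
m = score(results)
print(m)
# Gate rollout on both: refusals are acceptable, hallucinations are not.
assert m["hallucination_rate"] <= 0.02
```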

  • Can SAP NLP search write data back to SAP, or is it read-only?

Most initial deployments are read-only — the system retrieves and presents data but does not modify SAP records. Write-back capability, where the system can initiate an SAP workflow or update a field based on a user instruction, is the next level and requires agentic architecture rather than pure NLP search.

  • What user adoption approach works best for SAP NLP search?

Start with the team that has the most acute data access pain and the most frequent need to query SAP. Supply chain planners, procurement buyers, and plant managers are typically the highest-value early adopters. Get that team productive, collect their feedback on query gaps, and use their results as the business case for expanding to the next team.

Control framework lets flexible robots move in tight spaces with less math

We often imagine robots as machines with rigid arms, rotating joints, and targeted mechanical movements. The famous Optimus Prime and Bumblebee from the "Transformers" movies appear to fit these criteria. However, such robots would be unable to function in environments that are too confined and cramped.

IoT SIM Cards Become Critical Infrastructure for Industrial Automation, Robotics, and Drone Operations

As manufacturing, logistics, energy, and infrastructure sectors accelerate digital transformation initiatives, cellular-connected devices are replacing isolated machines with continuously communicating systems capable of real-time coordination and remote management.

DNA robots could deliver drugs and hunt viruses inside your body

DNA robots are emerging as tiny programmable machines that could one day deliver drugs, hunt viruses, and build molecular-scale devices. By borrowing ideas from traditional robotics and combining them with DNA folding techniques, scientists are creating structures that can move and act with precision. These robots can be guided using chemical reactions or external signals like light and magnetic fields.

Resource-sharing boosts robotic resilience

The Mori3 modular origami robot. Image credit: EPFL. Reproduced under CC-BY-SA.

By Celia Luterbacher

If the goal of a robot is to perform a function, then minimizing the possibility of failure is a top priority when it comes to robotic design. But this minimization is at odds with the robotic raison d’être: systems with multiple units, or agents, can perform more diverse functions, but they also have more parts that can potentially fail.

Researchers led by Jamie Paik, head of the Reconfigurable Robotics Laboratory (RRL) in EPFL’s School of Engineering, have not only circumvented this problem, but flipped it: they have designed a modular robot that actually lowers its odds of failure by sharing resources among its individual agents.

“For the first time, we have found a way to reverse the trend of increasing odds of failure with increasing function,” Paik explains. “We introduce local resource sharing as a new paradigm in robotics, reducing the failure rate with a larger number of modules.”

In a paper published in Science Robotics, the team showed how exploiting redundant resources and sharing them locally enabled a modular origami robot to successfully navigate a complex terrain, even when one module was completely deprived of power, sensing, and wireless communication.

Sharing is caring

The RRL team took inspiration for their innovation from nature, where the problem of failure is often solved collectively. Birds share local sensing information through flocking behavior, some trees communicate threats to neighbors using airborne signals, and cells continuously transport nutrients across their membranes so that the death of any individual doesn’t significantly impact the overall organism.

Modular robots, which are composed of multiple units that connect to form a complete system, are analogous to multicellular or collective organisms, but until now, their design has been a source of vulnerability: the failure of one module often disables some, if not all, of the robot’s ability to perform tasks. Some modular robots get around this problem with built-in backup resources or self-reconfiguration abilities, but these approaches usually don’t completely restore functionality.

For their study, the RRL team used something called hyper-redundancy: the sharing of all critical power, communication, and sensing resources across all modules, without any change to the robot’s physical structure.

“We found that sharing just one or two resources was not enough: if each resource had an equal chance of failure, system reliability would continue to drop with an increasing number of agents. But when all resources were shared, this trend was reversed,” Paik says.
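A toy probability model (our illustration, not the paper's analysis) shows why such a reversal is plausible: without sharing, every copy of every resource in every module must work, so reliability decays with module count, while with full sharing only one working copy of each resource type is needed anywhere in the collective, so reliability grows with module count.

```python
# Toy reliability model (illustrative only, not the Science Robotics
# analysis). p = probability each module's copy of a resource works,
# n = number of modules, k = number of resource types per module.

def no_sharing(p, n, k):
    # Every copy of every resource in every module must work.
    return (p ** k) ** n

def full_sharing(p, n, k):
    # At least one working copy of each resource type, anywhere in the group.
    return (1 - (1 - p) ** n) ** k

p, k = 0.95, 3
for n in (1, 2, 4, 8):
    print(n, round(no_sharing(p, n, k), 4), round(full_sharing(p, n, k), 4))
# Without sharing, reliability falls as n grows; with full sharing it rises.
```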

In a locomotion task experiment with the Mori3 robot, which is composed of four triangular modules, the team experimented with cutting battery power, wireless communication, and sensing to the central module. Normally, this ‘dead’ central module would block the articulation and movement of the other three, but thanks to hyper-redundancy, the neighboring modules fully compensated for its lack of resources. This allowed the Mori3 to successfully ‘walk’ toward a barrier and contort itself effectively to pass underneath it.

“Essentially, our methodology allowed us to ‘revive’ a dead module in a collective and bring it back to full functionality. Our local resource-sharing framework therefore has the potential to support highly adaptive robots that can operate with unprecedented reliability, finally resolving the reliability-adaptability conflict,” summarizes RRL researcher and first author Kevin Holdcroft.

The researchers say that future work could focus on applying their resource sharing framework to more complex systems with increasing numbers of agents. In particular, the same concept could be extended to robotic swarms, with hardware adaptations that allow swarm members to dock to each other for energy and information transfer.

References

Holdcroft, K., Bolotnikova, A., Monforte, A.J., and Paik, J. Scalable robot collective resilience by sharing resources. Science Robotics (2026).
