Resource-sharing boosts robotic resilience
The Mori3 modular origami robot. Image credit: EPFL. Reproduced under CC-BY-SA.
By Celia Luterbacher
If the goal of a robot is to perform a function, then minimizing the possibility of failure is a top priority in robotic design. But this minimization is at odds with the robotic raison d’être: systems with multiple units, or agents, can perform more diverse functions, but they also have more parts that can potentially fail.
Researchers led by Jamie Paik, head of the Reconfigurable Robotics Laboratory (RRL) in EPFL’s School of Engineering, have not only circumvented this problem, but flipped it: they have designed a modular robot that actually lowers its odds of failure by sharing resources among its individual agents.
“For the first time, we have found a way to reverse the trend of increasing odds of failure with increasing function,” Paik explains. “We introduce local resource sharing as a new paradigm in robotics, reducing the failure rate with a larger number of modules.”
In a paper published in Science Robotics, the team showed how exploiting redundant resources and sharing them locally enabled a modular origami robot to successfully navigate a complex terrain, even when one module was completely deprived of power, sensing, and wireless communication.
Sharing is caring
The RRL team took inspiration for their innovation from nature, where the problem of failure is often solved collectively. Birds share local sensing information through flocking behavior, some trees communicate threats to neighbors using airborne signals, and cells continuously transport nutrients across their membranes so that the death of any individual doesn’t significantly impact the overall organism.
Modular robots, which are composed of multiple units that connect to form a complete system, are analogous to multicellular or collective organisms, but until now, their design has been a source of vulnerability: the failure of one module often disables some, if not all, of the robot’s functions. Some modular robots get around this problem with built-in backup resources or self-reconfiguration abilities, but these approaches usually don’t completely restore functionality.
For their study, the RRL team used something called hyper-redundancy: the sharing of all critical power, communication, and sensing resources across all modules, without any change to the robot’s physical structure.
“We found that sharing just one or two resources was not enough: if each resource had an equal chance of failure, system reliability would continue to drop with an increasing number of agents. But when all resources were shared, this trend was reversed,” Paik says.
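Paik’s claim that sharing only one or two resources still lets reliability fall with scale, while sharing everything reverses the trend, can be sketched with a toy probability model. This is a minimal illustrative sketch assuming independent, equally likely failures; it is not the paper’s actual analysis:

```python
# Toy reliability model (illustrative only; not the paper's analysis).
# n modules, k critical resource types (e.g. power, sensing, comms),
# each resource instance failing independently with probability q.

def reliability(n: int, k: int, q: float, shared: int) -> float:
    """Probability the collective retains all k resource types.

    `shared` of the k resource types are pooled across modules
    (available as long as any one module still provides them);
    the remaining types must work on every module individually.
    """
    pooled = (1 - q**n) ** shared          # shared types: one survivor suffices
    local = (1 - q) ** (n * (k - shared))  # unshared types: every copy must work
    return pooled * local

q, k = 0.05, 3
for n in (1, 2, 4, 8):
    print(n,
          round(reliability(n, k, q, shared=0), 3),   # no sharing: drops with n
          round(reliability(n, k, q, shared=k), 3))   # full sharing: climbs with n
```

Under these assumptions, with q = 0.05 and three resource types, the unshared system’s reliability falls from roughly 0.86 at one module to roughly 0.54 at four, while the fully shared system’s reliability climbs toward 1 as modules are added, matching the trend reversal the team reports.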
In a locomotion task experiment with the Mori3 robot, which is composed of four triangular modules, the team experimented with cutting battery power, wireless communication, and sensing to the central module. Normally, this ‘dead’ central module would block the articulation and movement of the other three, but thanks to hyper-redundancy, the neighboring modules fully compensated for its lack of resources. This allowed the Mori3 to successfully ‘walk’ toward a barrier and contort itself effectively to pass underneath it.
“Essentially, our methodology allowed us to ‘revive’ a dead module in a collective and bring it back to full functionality. Our local resource-sharing framework therefore has the potential to support highly adaptive robots that can operate with unprecedented reliability, finally resolving the reliability-adaptability conflict,” summarizes RRL researcher and first author Kevin Holdcroft.
The researchers say that future work could focus on applying their resource sharing framework to more complex systems with increasing numbers of agents. In particular, the same concept could be extended to robotic swarms, with hardware adaptations that allow swarm members to dock to each other for energy and information transfer.
References
Scalable robot collective resilience by sharing resources, Holdcroft, K., Bolotnikova, A., Monforte, A.J., and Paik, J., Science Robotics (2026).
Enterprise AI Engineers for SAP: What to Look For, What They Cost, and How to Get Them Fast?
The Staffing Problem That Is Slowing Every SAP AI Program
The program is approved. The use case is scoped. The SAP landscape is documented. And then the staffing process begins — and it stalls.
Standard AI engineering talent is available. Standard SAP consultants are available. But the engineer who understands both SAP data architecture and modern AI frameworks, who has deployed something in SAP AI Core before, who knows what OData looks like on the other side of a BTP integration — that person is scarce, expensive, and usually already committed to another program.
This article covers what enterprise AI engineers for SAP actually need to know, what they cost in today’s market, and what your options are for getting them deployed quickly.
USM Business Systems is a CMMi Level 3, Oracle Gold Partner AI and IT services firm headquartered in Ashburn, VA, with 1,000+ engineers and a specialized SAP AI practice. We place SAP BTP AI developers, SAP AI Core engineers, and enterprise LLM integration specialists on contract, as dedicated delivery pods, and on project-based engagements.
The Four Role Types That Matter for SAP AI Programs
- SAP AI Core Engineer
This is the role most programs understaff. SAP AI Core is the managed runtime where models are deployed, versioned, and governed inside the SAP ecosystem. An AI Core engineer configures the runtime environment, manages model lifecycle, handles the API connections between AI Core and external model providers, and sets up the monitoring and logging that auditors will ask about.
A general ML engineer can learn AI Core, but the learning curve runs 6-10 weeks in a live SAP environment. A program that needs AI Core production-ready in 8 weeks does not have that time.
- SAP BTP AI Developer
BTP developers build the application layer on top of SAP’s Business Technology Platform — the APIs, the integration flows, the Fiori extensions, and the AI Foundation services that connect the AI capability to SAP data and workflows. BTP AI developers need to know both SAP’s integration patterns and modern AI API integration. This combination is genuinely rare.
- Enterprise LLM Integration Engineer
This engineer connects external LLM providers — Azure OpenAI, Anthropic, AWS Bedrock — to the SAP environment through BTP Integration Suite and SAP AI Core’s generative AI hub. They manage authentication, data formatting, latency requirements, and the retrieval layer that ensures the model is reading the right SAP data. They also understand the governance requirements that determine what data can leave the SAP boundary.
- SAP Data Architecture Specialist
AI capabilities are only as good as the data they read. The SAP data architecture specialist structures SAP Datasphere views, HANA models, and data pipelines to give AI systems clean, semantically meaningful access to enterprise data. This role is often the first bottleneck — programs that try to deploy AI without involving a SAP data architect first spend weeks discovering master data quality problems they should have found in week one.
What These Engineers Cost in 2026
Hourly bill rates for specialized SAP AI engineers reflect both the scarcity of the combined SAP and AI skill set and the urgency that drives most hiring decisions in this space.
| Role | US Contract Rate | Typical Availability | Ramp Time (SAP env) |
| --- | --- | --- | --- |
| SAP AI Core Engineer | $160-$210/hr | 3-6 week search | 1-2 weeks |
| SAP BTP AI Developer | $140-$180/hr | 2-5 week search | 1-2 weeks |
| Enterprise LLM Integration Engineer | $150-$200/hr | 3-6 week search | 2-3 weeks |
| SAP Data Architecture Specialist | $130-$170/hr | 2-4 week search | 1 week |
| Enterprise AI Solution Architect | $200-$260/hr | 4-8 week search | 2-3 weeks |
Rates reflect US market data as of Q1 2026. Rates for offshore or nearshore resources run 40-60% lower for equivalent technical profiles.
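As a quick sanity check on these figures, the sketch below turns an hourly rate into a total engagement cost and compares it with a direct hire’s first-year cost. The $220k salary, 160 billable hours per month, and 25% recruiting fee are illustrative assumptions, not quotes:

```python
# Rough engagement-cost arithmetic; all inputs are illustrative assumptions.

HOURS_PER_MONTH = 160  # assumed full-time billable utilization

def contract_cost(rate_per_hr: float, months: int) -> float:
    """Total bill for a full-time contractor over `months`."""
    return rate_per_hr * HOURS_PER_MONTH * months

def direct_hire_first_year(salary: float, fee_pct: float = 0.25) -> float:
    """First-year cost of a direct hire: salary plus a recruiting fee."""
    return salary * (1 + fee_pct)

# Example: SAP AI Core Engineer at the $160-$210/hr midpoint ($185),
# on a 6-month contract, versus an assumed $220k direct-hire salary.
print(contract_cost(185, 6))            # → 177600
print(direct_hire_first_year(220_000))  # → 275000.0
```

The comparison leaves out onboarding time and the search timelines in the table, which is often where the real cost difference between the two options shows up.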
Why is it so hard to find engineers with both SAP and AI skills?
SAP expertise is typically built over years of working inside enterprise SAP programs — it is not a technology you learn from documentation alone. AI engineering has moved fast in the opposite direction, attracting engineers who have not worked in traditional enterprise software environments. The overlap between the two talent pools is small and has not kept pace with demand as SAP AI programs have accelerated.
Three Ways to Staff a SAP AI Program
- Option 1: Direct Hire
Best for organizations building a permanent internal SAP AI capability. Timeline to a qualified hire: 12-20 weeks for senior roles. Cost includes recruiting fees (20-30% of first-year salary for specialized roles), onboarding time, and the risk that the hire is not the right profile for the specific program.
- Option 2: Contract Staffing Through a Specialized Partner
Best for programs with a defined timeline and a specific skill gap. A specialized staffing partner with an existing bench of SAP AI engineers can place a qualified resource in 1-3 weeks. The engineer is already credentialed, has SAP environment experience, and ramps in days rather than weeks. Contracts typically run 3-12 months with extension options.
The key qualifier: specialized. A general IT staffing firm will not have SAP AI Core engineers on its bench. The right partner sources specifically from the SAP AI talent pool and has placed these roles in live programs before.
- Option 3: Dedicated AI Delivery Pod
Best for programs that need a full delivery capability rather than individual contributors. A pod typically includes one solution architect, two to three SAP AI engineers, and one LLM integration specialist. The pod operates as an embedded unit inside the client program, with the staffing partner responsible for team composition, continuity, and delivery quality.
Pods reach productive output faster than assembled teams of individual contractors because the team members have worked together before. For system integrators running large SAP programs with tight delivery milestones, this is often the fastest path to predictable output.
How Fast Can You Place?
This is the first question every system integrator asks, and the honest answer depends on the role and the depth of the partner’s existing bench.
- SAP Data Architecture Specialist: 5-10 business days for a contract placement from an active bench
- SAP BTP AI Developer: 7-14 business days
- SAP AI Core Engineer: 10-15 business days
- Enterprise LLM Integration Engineer: 10-18 business days
- Dedicated AI Delivery Pod (3-5 person): 2-4 weeks for full team mobilization
USM maintains an active bench of SAP AI engineers across these role types. If your program has a specific role requirement and a near-term start date, reach out directly for a bench availability check: usmsystems.com/services/sap-ai-engineering-talent.
Why USM Business Systems?
USM Business Systems is a CMMi Level 3, Oracle Gold Partner AI and IT services firm headquartered in Ashburn, VA. With 1,000+ engineers, 2,000+ delivered applications, and 27 years of enterprise delivery experience, USM specializes in AI implementation for supply chain, pharma, manufacturing, and SAP environments. Our SAP AI practice places specialized engineers inside enterprise programs within days — on contract, as dedicated delivery pods, or on a project basis.
Ready to put SAP AI into production? Book a 30-minute scoping call with our SAP AI team at usmsystems.com.
FAQ
What certifications should a SAP AI engineer have?
SAP offers certifications in SAP BTP, SAP AI Core, and SAP Integration Suite that are relevant. For the LLM and agentic framework layer, certifications from major cloud providers (Microsoft, AWS, Google) combined with hands-on project experience in SAP environments are more indicative of capability than credentials alone.
Can SAP AI engineers work remotely on enterprise programs?
Yes. Most SAP AI engineering work — integration configuration, model deployment, API development — is done remotely. Periods of on-site collaboration are common during initial environment access, architecture review, and production go-live. Hybrid models work well for programs with security-cleared or regulated environments.
How do you assess whether a SAP AI engineer has the right skills for a specific program?
The most reliable assessment is a structured technical review covering the specific platforms involved — SAP AI Core, BTP Integration Suite, SAP Datasphere — combined with a review of prior program experience that matches your environment. Ask specifically about production deployments, not proofs of concept.
What is a SAP AI delivery pod and how is it different from a contract team?
A delivery pod is a pre-assembled, small team — typically 3-5 people — with defined roles and prior working experience together. A contract team is assembled from individual resources who may not have worked together before. Pods are faster to productive output because team formation and working pattern development have already happened.
What engagement length makes sense for contract SAP AI engineers?
Initial contracts of 3-6 months cover most first-deployment programs. Extensions of 6-12 months are common when the engineer is embedded in an ongoing program. Project-based engagements with fixed deliverables and defined end dates work well for enterprises that prefer milestone-based contracting.
Is it better to hire SAP engineers and train them on AI, or AI engineers and train them on SAP?
The answer depends on the role. For SAP AI Core and BTP work, starting with a strong SAP BTP developer and adding AI integration skills is faster — the SAP platform knowledge takes longer to build than the AI API skills. For the LLM integration and agentic framework layer, starting with a strong AI engineer and adding SAP data access patterns is often faster. The data architecture role almost always needs a dedicated SAP specialist.
ChatGPT’s Next Big Thing
The Fully Automated Researcher
Writer Will Douglas Heaven reports that the next major enhancement of ChatGPT is focused on transforming the AI into an incredibly in-depth researcher.
Observes Heaven: “The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself.”
Look for an entry-level ‘intern version’ of this system by September – followed by a fully automated, multi-AI-agent-powered upgrade by 2028, Heaven adds.
In other news and analysis on AI writing:
*AI Gifted? You’re a Preferred Employee at Social Media Network Reddit: Young college grads with AI chops are the top employee choice of Reddit CEO Steve Huffman.
Observes writer Emma Burleigh: “While some CEOs marvel over the abilities of chatbots and AI agents, recent graduates are actually ripe for the new tech-driven world of work: the digital natives grew up with the internet, and spent most of their higher education in the ChatGPT era.
“They’re deeply familiar with the technology and are much more apt to leverage it in their work.”
Word to the wise.
*Publisher Yanks Horror Novel for Suspected AI Use: “Shy Girl,” a walk on the spooky side, has been pulled from publication for suspected use of AI in its creation.
Observes writer Alexandra Alter: “The cancellation of the novel reveals the challenges the book world is navigating as the adoption of AI becomes more widespread.
“Readers and many writers remain ferociously opposed to the use of the technology for writing — which they regard as cheating or a form of theft.”
*AI as Journalist: At Fortune Magazine, It’s De Rigueur: As many fiction and nonfiction media outlets express outrage over AI-generated content, others are embracing it unabashedly.
Case in point: Fortune Magazine, where nearly 20% of all articles are generated in part by AI, according to writer Isabella Simonetti.
Most of those articles are penned – with the help of AI – by journalist Nick Lichtenberg, who has “produced more stories in six months than any of his colleagues at Fortune delivered in a year,” according to Simonetti.
*Thanks But No Thanks: Microsoft Lightens Up on AI in Windows 11: In response to popular demand, Microsoft is paring down the presence of its AI assistant – Copilot – in Windows’ latest version.
Observes writer Ross Kelly: “Microsoft has faced criticism over its persistent integration of Copilot features across the operating system — a strategy it has pursued for over 18 months now.”
Apparently, many users are put off by the Redmond titan’s desire to transform Windows 11 into an ever-evolving, ‘agentic’ operating system.
*OpenAI Kills Its Sora Video App: An AI video-maker that once struck fear in the hearts of Hollywood filmmakers has been scrapped.
Writer Connie Loizos reports that maker OpenAI pulled the plug on Sora. The reason: Sora was simply too unprofitable.
Observes Loizos: “The app was burning through roughly $1 million every day — not because people loved it, but because video generation is so costly to run.”
*OpenAI Puts ‘Adult Mode’ on Ice: OpenAI has abandoned the release of an ‘adult mode’ for ChatGPT, which it had been mulling for many months.
Observes writer Alina Maria Stan: “The feature was announced with confidence, delayed twice, and ultimately abandoned after pushback from staff, advisors, and investors.”
Also a factor: OpenAI’s widely reported decision to pare down side projects and redouble its efforts on enhancing the core functions of ChatGPT.
*Google Experimenting With ‘Auto-Reply to a Review’ Tool: Business owners tongue-tied when faced with a negative – or positive – review may want to check out an auto-reply tool now in beta in Google Business Profile.
The new AI-powered helper is designed to automatically serve up responses to a customer review, which can be assessed and edited by the business – and then manually submitted.
Observes writer Danny Goodwin: “Availability is inconsistent across accounts and reviews. The feature has been spotted in the U.S., Brazil, and India, but not widely in Europe.”
*63% of Mid-Sized Law Firms Now Use AI: A new survey finds that a healthy majority of mid-sized law firms are all in on AI use.
Equally eyebrow-raising: 94% of those users predict that AI will spike revenue and enhance customer service.
Observes writer Bob Ambrogi: “Mid-sized firms are moving beyond experimentation into operational integration, the report says. Common implementations include automation of document creation (70%), email filing (60%), and data extraction (53%).”
*AI Pioneer Grammarly Hit With Class Action Suit: AI editing and writing tool Grammarly has been hit with a lawsuit, which accuses the firm of using writers’ identities without their permission.
Essentially, more than a few authors and writers are angry that Grammarly’s now-abandoned AI ‘Expert Review’ feature analyzed users’ writing, then attributed that analysis to those scribes without their permission.
According to Top Class Actions, which names plaintiff Julia Angwin: “Grammarly users were able to upload their writing and receive real-time comments on how to improve their prose from Angwin, (Stephen) King and other acclaimed writers for $12-a-month.”

Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com helps everyone interested in AI-generated writing.
–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
The post ChatGPT’s Next Big Thing appeared first on Robot Writers AI.
Robot Talk Episode 150 – House building robots, with Vikas Enti
Claire chatted to Vikas Enti from Reframe Systems about using robotics and automation to build climate-resilient, high-performance homes.
Vikas Enti is the co-founder and CEO of Reframe Systems, a physical AI company rethinking how homes are built through automation and localized fabrication. He previously spent more than a decade at Amazon Robotics, where he helped scale advanced robotics systems across global logistics networks. Today, he is applying those same principles of systems design and repeatable production to address the housing shortage. Vikas focuses on building climate-resilient, high-performance homes faster and more predictably than traditional methods.

