Welcome to The Robotics World

Robotics is an interdisciplinary field at the interface of computer science and engineering. It involves the design, construction, operation, and use of robots, with the goal of creating intelligent machines that can assist humans in their day-to-day lives and keep them safe.

What do we Offer?

We invite any company connected with robotics to contact us for cooperation on mutually beneficial terms.

Promotion & Advertisement

If you are a robotics company, promote and advertise your technology with us.

Searching for Robotics

We help you find technologies for integrating robots into your working processes.

Aggregation Of Information

We collect and aggregate news and other robotics information so you can use it in the most efficient way.

Robotics News

Latest headlines and updates on news from around the world. Find breaking stories, upcoming events and expert opinion.

The 100-agent benchmark: why enterprise AI scale stalls and how to fix it

Most enterprises scaling agentic AI are overspending without knowing where the capital is going. This isn’t just a budget oversight. It points to deeper gaps in operational strategy. While building a single agent is a common starting point, the true enterprise challenge is managing quality, scaling use cases, and capturing measurable value across a fleet of 100+ agents.

Organizations treating AI as a collection of isolated experiments are hitting a “production wall.” In contrast, early movers are pulling ahead by building, operating, and governing a mission-critical digital agent workforce.

New IDC research reveals the stakes: 

  • 96% of organizations deploying generative AI report costs higher than expected.
  • 71% admit they have little to no control over the source of those costs.

The competitive gap is no longer about build speed. It is about who can operate a safe, “Tier 0” service foundation in any environment.


The high cost of complexity: why pilots fail to scale

The “hidden AI tax” is not a one-time fee; it is a compounding financial drain that multiplies as you move from pilot to production. When you scale from 10 agents to 100, a lack of visibility and governance turns minor inefficiencies into an enterprise-wide cost crisis.

The true cost of AI is in the complexity of operation, not just the initial build. Costs compound at scale due to three specific operational gaps:

  • Recursive loops: Without strict monitoring and AI-first governance, agents can enter infinite loops of re-reasoning. In a single night, one unmonitored agent can consume thousands of dollars in tokens (a minimal budget-guard sketch follows this list).
  • The integration tax: Scaling agentic AI often requires moving from a few vendors to a complex web of providers. Without a unified runtime, 48% of IT and development teams are bogged down in maintenance and “plumbing” rather than innovation (IDC).
  • The hallucination remediator: Remediating hallucinations and poor results has emerged as a top unexpected cost. Without production-focused governance baked into the runtime, organizations are forced to retrofit guardrails onto systems that are already live and losing money.
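
To make the first of these gaps concrete, here is a minimal sketch of the kind of budget guard an AI-first runtime can enforce around an agent's reasoning loop. Every name here (run_with_budget, agent_step, count_tokens, the step and token limits) is an illustrative assumption rather than any specific vendor's API; the point is simply that iteration and token ceilings should live in the runtime, not in the model's goodwill.

```python
# Minimal sketch of a runtime-level budget guard around an agent loop.
# All names are illustrative assumptions, not a specific vendor API.

MAX_STEPS = 25            # hard cap on re-reasoning iterations per task
MAX_TOKENS = 200_000      # hard cap on tokens a single task may consume


def run_with_budget(task, agent_step, count_tokens):
    """Run one agent task, aborting if the step or token budget is exceeded."""
    tokens_used = 0
    for step in range(MAX_STEPS):
        result = agent_step(task)              # one reasoning / tool-use step
        tokens_used += count_tokens(result)
        if tokens_used > MAX_TOKENS:
            raise RuntimeError(
                f"Token budget exceeded after {step + 1} steps; "
                "escalating to a human operator."
            )
        if result.get("done"):
            return result
    raise RuntimeError(f"Step budget of {MAX_STEPS} exceeded without completion.")
```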

The production wall: why agentic AI stalls in production

Moving from a pilot to production is a structural leap. Challenges that seem manageable in a small experiment compound exponentially at scale, leading to a production wall where technical debt and operational friction stall progress.

Production reliability

Teams face a hidden burden maintaining zero downtime in mission-critical environments. In high-stakes industries like manufacturing or healthcare, a single failure can stop production lines or cause a network outage.

Example: A manufacturing firm deploys an agent to autonomously adjust supply chain routing in response to real-time disruptions. A brief agent failure during peak operations causes incorrect routing decisions, forcing multiple production lines offline while teams manually intervene.

Deployment constraints

Cloud vendors typically lock organizations into specific environments, preventing deployment on-premises, at the edge, or in air-gapped sites. Enterprises need the ability to maintain AI ownership and comply with sovereign AI requirements that cloud vendors cannot always meet.

Example: A healthcare provider builds a diagnostic agent in a public cloud, only to find that new Sovereign AI compliance requirements demand data stay on-premises. Because their architecture is locked, they are forced to restart the entire project.

Infrastructure complexity

Teams are overwhelmed by “infrastructure plumbing” and struggle to validate or scale agents as models and tools constantly evolve. This unsustainable burden distracts from developing core business requirements that drive value.

Example: A retail giant attempts to scale customer service agents. Their engineering team spends weeks manually stitching together OAuth, identity controls, and model APIs, only to have the system fail when a tool update breaks the integration layer.

Inefficient operations 

Connecting inference serving with runtimes is complex, often driving up compute costs and failing to meet strict latency requirements. Without efficient runtime orchestration, organizations struggle to balance performance and value in real time.

Example: A telecommunications firm deploys reasoning agents to optimize network traffic. Without efficient runtime orchestration, the agents suffer from high latency, causing service delays and driving up costs.

Governance: the constraint that determines whether agents scale

For 68% of organizations, clarifying risk and compliance implications is the top requirement for agent use. Without this clarity, governance becomes the single biggest obstacle to expanding AI. 

Success is no longer defined by how fast you experiment, but by your ability to focus on productionizing an agentic workforce from the start. This requires AI-first governance that enforces policy, cost, and risk controls at the agent runtime level, rather than retrofitting guardrails after systems are already live.

Example: A company uses an agent for logistics. Without AI-first governance, the agent might trigger an expensive rush-shipping order through an external API after misinterpreting customer frustration. This results in a financial loss because the agent operated without a policy-based safeguard or a “human-in-the-loop” limit.
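
As a rough illustration of what such a runtime-level safeguard can look like, the sketch below gates a hypothetical rush-shipping action behind a spend threshold and a human approval step. The action name, cost limit, and approval callback are invented for this example and are not taken from any real platform.

```python
# Hypothetical policy guard for a logistics agent. The action name, cost
# threshold, and approval callback are invented for illustration only.

RUSH_SHIPPING_COST_LIMIT = 500.00   # max spend an agent may commit on its own


def authorize_action(action, cost, request_human_approval):
    """Approve low-cost actions automatically; route expensive ones to a human."""
    if action == "rush_shipping" and cost > RUSH_SHIPPING_COST_LIMIT:
        # Human-in-the-loop: the agent proposes, a person decides.
        return request_human_approval(action, cost)
    return True
```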

This productionization-focused approach to governance highlights a key difference between platforms designed for agentic systems and those whose governance remains limited to the underlying data layer. 


Building for the 100-agent benchmark

The 100-agent mark is where the gap between early movers and the rest of the market becomes a permanent competitive divide. Closing this gap requires a unified platform approach, not a fragmented stack of point tools.

Platforms built for managing an agentic workforce are designed to address the operational challenges that stall enterprise AI at scale. DataRobot’s Agent Workforce Platform reflects this approach by focusing on several foundational capabilities:

  • Flexible deployment: Whether in the public cloud, a private GPU cloud, on-premises, or an air-gapped site, ensure you can deploy consistently across every environment. This prevents vendor lock-in and ensures you maintain full ownership of your AI IP.
  • Vendor-neutral and open architecture: Build a flexible layer between hardware, models, and governance rules that allows you to swap components as technology evolves. This future-proofs your digital workforce and reduces the time teams spend on manual validation and integration.
  • Full lifecycle management: Managing an agentic workforce requires solving for the entire lifecycle — from Day 0 inception to Day 90 maintenance. This includes leveraging specialized tools like syftr for accurate, low-latency workflows and Covalent for efficient runtime orchestration to control inference costs and latency.
  • Built-in AI-first governance: Unlike tools rooted purely in the data layer, DataRobot focuses on agent-specific risks like hallucination, drift, and responsible tool use. Ensure your agents are safe, always operational, and strictly governed from day one.

The competitive gap is widening. Early movers who invest in a foundation of governance, unified tooling, and cost visibility from day one are already pulling ahead. By focusing on the digital agent workforce as a system rather than a collection of experiments, you can finally move beyond the pilot and deliver real business impact at scale.

Want to learn more? Download the research to discover why most AI pilots fail and how early movers are driving real ROI. Read the full IDC InfoBrief here.


2025 Top Article – The Ultimate Guide to Depth Perception and 3D Imaging Technologies

Depth perception helps mimic natural spatial awareness by determining how far or close objects are, which makes it invaluable for 3D imaging systems. Get expert insights on how depth perception works, the cues involved, and the various types of depth-sensing cameras.

The science of human touch – and why it’s so hard to replicate in robots

By Perla Maiolino, University of Oxford

Robots now see the world with an ease that once belonged only to science fiction. They can recognise objects, navigate cluttered spaces and sort thousands of parcels an hour. But ask a robot to touch something gently, safely or meaningfully, and the limits appear instantly.

As a researcher in soft robotics working on artificial skin and sensorised bodies, I’ve found that trying to give robots a sense of touch forces us to confront just how astonishingly sophisticated human touch really is.

My work began with the seemingly simple question of how robots might sense the world through their bodies. Develop tactile sensors, fully cover a machine with them, process the signals and, at first glance, you should get something like touch.

Except that human touch is nothing like a simple pressure map. Our skin contains several distinct types of mechanoreceptor, each tuned to different stimuli such as vibration, stretch or texture. Our spatial resolution is remarkably fine and, crucially, touch is active: we press, slide and adjust constantly, turning raw sensation into perception through dynamic interaction.

Engineers can sometimes mimic a fingertip-scale version of this, but reproducing it across an entire soft body, and giving a robot the ability to interpret this rich sensory flow, is a challenge of a completely different order.

Working on artificial skin also quickly reveals another insight: much of what we call “intelligence” doesn’t live solely in the brain. Biology offers striking examples – most famously, the octopus.

Octopuses distribute most of their neurons throughout their limbs. Studies of their motor behaviour show an octopus arm can generate and adapt movement patterns locally based on sensory input, with limited input from the brain.

Their soft, compliant bodies contribute directly to how they act in the world. And this kind of distributed, embodied intelligence, where behaviour emerges from the interplay of body, material and environment, is increasingly influential in robotics.

Touch also happens to be the first sense that humans develop in the womb. Developmental neuroscience shows tactile sensitivity emerging from around eight weeks of gestation, then spreading across the body during the second trimester. Long before sight or hearing function reliably, the foetus explores its surroundings through touch. This is thought to help shape how infants begin forming an understanding of weight, resistance and support – the basic physics of the world.

This distinction matters for robotics too. For decades, robots have relied heavily on cameras and lidars (a sensing method that uses pulses of light to measure distance) while avoiding physical contact. But we cannot expect machines to achieve human-level competence in the physical world if they rarely experience it through touch.

Simulation can teach a robot useful behaviour, but without real physical exploration, it risks merely deploying intelligence rather than developing it. To learn in the way humans do, robots need bodies that feel.

A ‘soft’ robot hand with tactile sensors, developed by the University of Oxford’s Soft Robotics Lab, gets to grips with an apple. Video: Oxford Robotics Institute.

One approach my group is exploring is giving robots a degree of “local intelligence” in their sensorised bodies. Humans benefit from the compliance of soft tissues: skin deforms in ways that increase grip, enhance friction and filter sensory signals before they even reach the brain. This is a form of intelligence embedded directly in the anatomy.

Research in soft robotics and morphological computation argues that the body can offload some of the brain’s workload. By building robots with soft structures and low-level processing, so they can adjust grip or posture based on tactile feedback without waiting for central commands, we hope to create machines that interact more safely and naturally with the physical world.
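
A toy example of this idea, assuming very simple sensor and actuator interfaces (all names below are hypothetical placeholders, not a real robot API), is a local reflex loop that tightens a gripper slightly whenever the skin senses slip, without ever consulting a central planner.

```python
# Toy sketch of "local intelligence": a reflex-like loop that adjusts grip
# force from tactile readings alone. The sensor/actuator interfaces are
# hypothetical placeholders, not a real robot API.

SLIP_THRESHOLD = 0.2   # normalized shear signal above which slip is assumed
FORCE_STEP = 0.05      # how much to tighten the grip each control cycle
MAX_FORCE = 1.0        # normalized actuator limit


def local_grip_reflex(read_shear, read_force, set_force):
    """One control cycle: tighten the grip slightly if the skin senses slip."""
    if read_shear() > SLIP_THRESHOLD:
        set_force(min(read_force() + FORCE_STEP, MAX_FORCE))
```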

Occupational therapist Ruth Alecock uses the training robot ‘Mona’. Perla Maiolino/Oxford Robotics Institute, CC BY-NC-SA

Healthcare is one area where this capability could make a profound difference. My group recently developed a robotic patient simulator for training occupational therapists (OTs). Students often practise on one another, which makes it difficult to learn the nuanced tactile skills involved in supporting someone safely. With real patients, trainees must balance functional and affective touch, respect personal boundaries and recognise subtle cues of pain or discomfort. Research on social and affective touch shows how important these cues are to human wellbeing.

To help trainees understand these interactions, our simulator, known as Mona, produces practical behavioural responses. For example, when an OT presses on a simulated pain point in the artificial skin, the robot reacts verbally and with a small physical “hitch” of the body to mimic discomfort.

Similarly, if the trainee tries to move a limb beyond what the simulated patient can tolerate, the robot tightens or resists, offering a realistic cue that the motion should stop. By capturing tactile interaction through artificial skin, our simulator provides feedback that has never previously been available in OT training.
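
Purely as an illustration of the kind of sensor-to-behaviour mapping described above (and not the actual Mona software), such responses could be sketched as simple rules: high pressure on a region tagged as a pain point triggers a verbal and physical reaction, and a joint pushed past its simulated tolerance triggers resistance.

```python
# Illustrative only: a rule-based mapping from tactile and joint inputs to
# patient-like feedback. All interfaces are placeholders, not Mona's code.

PAIN_PRESSURE_THRESHOLD = 0.6   # normalized pressure treated as painful
JOINT_TOLERANCE_DEG = 40.0      # simulated patient's comfortable joint range


def react_to_trainee(skin_reading, joint_angle_deg, say, flinch, resist_motion):
    """Turn touch and joint inputs into believable patient responses."""
    if skin_reading.region == "pain_point" and skin_reading.pressure > PAIN_PRESSURE_THRESHOLD:
        say("That hurts a little.")
        flinch()                       # small physical 'hitch' of the body
    if abs(joint_angle_deg) > JOINT_TOLERANCE_DEG:
        resist_motion()                # cue that the motion should stop
```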

Robots that care

In the future, robots with safe, sensitive bodies could help address growing pressures in social care. As populations age, many families suddenly find themselves lifting, repositioning or supporting relatives without formal training. “Care robots” could help with this, potentially allowing family members to be cared for at home for longer.

Surprisingly, progress in developing this type of robot has been much slower than early expectations suggested – even in Japan, which introduced some of the first care robot prototypes. One of the most advanced examples is Airec, a humanoid robot developed as part of the Japanese government’s Moonshot programme to assist in nursing and elderly-care tasks. This multifaceted programme, launched in 2019, seeks “ambitious R&D based on daring ideas” in order to build a “society in which human beings can be free from limitations of body, brain, space and time by 2050”.

Japan’s Airec care robot is one of the most advanced in development. Video by Global Update.

Throughout the world, though, translating research prototypes into regulated robots remains difficult. High development costs, strict safety requirements, and the absence of a clear commercial market have all slowed progress. But while the technical and regulatory barriers are substantial, they are steadily being addressed.

Robots that can safely share close physical space with people need to feel and modulate how they touch anything that comes into contact with their bodies. This whole-body sensitivity is what will distinguish the next generation of soft robots from today’s rigid machines.

We are still far from robots that can handle these intimate tasks independently. But building touch-enabled machines is already reshaping our understanding of touch. Every step toward robotic tactile intelligence highlights the extraordinary sophistication of our own bodies – and the deep connection between sensation, movement and what we call intelligence.

This article was commissioned in conjunction with the Professors’ Programme, part of Prototypes for Humanity, a global initiative that showcases and accelerates academic innovation to solve social and environmental challenges. The Conversation is the media partner of Prototypes for Humanity 2025.

Perla Maiolino, Associate Professor of Engineering Science, member of the Oxford Robotics Institute, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.