
A multi-armed robot for assisting with agricultural tasks

Humans often use one hand to grasp the branch for better accessibility, while the other hand is used to perform primary tasks like (a) branch pruning and (b) hand pollination of the flower. (c) An overview of the approach used by Madhav and colleagues, where one robot manipulates the branch to move the flower to the field of view of another robot by planning a force-aware path. Figure from Force Aware Branch Manipulation To Assist Agricultural Tasks.

In their paper Force Aware Branch Manipulation To Assist Agricultural Tasks, which was presented at IROS 2025, Madhav Rijal, Rashik Shrestha, Trevor Smith, and Yu Gu proposed a methodology to safely manipulate branches to aid various agricultural tasks. We interviewed Madhav to find out more.

Could you give us an overview of the problem you were addressing in the paper?

Madhav Rijal (MR): Our work is motivated by StickBug [1], a multi-armed robotic system for precision pollination in greenhouse environments. One of the main challenges StickBug faces is that many flowers are partially or fully hidden within the plant canopy, making them difficult to detect and reach directly for pollination. This challenge also arises in other agricultural tasks, such as fruit harvesting, where target fruits may be occluded by surrounding branches and foliage.

To address this, we study how one robot arm can safely manipulate branches so that these occluded flowers can be brought into the field of view or reachable workspace of another robot arm. This is a challenging manipulation problem because plant branches are deformable, fragile, and vary significantly from one branch to another. In addition, unlike pick-and-place tasks, where objects move freely in space, branches remain attached to the plant, which imposes additional motion constraints during manipulation. If the robot moves a branch without accounting for these constraints and safety limits, it can apply excessive force and damage the branch.

So, the core problem we addressed in this paper is: how can a robot safely manipulate branches to reveal hidden flowers while remaining aware of interaction forces and minimizing damage?

How did your approach go about tackling the problem?

MR: Our approach [2] combines motion planning that accounts for branch constraints with real-time force feedback.

First, we generate a feasible manipulation path in the workspace using a planner based on the RRT* (rapidly-exploring random tree star) algorithm. The planner respects the geometric constraints of the branch and the task requirements. We model branches as deformable linear objects and use a geometric heuristic to identify configurations that are safer to manipulate.

Then, during execution, we monitor the interaction force using a force sensor mounted on the manipulator. If the measured force exceeds a predefined safe threshold, the system does not continue along the same path. Instead, it re-plans the motion online and searches for an alternative path or goal configuration that can reduce branch stress while still achieving the task.
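The plan-then-monitor loop Madhav describes can be sketched roughly as follows. This is our illustrative sketch, not the authors' code: plan_rrt_star is a toy stand-in for a true RRT* planner, and read_force_sensor and execute_step are hypothetical robot interfaces.

```python
import random

FORCE_LIMIT_N = 40.0  # safe-force threshold used in the paper's experiments

def geometric_cost(pose):
    """Hypothetical heuristic: penalize poses that bend the branch far from
    its rest configuration (a stand-in for the paper's deformable-linear-object
    heuristic)."""
    return sum(abs(p) for p in pose)

def plan_rrt_star(start, goal, n_samples=500):
    """Toy RRT*-flavoured search: keep the lowest-cost sampled waypoint
    sequence from start toward goal (a real RRT* grows and rewires a tree)."""
    best_path, best_cost = None, float("inf")
    for _ in range(n_samples):
        mid = tuple(s + random.uniform(-0.1, 0.1) + 0.5 * (g - s)
                    for s, g in zip(start, goal))
        path = [start, mid, goal]
        cost = sum(geometric_cost(p) for p in path)
        if cost < best_cost:
            best_path, best_cost = path, cost
    return best_path

def execute_with_replanning(start, goal, read_force_sensor, execute_step,
                            max_replans=20):
    """Follow the planned path, but abandon it and replan whenever the
    measured interaction force exceeds the safety threshold."""
    path = plan_rrt_star(start, goal)
    for _ in range(max_replans):
        for waypoint in path:
            if abs(read_force_sensor()) > FORCE_LIMIT_N:
                # Excessive branch stress: search for a gentler alternative
                # from (approximately) the current configuration.
                path = plan_rrt_star(waypoint, goal)
                break
            execute_step(waypoint)
        else:
            return True  # reached the goal within force limits
    return False
```

Under a benign force reading the robot simply executes the planned waypoints; under persistently excessive force it replans up to the attempt budget and reports failure, mirroring the online replanning behaviour described above.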

So, the key idea is that the robot does not plan only for reachability. It also adapts its motion based on the physical response of the branch during manipulation.

Madhav with the multi-armed pollination robot, StickBug.

What are the main contributions of your work?

MR: The main contributions of our work are:

  1. A geometric heuristic model for branch manipulation that does not require branch-specific parameter tuning or physical probing.
  2. A motion planning strategy for branch manipulation that respects both workspace and branch constraints, using the geometric heuristic to guide RRT* and incorporating online replanning based on force feedback.
  3. An experimental demonstration showing that force feedback-based motion planning can protect branches from excessive force during manipulation.
  4. Generalization across different branch types, since the method relies primarily on branch geometry and can adapt online to compensate for model inaccuracies.

Could you talk about the experiments that you carried out to test the approach?

MR: We evaluated the proposed method through a set of branch manipulation experiments using five different starting poses, all targeting a common goal region. Each configuration was tested 10 times, resulting in a total of 50 trials. A trial was considered successful if the robot brought the grasp point to within 5 cm of the goal point. For all trials, the planning time limit was set to 400 seconds, and the allowable interaction force range was −40 N to 40 N. Across the 50 trials, 39 were successful and 11 failed, corresponding to a success rate of about 78%. The average number of replanning attempts across all scenarios was 20.

In terms of force reduction, the results show a clear progression in safety. Constraint-aware planning reduced the manipulation force from above 100 N to below 60 N. Building on this, online force-aware replanning further reduced the force from about 60 N to below the desired 40 N threshold. This indicates that safety awareness through geometric heuristics, which model branches as deformable linear objects, together with force-aware online replanning, can effectively lower interaction forces during manipulation.

Overall, the experiments demonstrate that the proposed framework enables safer branch manipulation while maintaining task feasibility. By combining branch-constraint-aware planning with real-time force feedback, the robot can adapt its motion to reduce excessive force and minimize the risk of branch damage. These findings highlight the value of force-aware planning for practical robotic manipulation in agricultural environments.

Do you have plans to further extend this work?

MR: Yes, there are several directions for extending this work.

One current limitation is the need to define a safe force threshold in advance. In practice, different types of branches require different force limits for safe manipulation. A key direction for future work is to learn or estimate safe force thresholds automatically from branch geometry or visual cues.

Another extension is to improve grasp-point selection. Instead of only replanning after grasping, the system could also reason about the most suitable grasp point beforehand so that the required manipulation force is reduced from the start.

We are also interested in designing a compliant gripper with integrated force sensing that is better suited for manipulating delicate branches. In the longer term, we plan to integrate this method into a multi-arm agricultural robot, where one arm manipulates the branch and another performs pollination, pruning, or harvesting.

Overall, this work advances the development of agricultural robots that can actively manipulate branches to support tasks such as harvesting, pruning, and pollination. By exposing fruits, cut points, and hidden flowers within the canopy, this capability can help overcome key barriers to the broader adoption of robot-assisted agricultural technologies.

References

[1] Smith, Trevor, Madhav Rijal, Christopher Tatsch, R. Michael Butts, Jared Beard, R. Tyler Cook, Andy Chu, Jason Gross, and Yu Gu. Design of Stickbug: a six-armed precision pollination robot. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 69-75. IEEE, 2024.
[2] Rijal, Madhav, Rashik Shrestha, Trevor Smith, and Yu Gu. Force Aware Branch Manipulation To Assist Agricultural Tasks. In 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1217-1222. IEEE, 2025.

About Madhav

Madhav Rijal is a Ph.D. candidate in Mechanical Engineering at West Virginia University working in agricultural robotics. His research combines motion planning, optimization, multi-agent collaboration and distributed decision making to develop robotic systems for precision pollination and other plant-interaction tasks. His current work focuses on branch manipulation and safe robot operation in agricultural environments.

Sheepdogs reveal a better way to guide robot swarms

Sheepdogs, bred to control large groups of sheep in open fields, have demonstrated their skills in competitions dating back to the 1870s. In these contests, a handler directs a trained dog with whistle signals to guide a small group of sheep across a field and sometimes split the flock cleanly into two groups. But sheep do not always cooperate.

AI uses as much energy as Iceland but scientists aren’t worried

AI’s growing energy use sounds alarming, but its global climate impact may be far smaller than expected. Researchers found that while AI consumes huge amounts of electricity, it barely moves the needle on overall emissions. The real impact is more localized, especially around data centers. Meanwhile, AI could become a powerful tool for building greener technologies.

AI-powered robot learns how to harvest tomatoes more efficiently

A new tomato-picking robot is learning to think before it acts. Instead of simply identifying ripe fruit, it predicts how easy each tomato will be to harvest and adjusts its approach accordingly. This smarter strategy boosted success rates to 81%, with the robot even switching angles when needed. The breakthrough could pave the way for farms where robots and humans work side by side.

Study finds ChatGPT gets science wrong more often than you think

A new study put ChatGPT to the test by asking it to judge whether hundreds of scientific hypotheses were true or false—and the results were far from reassuring. While the AI got it right about 80% of the time on the surface, its performance dropped significantly when accounting for random guessing, revealing only modest reasoning ability. Even more concerning, it frequently contradicted itself when asked the exact same question multiple times, sometimes flipping answers back and forth.

Compostable robot endures over 1 million uses before becoming plant food

The rapid proliferation of robots and electronic devices is placing the world under a new and growing environmental burden. According to the United Nations Institute for Training and Research (UNITAR), global electronic waste (e-waste) reached approximately 62 million metric tons in 2022, a significant portion of which was neither properly collected nor recycled but instead landfilled or incinerated.

Autonomous navigation of microrobots in complex flows demonstrated for the first time

For the first time, researchers at Leipzig University have shown that tiny synthetic microswimmers can perceive their surroundings directly through their own body shape and autonomously adapt to rapidly changing fluid flows. The study, now published in Science Advances, establishes a new paradigm for autonomous microsystems whose control functions reliably in challenging environments where conventional sensors fail. This opens up new prospects for autonomous medical microrobots, for example for the targeted delivery of medication in the bloodstream.

Identity-first AI governance: Securing the agentic workforce

AI agents are now operating inside production systems, querying Snowflake, updating Salesforce, and executing business logic autonomously. In many enterprises, they authenticate using static API keys or shared credentials rather than distinct identities in the corporate IDP. 

Authenticating autonomous systems through shared credentials introduces real governance risk.

When an agent executes an action, logs often attribute it to a developer key or service account instead of a clearly defined autonomous actor. Attribution becomes ambiguous. Least privilege weakens. Revocation may require rotating credentials or modifying code rather than disabling a governed identity. In a non-deterministic environment, that delay slows investigation and containment.

Shared credentials turn autonomous systems into “shadow identities”: actors operating inside production without a distinct, governed identity in the enterprise directory.

Most organizations have monitoring and guardrails in place. The issue is structural: autonomous systems operate outside first-class identity governance, rather than within the same control plane that secures human users. Closing this gap requires aligning agents with the identity model that governs your workforce, ensuring every autonomous actor is traceable, permission-scoped, and centrally revocable.

The hidden risk: Modern agentic AI is non-deterministic

Traditional enterprise software follows predefined logic. Given the same input, it produces the same output.

Agentic AI systems operate differently. Instead of executing a fixed script, they use probabilistic models to:

  • Evaluate context
  • Retrieve information dynamically
  • Construct action paths in real time 

If you instruct an agent to optimize a supply chain route, it may reference weather forecasts, fuel cost data, and historical performance before determining a route. That flexibility enables agents to solve complex, multi-system problems that traditional software cannot address.
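As a toy illustration of that non-determinism (our sketch; the tool names are hypothetical), consider an agent whose model samples which data sources to consult and in what order, so the same request can yield different action paths on different runs:

```python
import random

def plan_route(seed=None):
    """Toy non-deterministic planner: consult a sampled subset of data
    sources, in a sampled order, before committing to a route. Two calls
    with different seeds (or no seed) can take different action paths."""
    rng = random.Random(seed)
    tools = ["weather_forecast", "fuel_cost_data", "historical_performance"]
    # The "model" probabilistically decides how many sources to consult
    # and in what order -- this is what makes execution paths vary.
    path = list(rng.sample(tools, k=rng.randint(1, len(tools))))
    path.append("commit_route")
    return path
```

Because the action path is sampled rather than scripted, logs, permissions, and attribution must be attached to the actor, not to any one predicted execution path.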

However, non-deterministic systems introduce new governance considerations:

  • Execution paths may vary from one request to the next.
  • Retrieved data sources may differ depending on context.
  • Outputs can contain reasoning errors or inaccurate conclusions.
  • Actions may extend beyond what a developer explicitly scripted.

When a system can continuously access company data and execute actions autonomously, it cannot be governed like a static application. It requires clear identity attribution, tightly scoped permissions, continuous monitoring, and centralized revocation authority.

Why credential-based security breaks in agentic environments

Most enterprises still secure AI agents using static API keys or shared service credentials. That model worked when software executed predictable logic. It breaks down when autonomous systems operate across production environments.

When an agent authenticates with a shared credential, activity is logged but not clearly attributed. A Salesforce update or Snowflake query may appear to originate from a developer key rather than from a distinct autonomous system. Attribution becomes blurred. Least privilege is harder to enforce. Containment depends on rotating credentials or modifying code instead of disabling a governed identity.

The problem is identity governance, not monitoring visibility.

Traditional security assumes credentials map to accountable users or services. Shared credentials break that assumption. In a non-deterministic environment, that ambiguity slows investigation and increases exposure.

The strategic shift: Identity-first governance

The governance gap created by shadow identities cannot be solved with additional monitoring. It requires a structural shift in how autonomous systems are governed.

When a system can dynamically retrieve data, generate probabilistic outputs, and execute actions across enterprise platforms, it is no longer just an application. It is an operational actor. Governance must reflect that.

Identity-first governance treats autonomous systems as first-class identities within the same directory that governs human users. Each agent receives a distinct identity, clearly scoped permissions, and auditable activity attribution.

This changes the control model. Access is tied to identity rather than static credentials. Actions are logged to a specific actor. Permissions can be adjusted without modifying code. Revocation occurs at the identity layer, not inside application logic.

The result is a unified identity plane for human and autonomous actors. Instead of building parallel AI security stacks, organizations extend existing identity controls. Policy remains consistent. Incident response remains centralized. Innovation scales without fragmenting governance.

A practical example: Identity-backed agents in practice

One architectural response to the identity governance gap is to provision autonomous systems as first-class identities inside the corporate directory, rather than authenticating them through static API keys.

This approach requires coordination between agent orchestration and enterprise identity infrastructure. Through a deep integration between DataRobot and Okta, agents built in the DataRobot Agentic Workforce Platform can now be provisioned as governed, first-class identities directly inside Okta instead of relying on shared credentials.

In this model, each agent receives a directory-backed identity. Authentication occurs through short-lived, policy-controlled tokens rather than long-lived credentials embedded in code. Actions are logged to a specific autonomous actor. Permissions are scoped using existing least-privilege controls.
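In outline, the control model reduces to the sketch below (our minimal illustration, not the DataRobot/Okta integration itself; the class and scope names are hypothetical): agents are registered as distinct identities, tokens are short-lived, and revocation happens at the identity layer rather than in agent code.

```python
import secrets
import time

class IdentityProvider:
    """Minimal stand-in for a corporate IDP that governs agent identities."""

    def __init__(self, token_ttl_s=300):
        self.ttl = token_ttl_s
        self.identities = {}   # agent_id -> {"enabled": bool, "scopes": set}
        self.tokens = {}       # token -> (agent_id, expiry timestamp)

    def register_agent(self, agent_id, scopes):
        """Provision the agent as a first-class, governed identity."""
        self.identities[agent_id] = {"enabled": True, "scopes": set(scopes)}

    def issue_token(self, agent_id):
        """Issue a short-lived token tied to a governed identity."""
        ident = self.identities.get(agent_id)
        if not ident or not ident["enabled"]:
            raise PermissionError(f"{agent_id} is not an enabled, governed identity")
        token = secrets.token_hex(16)
        self.tokens[token] = (agent_id, time.time() + self.ttl)
        return token

    def authorize(self, token, scope):
        """Return the acting agent_id (for audit attribution), or None."""
        entry = self.tokens.get(token)
        if not entry:
            return None
        agent_id, expiry = entry
        ident = self.identities[agent_id]
        # Revocation is checked at the identity layer on every request,
        # so disabling the identity takes effect without code changes.
        if time.time() > expiry or not ident["enabled"] or scope not in ident["scopes"]:
            return None
        return agent_id

    def disable_agent(self, agent_id):
        """Centralized revocation: cut off all access for this actor."""
        self.identities[agent_id]["enabled"] = False
```

Disabling the identity immediately invalidates every outstanding token, which is the centralized-revocation property described above.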

This directly addresses the attribution and revocation challenges described earlier. When an agent is deployed, its identity is created within the corporate IDP. When permissions change, governance workflows apply. If behavior deviates from expectation, security teams can restrict or disable the agent at the identity layer, immediately adjusting its access across integrated systems such as Salesforce or Snowflake.

The impact is operational. Autonomous systems become visible actors inside the same identity plane that secures human users. Rather than introducing a parallel AI security stack, organizations extend the controls they already operate and audit.


Three governance principles for agentic AI

As autonomous systems move into production environments, governance must become explicit. At minimum, three principles are essential.

1. Eliminate static credentials

Autonomous systems should not authenticate through long-lived API keys or shared service accounts. Production agents must use short-lived, policy-controlled credentials tied to a governed identity. If an autonomous system can access enterprise systems, it must authenticate as a distinct actor within the identity provider.

2. Audit the actor, not the platform

Security logs should attribute actions to specific autonomous identities, not to generic services or developer keys. In non-deterministic systems, platform-level visibility is insufficient. Governance requires actor-level attribution to support investigation, anomaly detection, and access review.

3. Centralize revocation authority

Security teams must be able to restrict or disable an autonomous system through the primary identity control plane. Containment should not depend on code changes, credential rotation, or redeployment. Identity must function as an operational control surface.

Non-deterministic systems are not inherently unsafe. But when autonomous systems operate without identity level governance, exposure increases. Clear identity boundaries convert autonomy from a governance liability into a manageable extension of enterprise operations.

AI governance is workforce governance

Agentic systems now operate inside core workflows, access regulated data, and execute actions with real consequence. Governance models designed for deterministic software are not sufficient for autonomous systems.

If a system can act, it must exist as a governed identity within the same control plane that secures your workforce. Identity becomes the foundation for attribution, least privilege, monitoring, and centralized revocation. When agents operate inside the corporate directory rather than outside it, oversight scales with innovation.

This model is taking shape through closer integration between agent orchestration platforms and enterprise identity providers, including the collaboration between DataRobot and Okta. Rather than building parallel AI security stacks, organizations can extend the identity infrastructure they already operate to autonomous systems. To see how identity-backed agents can operate securely inside enterprise environments, explore The Enterprise Guide to Agentic AI or schedule a demo to learn how DataRobot and Okta integrate agent orchestration with enterprise identity governance.

The post Identity-first AI governance: Securing the agentic workforce appeared first on DataRobot.

The foundation for a governed agent workforce: DataRobot and NVIDIA RTX PRO 4500

Moving AI agents from experimental pilots to a full-scale enterprise workforce requires more than just a model; it requires a hardware foundation that balances high-performance inference with industry-leading cost and power efficiency.

DataRobot has technically validated the NVIDIA RTX PRO 4500, built on the Blackwell architecture, as an inference engine for the DataRobot Agent Workforce Platform. This combination provides the compute power and control necessary for mission-critical autonomous agents.

Performance without over-provisioning

For the modern AI Factory, the NVIDIA RTX PRO 4500 occupies a strategic middle ground in the NVIDIA lineup. With 32GB of high-speed GDDR7 memory, 800 GB/s bandwidth, FP4 precision, and a 2nd-Gen Transformer Engine, it sits between the entry-level L4 (24GB) and the high-end L40S (48GB).

This 32GB VRAM buffer is specifically optimized for agentic workflows:

  • Local Execution: Enough headroom to host sophisticated LLMs alongside multi-agent orchestration layers.
  • Low Latency: Reduces the delay in complex reasoning tasks, essential for real-time applications.
  • Data Privacy: Supports on-premises deployment for sensitive enterprise data.
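A rough way to see why 32GB is a useful middle ground (our back-of-envelope arithmetic, not an NVIDIA sizing guide): weight memory scales with parameter count times bits per weight, before accounting for KV cache and runtime overhead.

```python
def weight_memory_gb(n_params_b, bits_per_weight):
    """Approximate weight footprint in GB for a model with
    n_params_b billion parameters at the given quantization width.
    Ignores KV cache, activations, and runtime buffers."""
    return n_params_b * 1e9 * bits_per_weight / 8 / 1e9

# A ~30B-parameter model quantized to FP4 needs on the order of:
fp4_30b = weight_memory_gb(30, 4)    # ~15 GB of weights
fp16_30b = weight_memory_gb(30, 16)  # ~60 GB -- would not fit in 32 GB
```

At FP4, a roughly 30B-parameter model's weights fit with headroom left for orchestration layers; the same model at FP16 would exceed the card's 32GB entirely.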

Validated use cases for the enterprise

The price-to-performance ratio of the NVIDIA RTX PRO 4500 excels in two high-impact areas:

1. Real-time logistics and business planning: By leveraging NVIDIA cuOpt, agents can solve complex routing and scheduling problems. The NVIDIA RTX PRO 4500 provides the parallel processing power to run these heavy optimization engines in concert with the agent’s reasoning LLM on a single node.

2. Production-grade RAG pipelines: Retrieval-Augmented Generation (RAG) is the backbone of reliable agents. Combined with NeMo Retriever NIM, including multimodal document understanding models that extract structured content from tables, charts, and complex page elements, this hardware excels at the embedding, indexing, and retrieval steps, ensuring agents maintain context across diverse data formats without performance bottlenecks.
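The embedding, indexing, and retrieval steps mentioned above reduce to a nearest-neighbor search over vectors. Below is a generic stand-in sketch (not the NeMo Retriever API; the hashing embedder is a toy substitute for a trained embedding model):

```python
import math

def embed(text, dim=64):
    """Toy bag-of-words hashing embedder, unit-normalized. A production
    RAG pipeline would use a trained embedding model instead."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, corpus, k=2):
    """Return the k corpus passages most similar to the query by cosine
    similarity (vectors are already unit-normalized, so the dot product
    equals the cosine)."""
    q = embed(query)
    scored = sorted(corpus,
                    key=lambda doc: -sum(a * b for a, b in zip(q, embed(doc))))
    return scored[:k]
```

The retrieved passages are then injected into the agent's prompt as grounded context; the embedding and scoring steps are exactly the workloads where GPU parallelism pays off.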

From infrastructure to orchestration

Hardware provides the raw horsepower, but the DataRobot Agent Workforce Platform provides the ability to leverage that compute to build useful customer applications in a secure, governed manner. As organizations transition to autonomous agents, DataRobot provides runtime and build environments to fully utilize the GPU power.

Runtime

1/ Seamless, scalable, and cost-effective inferencing

2/ Embedded governance and monitoring in agents and apps

3/ Out-of-the-box security and identity

Build

1/ Comprehensive set of builder tools

2/ Extensive evaluation

3/ Embedded hooks to make deployment easy

Completing the stack with DataRobot

Hardware is the engine, and DataRobot’s Agent Workforce Platform makes it work for the business. While the NVIDIA RTX PRO 4500 provides the compute, DataRobot provides the platform to build and manage mission-critical agents with guardrails, observability, and governance.

By combining NVIDIA’s market-leading hardware with DataRobot’s end-to-end platform, organizations can finally transition from experimental AI to a governed, scalable agent workforce. Whether you are running on-premises today or looking toward a hybrid cloud future, this stack is the definitive blueprint for the AI-driven enterprise.

The post The foundation for a governed agent workforce: DataRobot and NVIDIA RTX PRO 4500 appeared first on DataRobot.

4D printing technology uses waste sulfur to enable self-actuating soft robots

A joint research team led by Dr. Dong-Gyun Kim of the Korea Research Institute of Chemical Technology (KRICT), Professor Jeong Jae Wie of Hanyang University, and Professor Yong Seok Kim of Sejong University report the world's first 4D printing technology based on sulfur-rich polymers that respond to heat, light, and magnetic fields. The study was published in Advanced Materials.

Graphene-based sensor to improve robot touch

Schematic showing the materials used in the sensor and the sensing array on a robotic manipulator. Figure from Multiscale-structured miniaturized 3D force sensors. Reproduced under a CC BY 4.0 licence.

Robots are becoming increasingly capable in vision and movement, yet touch remains one of their major weaknesses. Now, researchers have developed a miniature tactile sensor that could give robots something much closer to a human sense of touch.

The technology, developed by researchers at the University of Cambridge, is based on liquid metal composites and graphene – a two-dimensional form of carbon. The ‘skin’ allows robots to detect not just how hard they are pressing on an object, but also the direction of applied forces, whether an object is slipping, and even how rough a surface is, at a scale small enough to rival the spatial resolution of human fingertips. Their results are reported in the journal Nature Materials.

Human fingers rely on multiple types of mechanoreceptors to sense pressure, force, vibration, and texture simultaneously. Reproducing this level of multidimensional tactile perception in artificial systems is a significant challenge, especially in devices that are both small and durable enough for practical use.

“Most existing tactile sensors are either too bulky, too fragile, too complex to manufacture or unable to accurately distinguish between normal and tangential forces,” said Professor Tawfique Hasan from the Cambridge Graphene Centre, who led the research. “This has been a major barrier to achieving truly dexterous robotic manipulation.”

To overcome this, the research team developed a soft, flexible composite material, combining graphene sheets, deformable metal microdroplets, and nickel particles, embedded in a silicone matrix.

Inspired by the microstructures found in human skin, the researchers shaped the material into tiny pyramids, some as small as 200 micrometres across. These pyramid structures concentrate stress at their tips, enabling the sensor to detect extremely small forces while maintaining a wide measurement range.

The result is a tactile sensor sensitive enough to detect a grain of sand. Compared with existing flexible tactile sensors, the new device improves size and detection limits by roughly an order of magnitude.

The sensor can also distinguish shear forces from normal pressure, a capability that allows it to detect when an object begins to slip. By measuring signals from four electrodes beneath each pyramid, the sensor can mathematically reconstruct the full three-dimensional force vector in real time.
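The idea can be illustrated with a simple linear decomposition (our sketch of the principle; the paper's actual calibration will differ): with electrodes arranged north/south/east/west under a pyramid, the common-mode signal tracks normal force while opposing-pair differences track shear.

```python
def force_vector(s_n, s_s, s_e, s_w, k_normal=1.0, k_shear=1.0):
    """Map four electrode signals to an (Fx, Fy, Fz) estimate.
    The gains k_normal and k_shear stand in for a per-sensor calibration.

    Normal pressure compresses the pyramid and changes all four signals
    together (common mode); shear tilts the pyramid and changes opposing
    pairs differentially."""
    fz = k_normal * (s_n + s_s + s_e + s_w) / 4.0  # common mode -> normal force
    fx = k_shear * (s_e - s_w)                     # east-west imbalance -> x shear
    fy = k_shear * (s_n - s_s)                     # north-south imbalance -> y shear
    return fx, fy, fz
```

Equal signals on all four electrodes therefore read as pure normal pressure, while any imbalance between opposing electrodes shows up as a shear component, which is what lets the sensor flag incipient slip.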

In demonstrations, the team integrated the sensors into robotic grippers. The robots were able to grasp fragile objects, such as thin paper tubes, without crushing them. Unlike conventional force sensors, which rely on prior information about an object’s properties, the new system adapts in real time through slip detection.

At even smaller scales, microsensor arrays could identify the mass, geometry, and material density of tiny metal spheres by analysing force magnitude and direction. This opens the door to applications in minimally invasive surgery or microrobotics, where conventional force sensors are far too large.

Beyond robotics, the technology could have significant implications for prosthetics. Advanced artificial limbs increasingly rely on tactile feedback to provide users with a sense of touch. Highly sensitive, miniaturised 3D force sensors could enable more natural interactions with objects, improving control, safety, and user confidence.

“Our approach shows that bulky mechanical structures or complex optics are not required to achieve high-resolution 3D tactile sensing,” said lead author Dr Guolin Yun, a former Royal Society Newton International Fellow at Cambridge, and now Professor at the University of Science and Technology of China. “By combining smart materials with skin-inspired structures, we achieve performance that comes remarkably close to human touch.”

Looking ahead, the researchers believe the sensors could be miniaturised even further, potentially below 50 micrometres, approaching the density of mechanoreceptors in human skin. Future versions may also integrate temperature and humidity sensing, moving closer to a fully multimodal artificial skin.

As robots increasingly move out of controlled factory environments and into homes, hospitals, and unpredictable real-world settings, such advances in touch could be transformative — allowing machines not just to see and act, but to truly feel.

A patent application has been filed through Cambridge Enterprise, the University’s innovation arm. The research was supported by the Royal Society, the Henry Royce Institute, and the Advanced Research and Invention Agency (ARIA). Tawfique Hasan is a Fellow of Churchill College, Cambridge.

Reference

Multiscale-structured miniaturized 3D force sensors, Guolin Yun, Zesheng Chen, Zhuo Chen, Jinrui Chen, Binghan Zhou, Mingfei Xiao, Michael Stevens, Manish Chhowalla & Tawfique Hasan, Nature Materials (2026).
