Robot Talk Episode 149 – Robot safety and security, with Krystal Mattich
Claire chatted to Krystal Mattich from Brain Corp about trustworthy autonomous robots in public spaces.
Krystal Mattich leads global data governance, system security, and privacy compliance for Brain Corp, the world’s leading autonomy platform for commercial robotics. As Senior Director of Security, Privacy, and Risk, she is the architect of the privacy-first infrastructure that powers over 40,000 BrainOS®-enabled robots across retail, airports, education, and logistics. Krystal played a central role in launching Brain Corp’s public-facing Trust Center, reinforcing the company’s commitment to data transparency, GDPR compliance, and responsible AI.
SAP AI Integration Services: Connecting Your SAP Environment to Enterprise AI
Where Most SAP AI Projects Actually Break
An enterprise spends three months selecting an AI vendor, six weeks scoping the use case, and then hits a wall: the AI system and the SAP environment are not talking to each other the way anyone expected. Data pipelines stall. API authentication fails in the production environment. The model produces outputs that make no sense because it is reading the wrong SAP table.
SAP AI integration is where most enterprise AI programs lose momentum. Not in the model selection. Not in the use case design. In the connection layer between the AI capability and the SAP data and workflows it needs to be useful.
USM Business Systems is a specialized SAP AI delivery partner headquartered in Ashburn, VA. We integrate enterprise AI systems — LLMs, agentic frameworks, predictive models — into live SAP environments for manufacturers, pharma companies, logistics operators, and the system integrators that serve them.
What SAP AI Integration Actually Covers
SAP AI integration is not a single service. It spans five distinct layers, and the difficulty of each depends on your SAP landscape, your data maturity, and the AI capability you are connecting.
- Data Layer Integration
Before any AI system can reason accurately about your SAP environment, it needs a clean, structured feed of the right data. This typically means connecting to SAP Datasphere (SAP’s data fabric), SAP HANA views, or extracting structured data from S/4HANA tables using OData APIs or SAP Data Services.
The most common failure point here is master data quality. AI models amplify whatever is in your data. If your material master has inconsistent UoM coding across plants, a demand forecasting model will surface that inconsistency as erratic predictions.
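The data-readiness check this implies can be sketched in a few lines. The snippet below is illustrative only: the field names (`material`, `plant`, `base_uom`) are stand-ins for what an OData extract of material master data (MARA/MARC) would actually return.

```python
from collections import defaultdict

def find_uom_conflicts(material_master):
    """Flag materials whose base unit of measure differs across plants."""
    units = defaultdict(set)
    for row in material_master:
        units[row["material"]].add(row["base_uom"])
    return {m: sorted(u) for m, u in units.items() if len(u) > 1}

rows = [
    {"material": "MAT-100", "plant": "1000", "base_uom": "KG"},
    {"material": "MAT-100", "plant": "2000", "base_uom": "LB"},  # inconsistent
    {"material": "MAT-200", "plant": "1000", "base_uom": "EA"},
    {"material": "MAT-200", "plant": "2000", "base_uom": "EA"},
]
conflicts = find_uom_conflicts(rows)  # {'MAT-100': ['KG', 'LB']}
```

Running a check like this before any model training surfaces exactly the inconsistencies that would otherwise show up later as erratic predictions.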
- API and Middleware Integration
Most enterprise AI integration with SAP runs through SAP BTP Integration Suite — SAP’s managed integration platform that handles API management, protocol translation, and event streaming between SAP and external systems. Engineers who have not worked with BTP Integration Suite before underestimate the configuration depth it requires, particularly for high-volume transactional workflows.
- AI Runtime Integration
SAP AI Core is the managed runtime where enterprise AI models are deployed, versioned, and governed inside the SAP ecosystem. Integrating an external LLM or a custom predictive model into SAP AI Core requires specific API patterns, credential management, and lifecycle configuration that differs from deploying the same model in AWS or Azure. SAP AI Core engineers — not general ML engineers — are the right resource here.
- Workflow and Process Integration
An AI capability that produces a recommendation but cannot act on it is a dashboard, not an integration. Real SAP AI integration connects the AI output back into SAP workflows: a quality prediction that triggers a production hold in SAP PP, a demand signal that adjusts a replenishment order in SAP IBP, a document analysis result that routes an invoice exception in SAP Finance.
- User Experience Integration
For AI capabilities that surface to end users inside SAP, integration with SAP Fiori and SAP Joule determines whether the capability gets adopted. Engineers who understand both the AI layer and the SAP UX layer are required. These are not the same people.
What is the fastest path to a production SAP AI integration?
The fastest path starts with a single, well-scoped workflow that has clean source data in SAP. A supplier performance monitoring integration or an invoice exception routing integration can reach production in 8-12 weeks when the data is ready. Broad integrations that touch multiple SAP modules simultaneously take 4-6 months minimum.
Can we integrate a third-party LLM — like GPT-4 or Claude — directly into SAP?
Yes. SAP AI Core supports external model connections, and SAP BTP Integration Suite handles the API management layer. The integration work involves authentication, data formatting, latency management, and governance configuration. This is a well-established integration pattern for document analysis, NLP search, and content generation use cases.
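As a rough sketch of that integration work, the helper below assembles an OpenAI-style chat request with a bearer token. The endpoint URL, model name, and auth scheme are placeholders; in practice these values come from the destination and credential configuration managed in BTP.

```python
import json

def build_llm_request(prompt, *, model="gpt-4",
                      base_url="https://llm.example.com/v1",
                      api_key="<from-btp-destination>"):
    """Assemble the HTTP request an integration flow would forward
    to an external LLM (hypothetical endpoint and credentials)."""
    url = f"{base_url}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_llm_request("Classify this invoice exception: ...")
```

The data formatting and latency handling mentioned above wrap around exactly this kind of call: payload shaping on the way in, timeout and retry policy on the way out.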
The Three Integration Patterns We See Most Often
Pattern 1: NLP Search on SAP Data
Enterprises add a natural language search layer on top of SAP Datasphere or HANA, allowing users to query supply chain, financial, or operational data in plain language rather than through SAP transaction codes. According to Forrester’s 2024 Enterprise AI Survey, 61% of SAP users report that data accessibility is the primary barrier to AI adoption. NLP search directly addresses this.
The integration connects an LLM to SAP data views, with a retrieval layer that fetches relevant records and passes them to the model as context. The model returns an answer in plain language. The SAP Fiori interface surfaces the result. This pattern reaches production in 6-10 weeks for a defined data domain.
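A minimal sketch of that retrieval layer follows, with naive keyword scoring standing in for a production embedding search over Datasphere or HANA views; the record fields are invented for illustration.

```python
def retrieve(records, query, k=3):
    """Rank records by keyword overlap with the query and return top-k.
    A real retrieval layer would use embeddings; this keeps the sketch
    self-contained."""
    terms = set(query.lower().split())
    def score(rec):
        text = " ".join(str(v) for v in rec.values()).lower()
        return sum(term in text for term in terms)
    return sorted(records, key=score, reverse=True)[:k]

def build_prompt(query, hits):
    """Pack the retrieved records into the LLM context."""
    context = "\n".join(str(h) for h in hits)
    return f"Answer using only this SAP data:\n{context}\n\nQuestion: {query}"

records = [
    {"supplier": "Acme", "on_time_pct": 87, "region": "EMEA"},
    {"supplier": "Globex", "on_time_pct": 96, "region": "APAC"},
]
hits = retrieve(records, "on-time performance for acme", k=1)
prompt = build_prompt("How is Acme performing?", hits)
```

The LLM's plain-language answer to this prompt is what the Fiori layer then surfaces to the user.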
Pattern 2: Document AI on SAP-Connected Document Flows
Enterprises processing high volumes of documents — invoices, purchase orders, quality certificates, compliance filings — integrate document AI to extract, classify, and route content automatically. The integration reads documents from SAP Document Management or external repositories, processes them through a document AI model, and writes the structured output back to the relevant SAP object.
Pharma and life sciences companies use this pattern for batch record processing and supplier qualification documents. Logistics companies use it for freight invoice reconciliation. The accuracy rate on standard document types typically reaches 90%+ within the first 30 days of production operation.
Pattern 3: Predictive Models on SAP Operational Data
Predictive models trained on historical SAP transaction data — demand history, equipment sensor readings, supplier delivery records — produce forward-looking signals that feed back into SAP planning processes. A demand forecasting model reads S/4HANA sales history and external market signals, produces a forecast, and updates SAP IBP automatically. A predictive maintenance model reads equipment telemetry and writes a maintenance recommendation to SAP PM.
This pattern has the longest data preparation phase — 4-8 weeks to clean and structure SAP historical data — but produces the highest sustained value once in production.
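As an illustration of the pattern (not of any specific SAP API), here is a toy rolling-mean forecaster over monthly sales history; in a real integration the input would come from S/4HANA extracts and each output value would be written back as an IBP key-figure update.

```python
def moving_average_forecast(history, window=3, horizon=2):
    """Roll the mean of the last `window` periods forward `horizon` times,
    feeding each forecast back in before computing the next."""
    series = list(history)
    forecasts = []
    for _ in range(horizon):
        nxt = sum(series[-window:]) / window
        forecasts.append(round(nxt, 2))
        series.append(nxt)
    return forecasts

monthly_sales = [120, 130, 125, 140, 135]  # units sold, illustrative history
forecast = moving_average_forecast(monthly_sales)  # [133.33, 136.11]
```

A production model would of course be far richer, but the integration shape is the same: read history, produce forward periods, write them back into the planning process.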
What to Look for When Evaluating SAP AI Integration Partners
- SAP AI Core and BTP Integration Suite experience, specifically. Ask for examples of integrations built on these platforms, not SAP integrations in general.
- Data readiness assessment as part of the scoping process. Partners who jump straight to architecture without assessing your SAP master data quality are skipping the step that determines whether the integration will work.
- A clear governance model. Enterprise SAP environments are audited. Any AI integration needs logging, version control, human override capability, and a rollback procedure.
- Engineers who have worked in both the AI layer and the SAP layer. The rarest and most valuable profile is an engineer who understands SAP data structures and modern AI frameworks simultaneously. Firms that staff these roles separately add significant coordination overhead.
Why USM Business Systems?
USM Business Systems is a CMMI Level 3, Oracle Gold Partner AI and IT services firm headquartered in Ashburn, VA. With 1,000+ engineers, 2,000+ delivered applications, and 27 years of enterprise delivery experience, USM specializes in AI implementation for supply chain, pharma, manufacturing, and SAP environments. Our SAP AI practice places specialized engineers inside enterprise programs within days — on contract, as dedicated delivery pods, or on a project basis.
Ready to put SAP AI into production? Book a 30-minute scoping call with our SAP AI team at usmsystems.com.
FAQ
How does SAP BTP Integration Suite differ from standard API middleware?
BTP Integration Suite is SAP’s managed platform for enterprise integration — it handles API management, event streaming, protocol translation, and pre-built connectors to SAP and third-party systems. It also integrates directly with SAP AI Core, which is what makes it the preferred integration layer for SAP AI programs.
What data from SAP can be used to train AI models?
Historical transactional data from S/4HANA, master data from SAP MDG, sensor data connected through SAP IoT, and document data from SAP Document Management are all commonly used. The key requirement is data governance — understanding what data can leave SAP boundaries and what must stay in the SAP environment.
How long does a SAP AI integration project take from scoping to production?
A single, well-defined integration — one workflow, one AI capability, one SAP module — typically takes 8-14 weeks from scoping to production deployment. Multi-module integrations or programs that require significant data preparation first run 4-6 months.
What is SAP Datasphere and why does it matter for AI integration?
SAP Datasphere is SAP’s data fabric platform — it creates a unified, governed data layer across SAP and non-SAP sources. For AI integration, it is important because it gives AI models a clean, semantically structured view of enterprise data without requiring direct access to S/4HANA tables.
Can AI integrations be built incrementally, or do they require a full platform build first?
Incremental is the right approach for most enterprises. A first integration scoped to one workflow proves the pattern, builds internal confidence, and reveals integration requirements you did not anticipate. Enterprises that try to build a complete AI integration platform before demonstrating value rarely reach production.
DataRobot + Nebius: An enterprise-ready AI Factory optimized for agents
DataRobot and Nebius have partnered to introduce AI Factory for Enterprises, a joint solution designed to accelerate the development, operation, and governance of AI agents. This platform allows agents to reach production in days, rather than months.
AI Factory for Enterprises provides a scalable, cost-effective, governed, and managed enterprise-grade platform for agents. It achieves this by combining DataRobot’s Agent Workforce Platform (the most comprehensive, flexible, secure, and enterprise-ready agent lifecycle management platform) with Nebius’ purpose-built cloud infrastructure for AI.
Our partnership
Nebius: The purpose-built cloud for AI
The challenge today is that general-purpose cloud platforms often introduce unpredictable performance, latency, and a “virtualization tax” that cripples continuous, production-scale AI.
To solve this, DataRobot is leveraging Nebius AI Cloud, a GPU cloud platform engineered from the hardware layer up specifically to deliver the bare-metal performance, low latency, and predictable throughput essential for sustained AI training and inference. This eliminates the “noisy-neighbor” problem and ensures your most demanding agent workloads run reliably, delivering predictable outcomes and transparent costs.
Nebius’ Token Factory augments the offering by providing a pay-per-token model access layer for key open-source models, which customers can use during agent building and experimentation, and then deploy the same models with DataRobot when running the agents in production.
DataRobot: Seamlessly build, operate, and govern agents at scale
DataRobot’s Agent Workforce Platform is the most comprehensive Agent Lifecycle Management platform that enables customers to build, operate, and govern their agents seamlessly.
The platform offers two primary components:
- An enterprise-grade, scalable, reliable, and cost-effective runtime for models and agents, featuring out-of-the-box governance and monitoring.
- An easy-to-use agent builder environment that enables customers to seamlessly build production-ready agents in hours, rather than days or months.
Comprehensive enterprise-grade runtime capabilities
- Scalable, cost-effective runtime: Features single-click deployment of 50+ NIMs and Hugging Face models with autoscaling, deployment of any containerized artifact via the Workload API (both with built-in monitoring and governance), optimized utilization through endpoint-level multi-tenancy (token quotas), and high-availability inferencing. With a single Workload API command, you can deploy containerized agents, applications, or composite systems that combine LLMs, domain-specific libraries such as PhysicsNemo or cuOpt, and your own proprietary models.
- Governance and monitoring: Provides the industry’s most comprehensive out-of-the-box metrics (behavioral and operational), tracing capabilities for agent execution paths, full lineage/versioning with audit logging, and industry-leading governance against Security, Operational, and Compliance Risks with real-time intervention and automated reporting.
- Security and identity: Includes Unified Identity and Access Management with OAuth 2.0, granular RBAC for least-privilege access across resources, and secure secret management with an encrypted vault.
Comprehensive enterprise-grade agent building capabilities
- Builder tools: Support for popular frameworks (LangChain, CrewAI, LlamaIndex, NVIDIA NeMo Agent Toolkit) and out-of-the-box support for MCP, authentication, managed RAG, and data connectors. Nebius Token Factory integration enables on-demand model use during the build.
- Evaluation & tracing: Industry-leading evaluation with LLM as a Judge, Human-in-the-Loop, Playground/API, and agent tracing. Offers comprehensive behavioral (e.g., task adherence) and operational (latency, cost) metrics, plus custom metric support.
- Out-of-the box production readiness: Enterprise hooks abstract away infrastructure, security, authentication, and data complexity. Agents deploy with a single command; DataRobot handles component deployment with embedded monitoring and governance at both the full agent and individual component/tool levels.
Build and deploy using the AI Factory for Enterprises
Want to take agents you have built elsewhere, or even open-source, industry-specific models, and deploy them in a scalable, secure, and governed manner using the AI Factory? Or would you like to build agents without worrying about the heavy lifting of making them production-ready? This section shows you how to do both.
1. DataRobot STS on Nebius
DataRobot Single-Tenant SaaS (STS) is deployed on Nebius Managed Kubernetes and can be backed by GPU-enabled node groups, high-performance networking, and storage options appropriate for AI workloads. For DataRobot deployments, Nebius is a high-performance, low-cost environment for agent workloads. Dedicated NVIDIA clusters (H100, H200, B200, B300, GB200 NVL72, GB300 NVL72) enable efficient tensor parallelism and KV-cache-heavy serving patterns, while InfiniBand RDMA supports high-throughput cross-node scaling. The DataRobot/Nebius partnership provides a robust AI infrastructure:
- Managed Kubernetes with GPU-aware scheduling simplifies STS installation and upgrades, pre-configured with NVIDIA operators.
- Dedicated GPU worker pools (H100, B200, etc.) isolate demanding STS services (LLM inference, vector databases) from generic CPU-only workloads.
- High-throughput networking and storage support large model artifacts, embeddings, and telemetry for continuous evaluation and logging.
- Security and tenancy are maintained: STS uses dedicated tenant boundaries, while Nebius IAM and network policies meet enterprise requirements.
- Built-in node health monitoring proactively identifies and addresses GPU/network issues for stable clusters and smarter maintenance.
2. Governed, monitored model inference deployment
The challenge with GenAI isn’t getting a model running; it’s getting it running with the same monitoring, governance, and security your organization expects. DataRobot’s NVIDIA NIM integration deploys NIM containers from NGC onto Nebius GPUs in four clicks:
- In Registry > Models, click Import from NVIDIA NGC and browse the NIM gallery.
- Select the model, review the NGC model card, and choose a performance profile.
- Review the GPU resource bundle automatically recommended based on the NIM’s requirements.
- Click Deploy, select the Serverless environment, and deploy the model.

Out-of-the-box observability and governance for deployed models
- Automated monitoring & risk assessment: Leverage the NeMo Evaluator integration for model faithfulness, groundedness, and relevance scoring. Automatically scan for Bias, PII, and Prompt Injection risks.
- Real-time moderation & deep observability: DataRobot offers a platform for NIM moderation and monitoring. Deploy out-of-the-box guards for risks like PII, Prompt Injection, Toxicity, and Content Safety. OTel-compliant monitoring provides visibility into NIM operational health, quality, safety, and resource use.
- Enterprise governance & compliance: DataRobot provides the administrative layer for safe, organization-wide scaling. It automatically compiles monitoring and evaluation data into compliance documentation, mapping performance to regulatory standards for audits and reporting.
3. Agent deployment using the Workload API
An MCP tool server, a LangGraph agent, a FastAPI backend, or a composite system combining LLMs with domain-specific libraries such as cuOpt or PhysicsNemo: these are containers, not models, and they need their own path to production. The Workload API gives you a governed endpoint with autoscaling, monitoring, and RBAC in a single API call.
curl -X POST "${DATAROBOT_API_ENDPOINT}/workloads/" \
  -H "Authorization: Bearer ${DATAROBOT_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "agent-service",
    "importance": "HIGH",
    "artifact": {
      "name": "agent-service-v1",
      "status": "locked",
      "spec": {
        "containerGroups": [{
          "containers": [{
            "imageUri": "your-registry/agent-service:latest",
            "port": 8080,
            "primary": true,
            "entrypoint": ["python", "server.py"],
            "resourceRequest": {"cpu": 1, "memory": 536870912},
            "environmentVars": [],
            "readinessProbe": {"path": "/readyz", "port": 8080}
          }]
        }]
      }
    },
    "runtime": {
      "replicaCount": 2,
      "autoscaling": {
        "enabled": true,
        "policies": [{
          "scalingMetric": "inferenceQueueDepth",
          "target": 70,
          "minCount": 1,
          "maxCount": 5
        }]
      }
    }
  }'
The agent is immediately accessible at /endpoints/workloads/{id}/ with monitoring, RBAC, audit trails, and autoscaling.
Out-of-the-box observability and governance for deployed agentic workloads
DataRobot drives the AI Factory by providing robust governance and observability for agentic workloads:
- Observability (OTel standard): DataRobot standardizes on OpenTelemetry (OTel) logs, metrics, and traces to ensure consistent, high-fidelity telemetry for all deployed entities. This telemetry integrates seamlessly with existing enterprise observability stacks, allowing users to monitor critical dimensions, including:
- Agent-specific metrics: Such as Agent Task Adherence and Agent Task Accuracy.
- Operational health and resource utilization.
- Tracing and Logging: OTel-compliant tracing interweaves container-level logs with execution spans to simplify root cause analysis within complex logic loops.
- Governance and Access Control: DataRobot enforces enterprise-wide authentication and authorization protocols across deployed agents using OAuth-based access control combined with Role-Based Access Control (RBAC).
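To make the span model concrete, here is a stdlib-only toy that records named, nested, timed spans the way an OTel tracer does. A real deployment would use the OpenTelemetry SDK rather than this stand-in; the point is only what the telemetry captures (span name, parent, duration).

```python
import time
from contextlib import contextmanager

TRACE = []   # finished spans as (name, parent, duration_s)
_STACK = []  # names of currently open spans

@contextmanager
def span(name):
    """Record a named, nested, timed span (toy stand-in for an OTel tracer)."""
    parent = _STACK[-1] if _STACK else None
    _STACK.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        _STACK.pop()
        TRACE.append((name, parent, time.perf_counter() - start))

# A typical agent execution path: tool call and LLM call nested under the run.
with span("agent.run"):
    with span("tool.search"):
        pass
    with span("llm.generate"):
        pass
```

Interweaving container logs with spans like these is what makes root cause analysis tractable inside complex agent loops.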
4. Enterprise-ready agent building capabilities
A comprehensive toolkit for every builder with the DataRobot Agent Workforce Platform on Nebius
The DataRobot Agent Workforce Platform helps developers build agents faster by extending existing flows. Our builder kits support complex multi-agent workflows and single-purpose bots, accommodating various tools and environments.
Our kit includes native support for:
- Open source frameworks: Native integration with LangChain, CrewAI, and LlamaIndex.
- NAT (Node Architecture Tooling): DataRobot’s framework for modular, node-based agent design.
- Advanced standards: Skills, MCP (Model Context Protocol) for data/tool interaction, and robust Prompt Management for versioning/optimization.
The Nebius advantage: DataRobot’s Agent Workforce Platform integrates with the Nebius Token Factory, allowing developers to consume models like Nemotron 3 (and any open source model) on a pay-per-token basis during the experimental phase. This enables rapid, low-cost iteration without heavy infrastructure provisioning. Once perfected, agents can seamlessly transition from the Token Factory to a dedicated deployment (e.g., NVIDIA NIM) for enterprise scale and low latency.
Getting Started: Building is simple using our Node Architecture Tooling (NAT). You define agent nodes as structured, testable steps in YAML.
First, connect the LLM you deployed in the Nebius Token Factory to DataRobot. Then add the DataRobot deployment to your agentic starter application using the DataRobot CLI.
functions:
  planner:
    _type: chat_completion
    llm_name: datarobot_llm
    system_prompt: |
      You are a content planner. You create brief, structured outlines for blog articles.
      You identify the most important points and cite relevant sources. Keep it simple and to the point -
      this is just an outline for the writer.
      Create a simple outline with:
      1. 10-15 key points or facts (bullet points only, no paragraphs)
      2. 2-3 relevant sources or references
      3. A brief suggested structure (intro, 2-3 sections, conclusion)
      Do NOT write paragraphs or detailed explanations. Just provide a focused list.
  writer:
    _type: chat_completion
    llm_name: datarobot_llm
    system_prompt: |
      You are a content writer working with a planner colleague.
      You write opinion pieces based on the planner's outline and context. You provide objective and
      impartial insights backed by the planner's information. You acknowledge when your statements are
      opinions versus objective facts.
      1. Use the content plan to craft a compelling blog post.
      2. Structure with an engaging introduction, insightful body, and summarizing conclusion.
      3. Sections/Subtitles are properly named in an engaging manner.
      4. CRITICAL: Keep the total output under 500 words. Each section should have 1-2 brief paragraphs.
      Write in markdown format, ready for publication.
  content_writer_pipeline:
    _type: sequential_executor
    tool_list: [planner, writer]
    description: A tool that plans and writes content on the requested topic.

function_groups:
  mcp_tools:
    _type: datarobot_mcp_client

authentication:
  datarobot_mcp_auth:
    _type: datarobot_mcp_auth

llms:
  datarobot_llm:
    _type: datarobot-llm-component

workflow:
  _type: tool_calling_agent
  llm_name: datarobot_llm
  tool_names:
    - content_writer_pipeline
    - mcp_tools
  return_direct:
    - content_writer_pipeline
  system_prompt: |
    Choose and call a tool to answer the query.
Evaluation capabilities: The “how-to”
Building is only half the battle; knowing if it works is the other. Our evaluation framework moves beyond simple “thumbs up/down” and into data-driven validation.
To evaluate your agent, you can:
- Define a test suite: Upload a “golden dataset” of expected queries and ground-truth answers.
- Automated metrics: Run your agent against built-in evaluators for faithfulness, relevance, and toxicity.
- LLM-as-a-Judge: Use a “critic” model to score agent responses based on custom rubrics (e.g., “Did the agent follow the brand’s tone of voice?”).
- Side-by-side comparison: Run two versions of your agent (e.g., one using NAT and one using LangChain) against the same dataset to compare cost, latency, and accuracy in a single dashboard.
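The golden-dataset step can be sketched as follows, with a trivial containment scorer standing in for the built-in faithfulness and LLM-as-a-Judge evaluators; the agent here is a stub so the sketch stays self-contained.

```python
def evaluate(agent_fn, golden):
    """Score an agent against a golden dataset: a response passes if it
    mentions the expected answer (trivial scorer, for illustration only)."""
    results = [case["expected"].lower() in agent_fn(case["query"]).lower()
               for case in golden]
    return sum(results) / len(results)

golden = [
    {"query": "What is the capital of France?", "expected": "Paris"},
    {"query": "What is 2 + 2?", "expected": "4"},
]

def stub_agent(query):
    """Stand-in for a deployed agent endpoint."""
    return "Paris is the capital of France." if "France" in query else "The answer is 4."

accuracy = evaluate(stub_agent, golden)  # 1.0 for this stub
```

Swapping the scorer for a critic model and running two agent versions over the same `golden` set is exactly the side-by-side comparison described above.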
Enterprise hooks: Deployment-ready from day one
We automate the “enterprise tax” (security, logging, auth) that separates notebooks from production services by embedding build “hooks”:
- Observability: Automatic OTel-compliant tracing captures every step without boilerplate.
- Identity & auth: Built-in OAuth 2.0 and Service Accounts ensure agents use the user’s actual permissions when calling internal APIs (CRM, ERP), maintaining strict security.
- Production hand-off: Deployment packages the environment, components, and auth hooks into a secure, governed container, ensuring a consistent agent from dev to production. Complex agents are automatically decomposed into orchestrated containers for granular monitoring while deployed as a single pipeline entity.
Governed, scalable inference
The DataRobot and Nebius partnership delivers a validated, enterprise-ready deployment stack for agentic AI built on NVIDIA accelerated computing. For teams moving beyond experimentation, it provides a governed and scalable path to sustained production inference.
Nebius and DataRobot will be showcasing this solution at NVIDIA GTC 2026, taking place March 16-19 in San Jose, California.
Read the executive summary blog
Connect with DataRobot (booth #104) and Nebius (booth #713) at GTC 2026
A multi-armed robot for assisting with agricultural tasks
Humans often use one hand to grasp the branch for better accessibility, while the other hand is used to perform primary tasks like (a) branch pruning and (b) hand pollination of the flower. (c) An overview of the approach used by Madhav and colleagues, where one robot manipulates the branch to move the flower to the field of view of another robot by planning a force-aware path. Figure from Force Aware Branch Manipulation To Assist Agricultural Tasks.
In their paper Force Aware Branch Manipulation To Assist Agricultural Tasks, which was presented at IROS 2025, Madhav Rijal, Rashik Shrestha, Trevor Smith, and Yu Gu proposed a methodology to safely manipulate branches to aid various agricultural tasks. We interviewed Madhav to find out more.
Could you give us an overview of the problem you were addressing in the paper?
Madhav Rijal (MR): Our work is motivated by StickBug [1], a multi-armed robotic system for precision pollination in greenhouse environments. One of the main challenges StickBug faces is that many flowers are partially or fully hidden within the plant canopy, making them difficult to detect and reach directly for pollination. This challenge also arises in other agricultural tasks, such as fruit harvesting, where target fruits may be occluded by surrounding branches and foliage.
To address this, we study how one robot arm can safely manipulate branches so that these occluded flowers can be brought into the field of view or reachable workspace of another robot arm. This is a challenging manipulation problem because plant branches are deformable, fragile, and vary significantly from one branch to another. In addition, unlike pick-and-place tasks, where objects move freely in space, branches remain attached to the plant, which imposes additional motion constraints during manipulation. If the robot moves a branch without accounting for these constraints and safety limits, it can apply excessive force and damage the branch.
So, the core problem we addressed in this paper is: how can a robot safely manipulate branches to reveal hidden flowers while remaining aware of interaction forces and minimizing damage?
How did your approach go about tackling the problem?
MR: Our approach [2] combines motion planning that accounts for branch constraints with real-time force feedback.
First, we generate a feasible manipulation path using an RRT* (rapidly exploring random tree) algorithm-based planner in the workspace. The planner respects the geometric constraints of the branch and the task requirements. We model branches as deformable linear objects and use a geometric heuristic to identify configurations that are safer to manipulate.
Then, during execution, we monitor the interaction force using a force sensor mounted on the manipulator. If the measured force exceeds a predefined safe threshold, the system does not continue along the same path. Instead, it re-plans the motion online and searches for an alternative path or goal configuration that can reduce branch stress while still achieving the task.
So, the key idea is that the robot does not plan only for reachability. It also adapts its motion based on the physical response of the branch during manipulation.
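The replanning loop can be sketched schematically. This is not the authors' implementation: a straight-line stub stands in for the RRT* planner, a linear stiffness model stands in for the force sensor, and all numbers are illustrative. When the simulated force exceeds the 40 N threshold, the loop backs the goal off toward the last safe pose, so the robot ends at the furthest pose that respects the limit.

```python
def plan(start, goal, steps=10):
    """Stub planner: straight-line interpolation stands in for RRT*."""
    return [start + (goal - start) * i / steps for i in range(1, steps + 1)]

def branch_force(displacement):
    """Toy linear-stiffness branch model (N per unit displacement)."""
    return 12.0 * abs(displacement)

def manipulate(start, goal, f_max=40.0, max_replans=5):
    """Follow the planned path; on a force violation, replan toward a
    nearer goal (80% of the remaining displacement) to reduce branch stress."""
    target, replans, pos = goal, 0, start
    while True:
        for p in plan(pos, target):
            if branch_force(p - start) > f_max:
                replans += 1
                if replans > max_replans:
                    return pos, replans  # give up at the furthest safe pose
                target = pos + 0.8 * (target - pos)
                break
            pos = p
        else:
            return pos, replans  # reached the (possibly adjusted) goal safely

pos, replans = manipulate(start=0.0, goal=5.0)
```

With a stiff branch and a distant goal, the loop replans several times and settles just inside the force limit, mirroring how the real system trades goal completion against branch safety.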
Madhav with the multi-armed pollination robot, StickBug.
What are the main contributions of your work?
MR: The main contributions of our work are:
- A geometric heuristic model for branch manipulation that does not require branch-specific parameter tuning or physical probing.
- A motion planning strategy for branch manipulation that respects both workspace and branch constraints, using the geometric heuristic to guide RRT* and incorporating online replanning based on force feedback.
- An experimental demonstration showing that force feedback-based motion planning can protect branches from excessive force during manipulation.
- Generalization across different branch types, since the method relies primarily on branch geometry and can adapt online to compensate for model inaccuracies.
Could you talk about the experiments that you carried out to test the approach?
MR: We evaluated the proposed method through a set of branch manipulation experiments using five different starting poses, all targeting a common goal region. Each configuration was tested 10 times, resulting in a total of 50 trials. A trial was considered successful if the robot brought the grasp point to within 5 cm of the goal point. For all trials, the planning time limit was set to 400 seconds, and the allowable interaction force range was −40 N to 40 N. Across the 50 trials, 39 were successful and 11 failed, corresponding to a success rate of about 78%. The average number of replanning attempts across all scenarios was 20.
In terms of force reduction, the results show a clear progression in safety. Constraint-aware planning reduced the manipulation force from above 100 N to below 60 N. Building on this, online force-aware replanning further reduced the force from about 60 N to below the desired 40 N threshold. This indicates that safety awareness through geometric heuristics, which model branches as deformable linear objects, together with force-aware online replanning, can effectively lower interaction forces during manipulation.
Overall, the experiments demonstrate that the proposed framework enables safer branch manipulation while maintaining task feasibility. By combining branch-constraint-aware planning with real-time force feedback, the robot can adapt its motion to reduce excessive force and minimize the risk of branch damage. These findings highlight the value of force-aware planning for practical robotic manipulation in agricultural environments.
Do you have plans to further extend this work?
MR: Yes, there are several directions for extending this work.
One current limitation is the need to define a safe force threshold in advance. In practice, different types of branches require different force limits for safe manipulation. A key direction for future work is to learn or estimate safe force thresholds automatically from branch geometry or visual cues.
Another extension is to improve grasp-point selection. Instead of only replanning after grasping, the system could also reason about the most suitable grasp point beforehand so that the required manipulation force is reduced from the start.
We are also interested in designing a compliant gripper with integrated force sensing that is better suited for manipulating delicate branches. In the longer term, we plan to integrate this method into a multi-arm agricultural robot, where one arm manipulates the branch and another performs pollination, pruning, or harvesting.
Overall, this work advances the development of agricultural robots that can actively manipulate branches to support tasks such as harvesting, pruning, and pollination. By exposing fruits, cut points, and hidden flowers within the canopy, this capability can help overcome key barriers to the broader adoption of robot-assisted agricultural technologies.
References
[1] Smith, Trevor, Madhav Rijal, Christopher Tatsch, R. Michael Butts, Jared Beard, R. Tyler Cook, Andy Chu, Jason Gross, and Yu Gu. Design of Stickbug: a six-armed precision pollination robot. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 69-75. IEEE, 2024.
[2] Rijal, Madhav, Rashik Shrestha, Trevor Smith, and Yu Gu. Force Aware Branch Manipulation To Assist Agricultural Tasks. In 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1217-1222. IEEE, 2025.
About Madhav
Madhav Rijal is a Ph.D. candidate in Mechanical Engineering at West Virginia University working in agricultural robotics. His research combines motion planning, optimization, multi-agent collaboration and distributed decision making to develop robotic systems for precision pollination and other plant-interaction tasks. His current work focuses on branch manipulation and safe robot operation in agricultural environments.