Page 4 of 606

Digital twins to rescue robots: What faster 3D point cloud processing enables

What if technology, such as self-driving cars, drones, or intelligent navigation systems, could understand the world the way we do—not just seeing shapes, but recognizing meaning? A person waiting at a crosswalk, a bicycle left on the pavement, or a dog running across a yard—for us, these distinctions are instant. For systems that rely on data, they have long been a challenge.

Video-based AI gives robots a visual imagination

In a major step toward more adaptable and intuitive machines, Kempner Institute Investigator Yilun Du and his collaborators have unveiled a new kind of artificial intelligence system that lets robots "envision" their actions before carrying them out. The system, which uses video to help robots imagine what might happen next, could transform how robots navigate and interact with the physical world.

AI system learns to prevent warehouse robot traffic jams, boosting throughput 25%

Inside a giant autonomous warehouse, hundreds of robots dart down aisles as they collect and distribute items to fulfill a steady stream of customer orders. In this busy environment, even small traffic jams or minor collisions can snowball into massive slowdowns. To avoid such an avalanche of inefficiencies, researchers from MIT and the tech firm Symbotic developed a new method that automatically keeps a fleet of robots moving smoothly.

Bat-inspired ultrasound helps palm-sized drones navigate fog and smoke

A team led by Worcester Polytechnic Institute (WPI) researcher Nitin J. Sanket has shown that ultrasound sensors and a form of artificial intelligence (AI) can enable palm-sized aerial robots to navigate with limited power and computation through fog, smoke, and other challenging conditions during search-and-rescue operations.

Deepfake X-rays are so real even doctors can’t tell the difference

Deepfake X-rays created by AI are now convincing enough to fool both doctors and AI models. In tests, radiologists had limited success identifying fake images, especially when they didn't know fakes were in the mix. This opens the door to risks like fraudulent medical claims and tampered diagnoses. Experts say stronger safeguards and detection tools are critical as the technology advances.

The DevOps guide to governing and managing agentic AI at scale

What do autopilot and enterprise agentic AI have in common? Both can operate autonomously. Both require a human to set the rules, boundaries, and alerts before the system takes the controls. And in both cases, skipping that step isn’t bold. It’s reckless.

Most enterprises are deploying AI agents the same way early teams deployed cloud infrastructure: fast, with governance as an afterthought. What looked like speed at first turned into sprawl, security gaps, and years of technical debt.

AI agents that reason, decide, and act autonomously demand a different approach. Governance isn’t a constraint. It’s what keeps these systems reliable, secure, and under control.

As enterprises adopt AI agents as a new class of autonomous systems, DevOps teams are responsible for keeping them inside the guardrails. Right now, those agents are starting to route tickets, execute workflows, and make decisions across your systems at a scale traditional software never required you to manage.

This is your survival guide to the agentic AI lifecycle: what to plan for, what to watch, and how to build governance that accelerates deployment instead of blocking it.

Key takeaways

  • Governance must be built into every stage of the agentic AI lifecycle. Unlike static software, AI agents evolve over time, so governance can’t be an afterthought.
  • Agentic AI changes what DevOps teams need to monitor and control. Success depends on observing agent behavior, decisions, and interactions, not just uptime or resource usage.
  • Identity-first security is foundational for safe agent deployments. Agents need their own credentials, permissions, and policies to prevent data exposure and compliance failures.
  • Automation is essential to scale AgentOps responsibly. CI/CD, containerization, orchestration, and automated observability reduce risk while preserving speed.
  • Governed agents deliver more business value over time. When governance is embedded in the lifecycle, teams can scale agent workloads without accumulating security debt or compliance risk.

Why governance matters in AI agent deployments

Ungoverned agents don’t just underperform. They trigger compliance failures, expose sensitive data, and interact unpredictably across the systems they touch. Once that happens, the damage is hard to contain.

Governance gives you visibility and control across the full agentic AI lifecycle, from ideation through deployment to retirement. It enforces policies, monitors agent behavior, and keeps deployments compliant, secure, and resilient. It also makes complex workflows easier to standardize, scale, and repeat across the business.

But governance for agentic AI is fundamentally different from governance for static software. Agents have identities, permissions, task-specific responsibilities, and behaviors that can change over time. They don’t just execute. They reason, act, and adapt. Your governance framework has to keep up across the full lifecycle, not just at deployment.

Category | Traditional DevOps | Agentic AI
System type | Static applications | Autonomous agents with persistent identities and task ownership
Scaling | Based on resource demand | Based on agent workload, orchestration demands, and inter-agent dependencies
Monitoring | System performance metrics, such as uptime and latency | Agent behavior, decisions, and tool usage
Security and compliance | User and system access controls | Agent actions, decisions, and data access

How to plan and design a secure AI agent lifecycle

Planning for static software and planning for AI agents are not the same problem. With software, you’re managing infrastructure. With agents, you’re managing behavior: how they make decisions, how they interact with existing systems, and how they stay compliant as they evolve.

Get this stage wrong, and everything downstream pays for it. Get it right, and you’re catching problems before they’re expensive, building agents that are reliable and scalable, and setting your team up to govern them without constant firefighting.

This section lays out the blueprint for getting that foundation right.

Determining organizational goals

No AI for the sake of AI. Agents should solve real business challenges, integrate into core processes, and have measurable outcomes attached from day one.

Start by identifying the specific problems you want agents to address. Then connect those problems to quantifiable KPIs. In traditional DevOps, that means tracking uptime and performance metrics. In agentic AI, that means tracking decision accuracy, task completion rates, policy adherence, and productivity impact.

The framework below gives you a starting point for aligning goals to the right metrics.

Framework | Key metrics
OKR-based | Decision accuracy; task completion rates
ROI-driven | Cost savings; revenue growth
Risk-based | Compliance adherence; policy violations

Governing agent behavior and compliance 

You’re not just governing what data agents can access. You’re governing how they reason over that data and what they do with it. That’s a fundamentally different problem from traditional software governance.

With traditional software, role-based access control (RBAC) is usually sufficient. With agents, it’s a starting point at best. Agents make decisions, generate answers, and take actions, none of which RBAC was designed to govern.

Agentic AI governance must include: 

  • Auditing agent answers
  • Monitoring for violations
  • Enforcing guardrails
  • Documenting agent behavior

Agents should only interact with the data needed to complete their specific tasks. Early compliance planning keeps agent behavior in check and helps prevent violations before they become incidents. 
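The scoping rule above can be sketched in a few lines: deny anything outside the agent's task scope and record the denial for auditing. This is a minimal illustration, not a real framework API; `AgentPolicy` and `check_scope` are invented names.

```python
# Hypothetical sketch of a data-scope guardrail. All names here are
# illustrative assumptions, not part of any real agent framework.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_sources: set = field(default_factory=set)  # data the agent may touch
    violations: list = field(default_factory=list)     # audit trail of denials

    def check_scope(self, source: str) -> bool:
        """Allow access only to data sources named in the policy."""
        if source in self.allowed_sources:
            return True
        self.violations.append(source)  # document the denial for auditing
        return False

policy = AgentPolicy("ticket-router", {"crm", "ticket_queue"})
assert policy.check_scope("ticket_queue")    # in scope: allowed
assert not policy.check_scope("payroll_db")  # out of scope: denied and logged
print(policy.violations)  # ['payroll_db']
```

Keeping the denial log inside the policy object gives you the "documenting agent behavior" item from the list above for free: every blocked access is already evidence for a compliance review.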

Selecting tools and frameworks for agent management

Most teams try to manage AI agents by stitching together existing MLOps, DevOps, and DataOps tooling. The problem is that none of it was built to handle agents that reason, decide, and act autonomously. You end up with visibility gaps, compliance blind spots, and a fragile stack that doesn’t scale.

You need a unified platform built for the full agent management lifecycle.

Look for a platform that: 

  • Integrates with your existing AI systems and data sources
  • Provides real-time observability into agent decisions, behavior, and performance
  • Scales to support growing agent workloads
  • Supports compliance requirements and industry standards, such as HIPAA, ISO 27001, and SOC 2
  • Demonstrates robust auditing capabilities 

How to deploy and orchestrate AI agents at scale

Deployment is where planning meets reality. This is where you start measuring agent performance under real-world conditions and validating that agents are actually solving the business challenges you defined earlier.

Orchestration is what keeps agents, tasks, and workflows moving in sync. Dependencies have to be managed, failures have to be recovered, and resources have to be allocated without disrupting ongoing operations.

Automation makes that possible at scale without introducing new risk:

  • CI/CD pipelines accelerate testing and deployment while reducing manual error.
  • Version control ensures consistency and traceability, so you can roll back changes when problems arise.

Configuring orchestration and scheduling

Orchestrating AI agents isn’t the same as orchestrating traditional workloads. Agents have dependencies, interact with other agents and tools, and can overwhelm downstream systems if not properly managed. In a multi-agent environment, one poorly configured agent can trigger cascading failures. 

Tools like Kubernetes help manage part of this complexity by handling container orchestration, scheduling, and recovery. If a service fails, Kubernetes can automatically restart or reschedule it, helping restore availability without manual intervention.

But agent orchestration goes beyond infrastructure management. It also requires structured execution: coordinating task flow, enforcing policy controls, managing retries and failures, and allocating resources as agent workloads grow. That is what keeps operations stable, scalable, and compliant.
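The retry-and-failure piece of structured execution can be sketched as a small wrapper around a single agent task. This is an illustrative pattern under assumed names, not an orchestrator's actual API: a transient failure is retried with exponential backoff, and only a persistent failure escalates.

```python
# Illustrative retry-with-backoff wrapper for one agent task.
# Function and parameter names are assumptions for this sketch.
import time

def run_with_retries(task, max_retries=3, base_delay=0.01):
    """Run a task, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return task()
        except RuntimeError:
            if attempt == max_retries:
                raise  # escalate after exhausting retries
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying

calls = {"n": 0}
def flaky_task():
    """Simulated task that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

result = run_with_retries(flaky_task)
print(result)  # 'done', after two retried failures
```

The backoff matters in a multi-agent environment: immediate retries are exactly how one misbehaving agent overwhelms a downstream system and triggers the cascading failures described above.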

Implementing observability and alert mechanisms

With traditional software, observability means tracking uptime and resource usage. With agents, you’re monitoring behavior, decisions, and interactions in real time. The signals are different, and missing them has different consequences.

Observability for agentic AI covers logs, metrics, and traces that tell you not just whether an agent is running, but whether it’s behaving as expected, staying within policy boundaries, and interacting with other systems as intended.

Proactive alerts close the loop. When an agent violates policy or behaves unexpectedly, your team is notified immediately to contain the issue before it affects downstream systems or triggers a compliance incident. The goal isn’t to watch every decision. It’s to catch the ones that matter before they become problems.
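A proactive alert of this kind reduces to a simple shape: compare each logged agent action against policy and notify only on violations. The event structure and the `notify` hook below are assumptions for illustration; in practice the hook would page a human or open an incident.

```python
# Hypothetical policy-violation alerting sketch. Event shape and
# notify() are illustrative stand-ins for real incident tooling.
def notify(alert):
    alerts.append(alert)  # stand-in for paging, Slack, or incident tooling

def watch(events, allowed_actions):
    """Scan an agent's action log and raise an alert per out-of-policy action."""
    for event in events:
        if event["action"] not in allowed_actions:
            notify({"agent": event["agent"], "action": event["action"]})

alerts = []
events = [
    {"agent": "invoice-bot", "action": "read_invoice"},
    {"agent": "invoice-bot", "action": "delete_record"},  # out of policy
]
watch(events, allowed_actions={"read_invoice", "update_status"})
print(alerts)  # one alert, for the delete_record action
```

Note what is not alerted on: every in-policy decision passes silently, which is the "catch the ones that matter" principle from the paragraph above.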

Monitor, observe, and improve

Deployment isn’t the finish line. Agents evolve, data changes, and business requirements shift. Continuous monitoring is what keeps agents aligned with the goals you set at the start.

Start by establishing baselines: the performance benchmarks you’ll measure agents against over time. These should tie directly to the KPIs you defined during planning, whether that’s response time, decision accuracy, or policy adherence. Without clear baselines, you’re monitoring noise.

From there, build a continuous improvement loop. Update models, prompts, and workflows as new data and operational insights become available. Run A/B tests to validate changes before rolling them out. Track whether iterative improvements are actually moving your core metrics. The agents that drive the most business value aren’t the ones that launched well. They’re the ones that continue improving over time.

Identity-first security and compliance best practices

In traditional security, you govern users, then applications. With agentic AI, you govern agents too, and the rules are more complex.

An agent doesn’t just need its own credentials, policies, and privileges. If that agent interacts with an employee, it must also understand and respect that employee’s access rights. The agent may have broader reach across data sources to complete its task, but it can’t expose information the employee isn’t entitled to see. That’s a security boundary traditional access controls weren’t designed to manage.

Identity-first security addresses this directly. Every agent gets unique credentials scoped to its specific tasks, nothing more. Core controls include:

  • RBAC to restrict agent actions based on roles
  • Least privilege to limit agent access to the minimum required
  • Encryption to protect data in transit and at rest
  • Logging to maintain audit trails for compliance and troubleshooting
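The dual boundary described above, where an agent may only surface data that is both within its own scope and visible to the employee it is acting for, reduces to an intersection rule. This is a conceptual sketch with invented names, not a real access-control API.

```python
# Hedged sketch of identity-first filtering: results must fall inside
# BOTH the agent's scope and the requesting employee's scope.
# All scope sets and names are illustrative assumptions.
def visible_to(agent_scope, employee_scope, requested):
    """Least-privilege intersection of agent and requester permissions."""
    return requested & agent_scope & employee_scope

agent_scope = {"hr_directory", "benefits", "payroll_summary"}
employee_scope = {"hr_directory", "benefits"}  # this employee has no payroll access
result = visible_to(agent_scope, employee_scope, {"benefits", "payroll_summary"})
print(result)  # {'benefits'}: payroll_summary is filtered out
```

Even though the agent itself can reach `payroll_summary`, the intersection prevents it from exposing that data to an employee who isn't entitled to see it, which is exactly the boundary traditional RBAC on its own doesn't enforce.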

Conduct quarterly access control audits to prevent scope creep and privilege sprawl. Inventory agent permissions, decommission unused access, and verify compliance. Agents accumulate permissions over time. Audits keep that in check.

Handling AI agent upgrades, transitions, retraining, and retirement

Unlike static software, agents don't just become outdated: they interact with new data, adapt their behavior, and can drift beyond the guardrails and logic you originally built around them. That makes retirement more complex than deprecating a software version.

Knowing when to retire an agent requires active monitoring and judgment, not just a scheduled update cycle. When an agent’s behavior no longer aligns with business goals, compliance requirements, or security boundaries, it’s time to decommission it.

Responsible AI retirement includes: 

  • Data migration: archiving data from retired agents or transferring it to replacements 
  • Documentation: capturing agent behavior, decisions, and dependencies before decommissioning
  • Compliance verification: reviewing data retention and other security policies to confirm compliance 

Skipping end-of-life management creates exactly the kind of technical debt and security gaps that governed deployments are designed to prevent. Retirement isn’t the last step you get around to. It’s part of the lifecycle from day one.

Driving business value with fully governed AI agents

Governance isn’t what slows deployment down. It’s what makes deployment worth doing. Agents with governance embedded across their lifecycle are more consistent, more reliable, and easier to scale without accumulating security debt or compliance risk.

That’s how governed AI becomes a competitive advantage: not by moving faster, but by moving with confidence.

See how enterprise teams are operationalizing agentic AI from day zero to day 90.

FAQs

Why is governance more critical for agentic AI than traditional applications? Agentic AI systems make autonomous decisions, interact with other agents and systems, and change behaviorally over time. Without governance, that autonomy creates unpredictable behavior, security risks, and compliance violations that are expensive and difficult to remediate.

How is agentic AI governance different from traditional DevOps governance? Traditional DevOps focuses on infrastructure stability and application performance. Agentic AI governance must also cover agent decisions, task ownership, data usage, and behavioral constraints across the full lifecycle.

What should DevOps teams monitor for AI agents? In addition to system health, teams should monitor decision accuracy, policy adherence, task completion rates, unusual behavior patterns, and interactions between agents. These signals catch issues before they become incidents.

How can organizations scale governed AI agents without slowing innovation? DataRobot embeds governance, observability, and security directly into the agent lifecycle. DevOps teams move fast while maintaining control, compliance, and trust as agent workloads grow.

The post The DevOps guide to governing and managing agentic AI at scale appeared first on DataRobot.

Robots take the heat for humans maintaining our biggest solar farms

AI-powered robots are set to track across thousands of kilometers of baked, uneven ground, reducing the danger for maintenance workers on Australia's large-scale solar farms. A successful trial by CSIRO, Australia's national science agency, repurposed autonomous robots originally designed for the mining industry. Without robots, the work is done on foot, bringing significant cost and safety risks.

A robotic hand without motors? How a sub-second shape-shifting actuator could work

While space structures and robotic arms require lightweight actuation devices capable of repetitive movement, conventional motor-based systems face limitations due to their heavy weight and complex structures. A KAIST research team has developed a smart material-based actuation technology that operates rapidly in less than a second without a motor, suggesting new possibilities for next-generation robotics and space deployable structures.

Wristband enables wearers to control a robotic hand with their own movements

The next time you're scrolling on your phone, take a moment to appreciate the feat: The seemingly mundane act is possible thanks to the coordination of 34 muscles, 27 joints, and over 100 tendons and ligaments in your hand. Indeed, our hands are the most nimble parts of our bodies. Mimicking their many nuanced gestures has been a longstanding challenge in robotics and virtual reality.