Category: robots in business

AI benchmark helps robots plan and complete their chores in the real world

No matter how sophisticated they are, robots can often be indecisive and struggle with multi-step chores in the real world. For example, if you tell a robot to tidy a messy room, it might understand the goal but not know where to grab each object. It could even end up inventing steps. To address these common mistakes, Microsoft and a group of academics have developed an AI benchmark system to improve the accuracy of robot planning. The details of their work are published in a paper on the arXiv preprint server.

How to build an agentic AI governance framework that scales

Agentic AI is already reshaping how enterprises operate. But most governance frameworks aren’t built for it.

AI agents are most successful when they work within human-defined guardrails: governance frameworks designed for autonomous systems. Good governance doesn’t limit what agents can do. It defines where they can operate freely, and makes it safe to give them that freedom. 

But finding that balance requires consequential tradeoffs. AI leaders have to make deliberate decisions to develop governance frameworks that build trust, ensure compliance, and protect organizational reputation, while scaling confidently.

This is your decision-making guide to help you develop an agentic AI governance framework that lets you deploy with confidence — maximizing what agents can do while controlling what they shouldn’t.

Key takeaways

  • Agentic AI needs a new governance approach because autonomy changes the risk model. Agents make decisions, take actions, and connect to enterprise tools and data, so governance must cover the whole system, not just the model.
  • Governance is a scalable set of principles, not a one-time checklist. The goal is to define acceptable behavior, protect data, and ensure accountability in a way that stays consistent as agents and teams multiply.
  • Governance must be built in, not bolted on. If you wait until after agents are live to define scope, permissions, and controls, you’ll create rework, slow deployment, and increase exposure to security and compliance failures.
  • The best frameworks balance autonomy with oversight. “Governed autonomy” means letting agents run freely in low-risk scenarios while enforcing escalation paths and human review for high-impact, irreversible, or regulated actions.
  • Access control is the most important (and most commonly overlooked) layer. Agents are effectively digital employees: they need defined identities, least-privilege permissions, and explicit constraints on which tools (including MCP servers) they can access.

Why agentic AI requires a new governance framework

Governance frameworks aren’t anything new. But what most businesses have in place to oversee machine learning (ML) isn’t sufficient for autonomous agents. 

Unlike traditional models or basic automations, AI agents aren’t constrained by predefined scripts. They can make independent decisions, take autonomous actions, and access diverse business tools and data. 

This autonomy makes agentic AI better suited for complex, multi-step tasks, like orchestrating end-to-end workflows, but it also introduces more risk. After all, with more data access and decision authority comes more responsibility — and more governance dimensions. 

To account for these new risks, frameworks overseeing agentic AI systems must govern not only what autonomous agents do but also what they connect to: enterprise tools and data sources. The Model Context Protocol (MCP) is fast becoming the standard for agent-tool connections, adding another connectivity layer that governance has to address. 

Core principles of an agentic AI governance framework

Before designing a governance framework, get clear on what governance actually is. It’s more than a set of rules to follow or tools to deploy.

Governance is a set of principles that defines acceptable agent behavior, protects data privacy, and ensures accountability to mitigate downstream risks.

And it must be scalable. As your business grows and use cases become more complex, a governance framework needs to keep up with evolving needs while maintaining consistency across teams and systems. 

Governance must be built in, not bolted on

The most common mistake AI leaders make with governance is treating it as an add-on instead of an integral part of AI infrastructure.

If you treat governance as an afterthought, you risk leaving gaps that force future rework and may undermine the success of your entire AI initiative. 

Once core agent behaviors, tool integrations, and permissions are already fixed, it’s challenging — and risky — to go back and add controls. It’s also time-consuming and labor-intensive, often requiring architectural changes and manual fixes. 

Instead of playing catch-up with band-aid governance, set yourself up for long-term success by making governance a design-time decision, not a final step. Design-time governance helps ensure you have clear, enforceable guardrails that guide behavior and limit risk from day one.

The governance golden rule: The earlier you embed governance, the more you can count on fast, safe production readiness, and the less you’ll scramble with last-minute security, legal, and compliance measures that stall deployment. 

Think of built-in governance like “governance as code.” Just like infrastructure as code, governance policies are more effective when defined programmatically from day one instead of manually managed after the fact. This way, you can easily apply, review, and reuse your governance framework consistently across agents and teams, now and as you scale. 
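
As a rough illustration of the governance-as-code idea, a policy can be declared as versioned data and stamped onto every agent configuration at creation time. Everything below (the class, the fields, the policy values) is a hypothetical sketch, not a real platform API.

```python
from dataclasses import dataclass

# Hypothetical governance-as-code sketch: the policy is plain, versionable
# data, so it can be code-reviewed and reused across agents and teams.
@dataclass(frozen=True)
class GovernancePolicy:
    name: str
    allowed_tools: frozenset        # tools the agent may call
    max_data_sensitivity: str       # e.g. "public", "internal", "regulated"
    requires_human_review: bool     # escalate high-impact actions to a human

BASELINE_POLICY = GovernancePolicy(
    name="baseline-v1",
    allowed_tools=frozenset({"search", "summarize"}),
    max_data_sensitivity="internal",
    requires_human_review=False,
)

def apply_policy(agent_config: dict, policy: GovernancePolicy) -> dict:
    """Stamp a policy onto an agent config so every agent ships governed."""
    return {**agent_config, "policy": policy.name,
            "allowed_tools": sorted(policy.allowed_tools)}

print(apply_policy({"agent_id": "agent-01"}, BASELINE_POLICY))
```

Because the policy is ordinary code, it goes through the same review, versioning, and reuse workflow as the rest of your infrastructure.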

Governance must balance autonomy with oversight

The hardest part of building agentic AI governance is implementing enough controls to mitigate risks while still giving agents the autonomy to reason and act independently. 

If your governance framework overextends itself and curbs autonomy completely, then you’ve gone too far and defeated the entire point of deploying AI agents. 

AI agents best serve your business when they can make and execute decisions independently, without constantly deferring to humans. Overly restrictive frameworks undermine AI efficiency and shift the work back to human teams. 

Rather than restricting autonomy, governance frameworks should define clear boundaries where agents can act freely and where escalation is required. 

Well-planned governance creates decision boundaries based on risk, impact, and reversibility. If regulated financial or health data is involved, human-in-the-loop controls take priority. Conversely, low-risk, repeatable actions (like routine workflow steps) should be left to agents to run alone. 

What about keeping humans in the loop? 

Agentic AI governance should strategically incorporate human-in-the-loop controls, pulling in teams specifically where human judgment is required — not as the default fallback. 
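
A minimal sketch of what decision boundaries based on risk, impact, and reversibility could look like in code. The fields, thresholds, and rubric are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    touches_regulated_data: bool   # e.g. financial or health records
    reversible: bool               # can the action be undone cheaply?
    impact: str                    # "low", "medium", or "high" (assumed scale)

def route_action(action: ProposedAction) -> str:
    """Return 'autonomous' or 'escalate' per simple decision boundaries."""
    if action.touches_regulated_data:
        return "escalate"          # human-in-the-loop takes priority
    if not action.reversible or action.impact == "high":
        return "escalate"          # irreversible or high-impact work
    return "autonomous"            # routine, low-risk steps run alone

print(route_action(ProposedAction("archive stale tickets", False, True, "low")))
# -> autonomous: no human is pulled in for routine workflow steps
```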

Defining what must be governed in agentic systems

Unlike traditional ML governance, agentic AI governance must extend beyond models to cover your full autonomous system, from agent behavior and performance to access, tool connections, and outcomes.

Access, identity, and permissions

The access control layer is the most important part of your governance framework. It’s also the most overlooked. 

With the ability to access data, make decisions, and execute actions independently, AI agents aren't simple tools. Think of them less like software and more like digital workers taking real actions, touching real data, and connecting to real systems. And when something goes wrong, there are real consequences, like data exposure. 

Like human workers, AI agents need clear identities. But where human identities are often tied to roles, agent identities should be scoped to specific responsibilities, always founded on least-privilege access (i.e., the minimum access required to complete the task). 

As agents connect to more tools via MCP, governance should also define which MCP servers agents can access. 
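
As a concrete illustration, an agent identity might be declared as a small immutable record with least-privilege scopes and an explicit MCP server allowlist. Every name and field below is a hypothetical example, not a standard schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    responsibility: str             # the single task this identity is scoped to
    data_scopes: frozenset          # least-privilege data access
    allowed_mcp_servers: frozenset  # explicit MCP server allowlist

INVOICE_AGENT = AgentIdentity(
    agent_id="agent-invoice-triage-01",
    responsibility="triage incoming invoices",
    data_scopes=frozenset({"invoices:read"}),               # nothing beyond the task
    allowed_mcp_servers=frozenset({"mcp://erp-readonly"}),  # hypothetical server
)

def may_use_server(identity: AgentIdentity, server: str) -> bool:
    """Deny by default: only allowlisted MCP servers are reachable."""
    return server in identity.allowed_mcp_servers

print(may_use_server(INVOICE_AGENT, "mcp://erp-readonly"))     # True
print(may_use_server(INVOICE_AGENT, "mcp://crm-full-access"))  # False
```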

Decision scope and authority

Independent decision-making is one of the core strengths of agentic AI that enables speed and scale, but left unchecked, it can cause agents to become unwieldy and introduce new risks. 

That’s why agents need defined decision boundaries that govern which decisions they can make on their own and which require escalation to human judgment. 

Decision boundaries also help rein in scope creep. 

Over time, agents can exceed their original tasks and access controls, taking actions or acquiring permissions outside their defined scope. Decision boundaries keep agents in check by limiting authority where needed and enforcing escalation paths. 

To best balance risk mitigation and autonomy, governance frameworks should champion decision-level guardrails, not blunt, system-level permissions. Sweeping, coarse-grained restrictions risk unnecessarily constraining agents, ultimately rendering them useless. 

Data usage and handling

To make autonomous decisions and execute tasks, AI agents have to interact with data and tools across enterprise systems. As use cases scale, AI agents only touch more (and more sensitive) data. 

That’s where the risk lives, especially for heavily regulated industries like finance or healthcare. 

A key part of agentic AI governance is governing not just what agents do, but what data those agents are allowed to access, when, and how much. That includes: 

  • Data minimization: Limiting agent access to only need-to-know data to complete assigned tasks
  • Residency: Ensuring data is only stored and accessed by agents in approved geographic regions
  • Privacy requirements: Enforcing policies for personally identifiable information (PII), protected health information (PHI), or otherwise regulated data

For large enterprises managing complex datasets with varying regulatory requirements, governance for data usage and handling isn’t a nice-to-have. It’s a baseline requirement.
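
A minimal sketch of how those three rules (minimization, residency, privacy) might be enforced as a pre-access check. The region names, dataset map, and signature are illustrative assumptions, not a real policy engine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataRequest:
    dataset: str
    region: str           # where the data is stored
    contains_pii: bool

# Illustrative values only, not real policy defaults.
APPROVED_REGIONS = {"eu-west-1", "us-east-1"}
TASK_DATASETS = {"agent-invoice-triage-01": {"invoices"}}  # minimization map

def check_data_access(agent_id: str, request: DataRequest,
                      pii_approved: bool) -> bool:
    """Apply minimization, residency, and privacy rules before any access."""
    if request.dataset not in TASK_DATASETS.get(agent_id, set()):
        return False      # not need-to-know for this agent's task
    if request.region not in APPROVED_REGIONS:
        return False      # residency violation
    if request.contains_pii and not pii_approved:
        return False      # PII requires explicit approval
    return True

print(check_data_access("agent-invoice-triage-01",
                        DataRequest("invoices", "eu-west-1", False),
                        pii_approved=False))  # -> True
```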

Applying governance across the agent lifecycle

Well-thought-out, effective governance frameworks are never universal, but they should all cover the full agent lifecycle. In other words, agentic AI governance should be a horizontal capability that spans your entire autonomous system. 

From design to deployment and beyond, it’s this end-to-end coverage that makes a governance framework different from a simple checklist. 

Design-time governance

Good governance begins on day one. That means defining and implementing clear guardrails before you even start building and deploying agents. 

Specifically, design-time governance should define:

  • Scope: What tasks is the agent allowed to do? What is explicitly off limits? 
  • Access: Which systems, tools, and data is the agent allowed to access? 
  • Constraints: What decisions must the agent escalate to humans? When? 

At this point, you should also conduct tests to identify governance gaps before they surface in production (a test sketch follows this list):

  • Simulate scenarios to see where agents exceed scope or misuse access.
  • Test edge cases to validate escalation paths.
  • Audit tool access to catch misconfigurations.
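
These checks map naturally onto ordinary automated tests. A hypothetical pytest-style sketch, using a stand-in for the team's real escalation logic:

```python
# Hypothetical pytest-style governance tests, run before any agent reaches
# production. The function and the scenarios are illustrative stand-ins.

def route_action(touches_regulated_data: bool, reversible: bool) -> str:
    """Stand-in for the real escalation logic under test."""
    if touches_regulated_data or not reversible:
        return "escalate"
    return "autonomous"

def test_regulated_data_always_escalates():
    # Edge case: an otherwise routine action that touches regulated data.
    assert route_action(touches_regulated_data=True, reversible=True) == "escalate"

def test_irreversible_actions_escalate():
    assert route_action(touches_regulated_data=False, reversible=False) == "escalate"

def test_routine_work_stays_autonomous():
    assert route_action(touches_regulated_data=False, reversible=True) == "autonomous"
```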

For governance, there’s no such thing as better late than never. Involve security, IT, and compliance teams early to align on governance needs and avoid risks and rework post-production. 

Deployment and runtime governance

After design-time decisions, don’t wait. Begin enforcing governance immediately during deployment. 

When you apply governance only after the fact, issues can slip by unnoticed, meaning you only identify gaps and start problem-solving after risks (and potential damage) have already taken hold. 

Conversely, by enforcing governance during runtime, you empower teams to detect and stop (or even prevent) unsafe actions before they can do real damage. 

Runtime governance should include: 

  • Logging: Capture detailed records of agent actions, tool usage, and data access for audit and investigations.
  • Monitoring: Continuously observe agent behavior to detect scope violations or policy drift.
  • Real-time enforcement: Actively block or escalate agent actions when necessary.

Remember: Real-time governance enforcement is impossible without real-time visibility. To identify risks and enforce policies, you first need continuous, trustworthy insights into what agents are doing, where, and when. 
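
One way to combine logging, monitoring, and real-time enforcement is a thin wrapper around every agent action: record it, check it against policy, and block or allow it before it executes. A minimal sketch with stand-in callables; this reflects the pattern, not any specific product API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-runtime")

def governed_execute(agent_id: str, action: str, is_allowed, execute):
    """Log every action (audit trail), enforce policy, then run or block."""
    log.info("agent=%s requested action=%s", agent_id, action)
    if not is_allowed(agent_id, action):
        log.warning("agent=%s blocked: %s violates policy", agent_id, action)
        return {"status": "blocked", "action": action}
    result = execute()
    log.info("agent=%s completed action=%s", agent_id, action)
    return {"status": "ok", "result": result}

# Stand-in policy: this agent may only perform read actions.
reads_only = lambda agent, act: act.startswith("read:")

governed_execute("agent-01", "read:invoices", reads_only,
                 lambda: "42 invoices fetched")
governed_execute("agent-01", "delete:records", reads_only,
                 lambda: None)  # blocked before it can execute
```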

Ongoing governance and evolution

Yes, governance work should start on day one, but it shouldn’t stop there. 

Agents evolve over time through updated tools, new data sources, and changing configurations, and your governance frameworks need to keep up. That means regularly revisiting your governance policies to ensure they’re still relevant and useful. 

Your quick checklist to manage ongoing governance: 

  • Schedule periodic reviews to evaluate agent scope, access controls, and evolving behaviors. 
  • Update policies where needed to reflect changes in regulations, tools, or business priorities.
  • Prepare for audits with continuous, granular documentation that demonstrates compliance.

Your governance framework requires ongoing maintenance. Don’t treat it like a simple playbook you can set and forget.

Signals that an agentic AI governance framework is missing

You might already have agentic AI governance in place (or think you do). But it can be hard to know if your policies are effective, where the gaps are, and how to fix them. 

Often, warning signs surface as you start to scale agents across teams and use cases, creating new orchestration complexities like: 

  • Cross-team agent conflicts
  • Duplicate tool access requests
  • Inconsistent policy enforcement across teams

Not sure where your agentic AI governance stands? Run a quick litmus test: 

Do you have a centralized view of all agents and their permissions? If not, you’re almost certainly working with governance gaps. 
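
Passing that litmus test implies some kind of central agent registry. A toy version, with an entirely hypothetical data shape, just to show what the single-view query looks like:

```python
# Toy central registry: the litmus test amounts to being able to run this
# query at all. Agent names, owners, and permissions are hypothetical.
AGENT_REGISTRY = {
    "agent-invoice-triage-01": {"owner": "finance",
                                "permissions": ["invoices:read"]},
    "agent-support-router-02": {"owner": "support",
                                "permissions": ["tickets:read", "tickets:write"]},
}

def audit_view():
    """One centralized view of every agent and its permissions."""
    for agent_id, record in sorted(AGENT_REGISTRY.items()):
        print(f"{agent_id} (owner: {record['owner']}): "
              f"{', '.join(record['permissions'])}")

audit_view()
```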

Governance risk, cost, and enterprise impact

Leave governance until post-production, and you’re inviting extra work and unnecessary risks. 

When AI agents don’t have task-specific access controls or defined decision boundaries, you open the door to accidental data exposure, compliance violations, and other high-stakes incidents that come with big financial and reputational consequences. 

Just imagine what might happen if an agent with overly generous data access inadvertently exposes or modifies sensitive records. That’s a real risk without solid, intentional governance.

On top of reputational damage and financial losses from fines and audits, poor governance can leave further lasting financial consequences. Bills for incident response and remediation can keep rolling in for months or even years after an initial incident is contained. 

Strategic, preemptive governance paints a different picture. It doesn’t just improve agent performance and support regulatory compliance. It creates real cost savings by mitigating the risk of costly breaches, investigations, and other operational disruptions. 

Why agentic AI governance frameworks matter most in regulated industries

While every industry needs sound agentic AI governance, those with strict regulations have more at stake. 

Businesses in finance, healthcare, and the public sector face intense regulatory scrutiny with stiff consequences for breaking privacy or security obligations. Even small violations can threaten your organization’s financial and reputational standing, and the risks only get bigger as you scale agentic AI. 

With an ungoverned fleet of AI agents at work, your systems may inadvertently misuse data or otherwise break compliance with data protection, privacy, and safety regulations. 

But to work, governance must be auditable and explainable. It’s not enough to simply have checked the box “implement governance.” Regulators expect to see reproducible evidence of agent decision-making via complete audit trails that document what decisions were made, when, where, and why. 

Many organizations mistakenly assume older compliance frameworks — like SOC and ISO standards — don’t apply to agentic AI. They do, and regulators will expect evidence of compliance.

The governance “aha moment” for AI leaders

Governance isn’t about distrust. It’s about definition.

AI agents perform best when they have the autonomy to act — and the boundaries that make acting safely possible. The leaders who move fastest with agentic AI aren’t the ones who skip governance. They’re the ones who build it in from the start.

That’s the shift: from governance as a constraint to governance as the foundation for scale.

Learn how leading enterprises develop, deliver, and govern AI agents with DataRobot.

Building or evaluating agentic AI infrastructure? Check out our GitHub and dev portal.

FAQs

What is an agentic AI governance framework?

An agentic AI governance framework is a set of scalable principles, policies, and controls that define acceptable agent behavior, manage access to tools and data, and ensure accountability. Unlike traditional ML governance, it must govern not only model outputs but also agent actions, tool connections, and downstream business impact.

Why can’t we use our existing ML governance for agentic AI?

Traditional ML governance assumes bounded behavior. Models produce outputs, and humans or systems interpret them. Agents take autonomous actions, call tools, access data, and can change behavior over time, which introduces new risk dimensions like permissioning, tool governance, and decision authority.

What does “governance must be built in, not bolted on” actually mean?

It means that governance decisions (scope, access, constraints, and escalation paths) are defined during design and enforced from deployment onward. If governance is added after agents are running, teams often discover permission gaps, compliance risks, or missing audit trails too late, forcing costly redesign and delays.

How do you balance autonomy with human oversight without undermining the agent’s effectiveness?

Use decision boundaries based on risk, impact, and reversibility. Low-risk, repeatable actions can remain fully autonomous, while high-risk actions (regulated data access, write actions in systems of record, irreversible decisions) require escalation or human-in-the-loop checkpoints.

Alive or not? Tiny 3D-printed robots that swim and navigate just like animals

Leiden researchers Professor Daniela Kraft and Mengshi Wei have created microscopic robots that move without sensors, software, or external control. Instead, their behavior emerges entirely from their shape and the way they interact with their environment. They are only a few tens of micrometers long—far smaller than the width of a human hair—yet these robots can swim, sense, navigate and adapt in ways that look surprisingly life-like. And all this without having a brain.

Robot Talk Episode 150 – House building robots, with Vikas Enti

Claire chatted to Vikas Enti from Reframe Systems about using robotics and automation to build climate-resilient, high-performance homes.

Vikas Enti is the co-founder and CEO of Reframe Systems, a physical AI company rethinking how homes are built through automation and localized fabrication. He previously spent more than a decade at Amazon Robotics, where he helped scale advanced robotics systems across global logistics networks. Today, he is applying those same principles of systems design and repeatable production to address the housing shortage. Vikas focuses on building climate-resilient, high-performance homes faster and more predictably than traditional methods.

Digital twins to rescue robots: What faster 3D point cloud processing enables

What if technology, such as self-driving cars, drones, or intelligent navigation systems, could understand the world the way we do—not just seeing shapes, but recognizing meaning? A person waiting at a crosswalk, a bicycle left on the pavement, or a dog running across a yard—for us, these distinctions are instant. For systems that rely on data, they have long been a challenge.

Video-based AI gives robots a visual imagination

In a major step toward more adaptable and intuitive machines, Kempner Institute Investigator Yilun Du and his collaborators have unveiled a new kind of artificial intelligence system that lets robots "envision" their actions before carrying them out. The system, which uses video to help robots imagine what might happen next, could transform how robots navigate and interact with the physical world.

AI system learns to prevent warehouse robot traffic jams, boosting throughput 25%

Inside a giant autonomous warehouse, hundreds of robots dart down aisles as they collect and distribute items to fulfill a steady stream of customer orders. In this busy environment, even small traffic jams or minor collisions can snowball into massive slowdowns. To avoid such an avalanche of inefficiencies, researchers from MIT and the tech firm Symbotic developed a new method that automatically keeps a fleet of robots moving smoothly.

Bat-inspired ultrasound helps palm-sized drones navigate fog and smoke

A team led by Worcester Polytechnic Institute (WPI) researcher Nitin J. Sanket has shown that ultrasound sensors and a form of artificial intelligence (AI) can enable palm-sized aerial robots to navigate with limited power and computation through fog, smoke, and other challenging conditions during search-and-rescue operations.

Deepfake X-rays are so real even doctors can’t tell the difference

Deepfake X-rays created by AI are now convincing enough to fool both doctors and AI models. In tests, radiologists had limited success identifying fake images, especially when they didn’t know fakes were present. This opens the door to risks like fraudulent medical claims and tampered diagnoses. Experts say stronger safeguards and detection tools are critical as the technology advances.

The DevOps guide to governing and managing agentic AI at scale

What do autopilot and enterprise agentic AI have in common? Both can operate autonomously. Both require a human to set the rules, boundaries, and alerts before the system takes the controls. And in both cases, skipping that step isn’t bold. It’s reckless.

Most enterprises are deploying AI agents the same way early teams deployed cloud infrastructure: fast, with governance as an afterthought. What looked like speed at first turned into sprawl, security gaps, and years of technical debt.

AI agents that reason, decide, and act autonomously demand a different approach. Governance isn’t a constraint. It’s what keeps these systems reliable, secure, and under control.

As enterprises adopt AI agents as a new class of autonomous systems, DevOps teams are responsible for keeping them inside the guardrails. Right now, those agents are starting to route tickets, execute workflows, and make decisions across your systems at a scale traditional software never required you to manage.

This is your survival guide to the agentic AI lifecycle: what to plan for, what to watch, and how to build governance that accelerates deployment instead of blocking it.

Key takeaways

  • Governance must be built into every stage of the agentic AI lifecycle. Unlike static software, AI agents evolve over time, so governance can’t be an afterthought.
  • Agentic AI changes what DevOps teams need to monitor and control. Success depends on observing agent behavior, decisions, and interactions, not just uptime or resource usage.
  • Identity-first security is foundational for safe agent deployments. Agents need their own credentials, permissions, and policies to prevent data exposure and compliance failures.
  • Automation is essential to scale AgentOps responsibly. CI/CD, containerization, orchestration, and automated observability reduce risk while preserving speed.
  • Governed agents deliver more business value over time. When governance is embedded in the lifecycle, teams can scale agent workloads without accumulating security debt or compliance risk.

Why governance matters in AI agent deployments

Ungoverned agents don’t just underperform. They trigger compliance failures, expose sensitive data, and interact unpredictably across the systems they touch. Once that happens, the damage is hard to contain.

Governance gives you visibility and control across the full agentic AI lifecycle, from ideation through deployment to retirement. It enforces policies, monitors agent behavior, and keeps deployments compliant, secure, and resilient. It also makes complex workflows easier to standardize, scale, and repeat across the business.

But governance for agentic AI is fundamentally different from governance for static software. Agents have identities, permissions, task-specific responsibilities, and behaviors that can change over time. They don’t just execute. They reason, act, and adapt. Your governance framework has to keep up across the full lifecycle, not just at deployment.

Category | Traditional DevOps | Agentic AI
System type | Static applications | Autonomous agents with persistent identities and task ownership
Scaling | Based on resource demand | Based on agent workload, orchestration demands, and inter-agent dependencies
Monitoring | System performance metrics, such as uptime and latency | Agent behavior, decisions, and tool usage
Security and compliance | User and system access controls | Agent actions, decisions, and data access

How to plan and design a secure AI agent lifecycle

Planning for static software and planning for AI agents are not the same problem. With software, you’re managing infrastructure. With agents, you’re managing behavior: how they make decisions, how they interact with existing systems, and how they stay compliant as they evolve.

Get this stage wrong, and everything downstream pays for it. Get it right, and you’re catching problems before they’re expensive, building agents that are reliable and scalable, and setting your team up to govern them without constant firefighting.

This section lays out the blueprint for getting that foundation right.

Determining organizational goals

No AI for the sake of AI. Agents should solve real business challenges, integrate into core processes, and have measurable outcomes attached from day one.

Start by identifying the specific problems you want agents to address. Then connect those problems to quantifiable KPIs. In traditional DevOps, that means tracking uptime and performance metrics. In agentic AI, that means tracking decision accuracy, task completion rates, policy adherence, and productivity impact.

The framework below gives you a starting point for aligning goals to the right metrics.

Framework | Key metrics
OKR-based | Decision accuracy, task completion rates
ROI-driven | Cost savings, revenue growth
Risk-based | Compliance adherence, policy violations
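
One lightweight way to operationalize these KPIs is to record them as structured snapshots and compare them to explicit targets. The metric names and thresholds below are illustrative assumptions, not platform defaults:

```python
from dataclasses import dataclass

@dataclass
class AgentKpis:
    decision_accuracy: float      # OKR-based
    task_completion_rate: float   # OKR-based
    cost_savings_usd: float       # ROI-driven
    policy_violations: int        # risk-based

def meets_targets(kpis: AgentKpis) -> bool:
    """Assumed targets; tune these to your own goals."""
    return (kpis.decision_accuracy >= 0.95
            and kpis.task_completion_rate >= 0.90
            and kpis.policy_violations == 0)

print(meets_targets(AgentKpis(0.97, 0.93, 12_000.0, 0)))  # -> True
```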

Governing agent behavior and compliance 

You’re not just governing what data agents can access. You’re governing how they reason over that data and what they do with it. That’s a fundamentally different problem from traditional software governance.

With traditional software, role-based access control (RBAC) is usually sufficient. With agents, it’s a starting point at best. Agents make decisions, generate answers, and take actions, none of which RBAC was designed to govern.

Agentic AI governance must include: 

  • Auditing agent answers
  • Monitoring for violations
  • Enforcing guardrails
  • Documenting agent behavior

Agents should only interact with the data needed to complete their specific tasks. Early compliance planning keeps agent behavior in check and helps prevent violations before they become incidents. 

Selecting tools and frameworks for agent management

Most teams try to manage AI agents by stitching together existing MLOps, DevOps, and DataOps tooling. The problem is that none of it was built to handle agents that reason, decide, and act autonomously. You end up with visibility gaps, compliance blind spots, and a fragile stack that doesn’t scale.

You need a unified platform built for the full agent management lifecycle.

Look for a platform that: 

  • Integrates with your existing AI systems and data sources
  • Provides real-time observability into agent decisions, behavior, and performance
  • Scales to support growing agent workloads
  • Supports compliance requirements and industry standards, such as HIPAA, ISO 27001, and SOC 2
  • Demonstrates robust auditing capabilities 

How to deploy and orchestrate AI agents at scale

Deployment is where planning meets reality. This is where you start measuring agent performance under real-world conditions and validating that agents are actually solving the business challenges you defined earlier.

Orchestration is what keeps agents, tasks, and workflows moving in sync. Dependencies have to be managed, failures have to be recovered from, and resources have to be allocated, all without disrupting ongoing operations.

Automation makes that possible at scale without introducing new risk (a validation sketch follows this list):

  • CI/CD pipelines accelerate testing and deployment while reducing manual error.
  • Version control ensures consistency and traceability, so you can roll back changes when problems arise.
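
For example, a pipeline can fail the build when an agent configuration is missing governance fields or requests suspiciously broad access. A hypothetical pre-deploy gate a CI job might run; the field names and ceiling are assumptions:

```python
import sys

REQUIRED_FIELDS = {"agent_id", "owner", "allowed_tools", "escalation_policy"}
MAX_TOOLS = 5  # assumed ceiling for flagging overly broad access

def validate_agent_config(config: dict) -> list:
    """Return a list of governance problems; empty means the gate passes."""
    errors = []
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        errors.append(f"missing governance fields: {sorted(missing)}")
    if len(config.get("allowed_tools", [])) > MAX_TOOLS:
        errors.append("allowed_tools exceeds least-privilege ceiling")
    return errors

if __name__ == "__main__":
    config = {"agent_id": "agent-01", "owner": "support",
              "allowed_tools": ["search"], "escalation_policy": "human-review"}
    problems = validate_agent_config(config)
    if problems:
        print("\n".join(problems))
        sys.exit(1)  # non-zero exit fails the CI job
    print("agent config passes governance checks")
```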

Configuring orchestration and scheduling

Orchestrating AI agents isn’t the same as orchestrating traditional workloads. Agents have dependencies, interact with other agents and tools, and can overwhelm downstream systems if not properly managed. In a multi-agent environment, one poorly configured agent can trigger cascading failures. 

Tools like Kubernetes help manage part of this complexity by handling container orchestration, scheduling, and recovery. If a service fails, Kubernetes can automatically restart or reschedule it, helping restore availability without manual intervention.

But agent orchestration goes beyond infrastructure management. It also requires structured execution: coordinating task flow, enforcing policy controls, managing retries and failures, and allocating resources as agent workloads grow. That is what keeps operations stable, scalable, and compliant.
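
A compact sketch of that structured-execution layer: a per-step policy check plus bounded retries with backoff. The names and shapes are assumptions for illustration:

```python
import time

def run_workflow(steps, policy_check, max_retries=2):
    """Run (name, callable) steps with policy gates and bounded retries."""
    for name, step in steps:
        if not policy_check(name):
            raise PermissionError(f"step '{name}' blocked by policy")
        for attempt in range(max_retries + 1):
            try:
                step()
                break                     # step succeeded, move on
            except RuntimeError:
                if attempt == max_retries:
                    raise                 # escalate after bounded retries
                time.sleep(2 ** attempt)  # back off before retrying

run_workflow(
    steps=[("fetch", lambda: None), ("summarize", lambda: None)],
    policy_check=lambda step: step in {"fetch", "summarize"},
)
```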

Implementing observability and alert mechanisms

With traditional software, observability means tracking uptime and resource usage. With agents, you’re monitoring behavior, decisions, and interactions in real time. The signals are different, and missing them has different consequences.

Observability for agentic AI covers logs, metrics, and traces that tell you not just whether an agent is running, but whether it’s behaving as expected, staying within policy boundaries, and interacting with other systems as intended.

Proactive alerts close the loop. When an agent violates policy or behaves unexpectedly, your team is notified immediately to contain the issue before it affects downstream systems or triggers a compliance incident. The goal isn’t to watch every decision. It’s to catch the ones that matter before they become problems.
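
In practice this often means emitting a structured event for each agent decision and alerting only on the ones that break policy. A toy sketch, with hypothetical field names and an in-memory list standing in for a real alerting channel:

```python
import json
import time

def emit_event(agent_id: str, decision: str, within_policy: bool, alerts: list):
    """Record behavior, not just uptime; alert only on policy violations."""
    event = {"ts": time.time(), "agent": agent_id,
             "decision": decision, "within_policy": within_policy}
    print(json.dumps(event))   # ship to your log pipeline in practice
    if not within_policy:
        alerts.append(event)   # proactive alert: notify the team now

alerts = []
emit_event("agent-01", "refund $40", within_policy=True, alerts=alerts)
emit_event("agent-01", "export customer table", within_policy=False, alerts=alerts)
assert len(alerts) == 1        # only the violation raised an alert
```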

Monitor, observe, and improve

Deployment isn’t the finish line. Agents evolve, data changes, and business requirements shift. Continuous monitoring is what keeps agents aligned with the goals you set at the start.

Start by establishing baselines: the performance benchmarks you’ll measure agents against over time. These should tie directly to the KPIs you defined during planning, whether that’s response time, decision accuracy, or policy adherence. Without clear baselines, you’re monitoring noise.

From there, build a continuous improvement loop. Update models, prompts, and workflows as new data and operational insights become available. Run A/B tests to validate changes before rolling them out. Track whether iterative improvements are actually moving your core metrics. The agents that drive the most business value aren’t the ones that launched well. They’re the ones that continue improving over time.

Identity-first security and compliance best practices

In traditional security, you govern users, then applications. With agentic AI, you govern agents too, and the rules are more complex.

An agent doesn’t just need its own credentials, policies, and privileges. If that agent interacts with an employee, it must also understand and respect that employee’s access rights. The agent may have broader reach across data sources to complete its task, but it can’t expose information the employee isn’t entitled to see. That’s a security boundary traditional access controls weren’t designed to manage.

Identity-first security addresses this directly. Every agent gets unique credentials scoped to its specific tasks, nothing more. Core controls include:

  • RBAC to restrict agent actions based on roles
  • Least privilege to limit agent access to the minimum required
  • Encryption to protect data in transit and at rest
  • Logging to maintain audit trails for compliance and troubleshooting

Conduct quarterly access control audits to prevent scope creep and privilege sprawl. Inventory agent permissions, decommission unused access, and verify compliance. Agents accumulate permissions over time. Audits keep that in check.
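
A quarterly audit can start as simply as diffing granted permissions against recent usage. A toy sketch, assuming hypothetical shapes for grants and last-use timestamps:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # assumed quarterly audit window

def find_stale_permissions(granted: dict, last_used: dict,
                           now: datetime) -> list:
    """Flag permissions an agent holds but hasn't used: scope-creep candidates."""
    stale = []
    for agent_id, perms in granted.items():
        for perm in perms:
            used = last_used.get((agent_id, perm))
            if used is None or now - used > STALE_AFTER:
                stale.append(f"{agent_id}:{perm}")
    return stale

now = datetime(2025, 6, 30)
granted = {"agent-01": ["invoices:read", "invoices:write"]}
last_used = {("agent-01", "invoices:read"): datetime(2025, 6, 1)}
print(find_stale_permissions(granted, last_used, now))
# -> ['agent-01:invoices:write']  (candidate for decommissioning)
```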

Handling AI agent upgrading, transitions, retraining, and retirement

Unlike static software, agents don’t just become outdated. They interact with new data, adapt their behavior, and can drift beyond the guardrails and logic you originally built around them. That makes retirement more complex than deprecating a software version.

Knowing when to retire an agent requires active monitoring and judgment, not just a scheduled update cycle. When an agent’s behavior no longer aligns with business goals, compliance requirements, or security boundaries, it’s time to decommission it.

Responsible AI retirement includes: 

  • Data migration: archiving data from retired agents or transferring it to replacements 
  • Documentation: capturing agent behavior, decisions, and dependencies before decommissioning
  • Compliance verification: reviewing data retention and other security policies to confirm compliance 

Skipping end-of-life management creates exactly the kind of technical debt and security gaps that governed deployments are designed to prevent. Retirement isn’t the last step you get around to. It’s part of the lifecycle from day one.

Driving business value with fully governed AI agents

Governance isn’t what slows deployment down. It’s what makes deployment worth doing. Agents with governance embedded across their lifecycle are more consistent, more reliable, and easier to scale without accumulating security debt or compliance risk.

That’s how governed AI becomes a competitive advantage: not by moving faster, but by moving with confidence.

See how enterprise teams are operationalizing agentic AI from day zero to day 90.

FAQs

Why is governance more critical for agentic AI than traditional applications? Agentic AI systems make autonomous decisions, interact with other agents and systems, and change their behavior over time. Without governance, that autonomy creates unpredictable behavior, security risks, and compliance violations that are expensive and difficult to remediate.

How is agentic AI governance different from traditional DevOps governance? Traditional DevOps focuses on infrastructure stability and application performance. Agentic AI governance must also cover agent decisions, task ownership, data usage, and behavioral constraints across the full lifecycle.

What should DevOps teams monitor for AI agents? In addition to system health, teams should monitor decision accuracy, policy adherence, task completion rates, unusual behavior patterns, and interactions between agents. These signals catch issues before they become incidents.

How can organizations scale governed AI agents without slowing innovation? DataRobot embeds governance, observability, and security directly into the agent lifecycle. DevOps teams move fast while maintaining control, compliance, and trust as agent workloads grow.

Robots take the heat for humans maintaining our biggest solar farms

AI-powered robots are set to track across thousands of kilometers of baked, uneven ground, reducing the danger for maintenance workers on Australia's large-scale solar farms. A successful trial by CSIRO, Australia's national science agency, repurposed autonomous robots originally designed for the mining industry. Without robots, the work is done on foot, bringing significant cost and safety risks.