
IT as the new HR: Managing your AI workforce

Your organization is already hiring digital workers. Now, the question is whether IT is actually managing these “people-like” systems as part of the workforce, or as just another application in the tech stack.

Far from just another AI tool, AI agents are becoming digital coworkers that need the same lifecycle management as human employees: onboarding, supervision, performance reviews, and eventually, responsible decommissioning.

Many companies are already deploying agents to handle customer inquiries, process invoices, and make recommendations. The mistake is treating agents like software instead of managing them like team members.

IT is the natural leader to take on this “human resources for AI agents” role, managing agents’ lifecycle proactively versus inheriting a mismanaged system later. That’s how organizations move beyond pilots and manage agent lifecycles responsibly — with IT leading in partnership with business and compliance teams.

This is Post 3 in our Agent Workforce series, exploring how IT is well-positioned to manage agents as workforce assets, not just technology deployments.

Why IT is becoming the new HR for AI agents

AI agents are already steering IT into an expanded role. Just as HR oversees the employee lifecycle, IT is beginning to take ownership of managing the complete journey of AI agents: 

  1. Recruiting the right talent (selecting appropriate agents)
  2. Onboarding (integrating with enterprise systems)
  3. Supervising performance (monitoring accuracy and behavior)
  4. Training and development (retraining and updates)
  5. Offboarding (decommissioning and knowledge transfer)

HR doesn’t just hire people and walk away. It creates policies, sets cultural norms, and enforces accountability frameworks. IT must do the same thing for agents, balancing developer autonomy with governance requirements, much like HR balances employee freedom with company policy.

The stakes of getting it wrong are comparable, too. HR works to prevent unvetted hires that could damage the business and brand. IT must prevent deployments that introduce uncontrolled risk. When business units spin up their own agents without oversight or approval, it’s like bringing on a new hire without a background check.

When IT owns agent lifecycle management, organizations can curb shadow AI, embed governance from day one, and measure ROI more effectively. IT becomes the single source of truth (SSOT) for enterprise-wide consistency across digital workers.

But governance is only part of the job. IT’s larger mandate is to build trust between humans and digital coworkers, ensuring clarity, accountability, and confidence in every agent decision. 

How IT manages the digital coworker lifecycle

IT isn’t just tech support anymore. With a growing digital workforce, managing AI agents requires the same structure and oversight HR applies to employees. When agents misbehave or underperform, the financial and reputational costs can be significant. 

Recruiting the right agents

Think of agent deployment as hiring: Just like you’d interview candidates to determine their capabilities and readiness for the role, IT needs to evaluate accuracy, cost, latency, and role fit before any agent is deployed. 

It’s a balance between technical flexibility and enterprise governance. Developers need room to experiment and iterate, but IT still owns consistency and control. Frameworks should enable innovation within governance standards.

When business teams build or deploy agents without IT alignment, visibility and governance start to slip, turning small experiments into enterprise-level risks. This “shadow AI” can quickly erode consistency and accountability.

Without a governed path to deployment, IT will inherit the risk. An agent catalog solves this with pre-approved, enterprise-ready agents that business units can deploy quickly and safely. It’s self-service that maintains control and prevents shadow AI from becoming a cleanup project later on.

Supervising and upskilling agents

Monitoring is the performance review portion of the agent lifecycle, tracking task adherence, accuracy, cost efficiency, and business alignment — the same metrics HR uses for people. 

Retraining cycles mirror employee development programs. Agents need regular updates to maintain performance and adapt to changing requirements, just as people need ongoing training to stay current (and relevant).

Proactive feedback loops matter: 

  • Identify high-value interactions 
  • Document failure modes 
  • Track improvement over time

This historical knowledge becomes invaluable for managing your broader agent workforce.

Performance degradation is often gradual, like an employee becoming slowly disengaged over time. Regular check-ins with agents (reviewing their decision patterns, accuracy trends, and resource consumption) can help IT spot potential issues before they become bigger problems.
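Those regular check-ins can be automated with a simple rolling-window check. The class below is a minimal sketch, assuming accuracy is the tracked metric; the window size, baseline, and tolerance values are illustrative, not a prescribed standard:

```python
from collections import deque


class AgentPerformanceMonitor:
    """Tracks a rolling accuracy window for one agent and flags gradual drift.
    Illustrative sketch only; thresholds and metric choice are assumptions."""

    def __init__(self, window=50, baseline=0.95, tolerance=0.05):
        self.scores = deque(maxlen=window)  # most recent accuracy readings
        self.baseline = baseline            # accuracy measured at deployment
        self.tolerance = tolerance          # allowed drop before human review

    def record(self, accuracy: float) -> None:
        self.scores.append(accuracy)

    def needs_review(self) -> bool:
        # Flag the agent once its rolling average drifts below baseline - tolerance.
        if not self.scores:
            return False
        avg = sum(self.scores) / len(self.scores)
        return avg < self.baseline - self.tolerance
```

Because the window is rolling, a slow slide in accuracy shows up well before a hard failure, which is exactly the kind of gradual disengagement the analogy describes.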

Offboarding and succession planning

When a long-tenured employee leaves without proper knowledge transfer, it’s hard to recoup those lost insights. The same risks apply to agents. Decision patterns, learned behaviors, and accumulated context should be preserved and transferred to successor systems to make them even better.

Like employee offboarding and replacement, agent retirement is the final step of agentic workforce planning and management. It involves archiving decision history, compliance records, and operational context. 

Continuity depends on IT’s discipline in documentation, version control, and transition planning. Handled well, this leads to succession planning, ensuring each new generation of agents starts smarter than the last. 

How IT establishes control: The agent governance framework

Proactive governance starts at onboarding, not after the first failure. Agents should immediately integrate into enterprise systems, workflows, and policies with controls already in place from day one. This is the “employee handbook” moment for digital coworkers. CIOs set the expectations and guardrails early, or risk months of remediation later. 

Provisioning and access controls

Identity management for agents needs the same rigor as human accounts, with clear permissions, audit trails, and role-based access controls. For example, an agent handling financial data needs different permissions than one managing customer inquiries.

Access rights should align to each agent’s role. For example: 

  • Customer service agents can access CRMs and knowledge bases, but not financial systems.
  • Procurement agents can read supplier data, but can’t modify contracts without human approval.
  • Analytics agents can query specific databases, but not personally identifiable information.

The principle of least privilege applies equally to digital and human workers. Start off extra restrictive, then expand access based on proven need and performance.
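A default-deny, least-privilege policy like the one sketched in the list above can be expressed as a simple lookup. The role names, resources, and actions below are illustrative assumptions, not a real IAM API:

```python
# Hypothetical role-to-resource policy; anything not explicitly granted is denied.
AGENT_POLICIES = {
    "customer_service": {"crm": "read", "knowledge_base": "read"},
    "procurement":      {"supplier_data": "read"},
    "analytics":        {"sales_db": "read"},
}


def is_allowed(role: str, resource: str, action: str) -> bool:
    """Default-deny check: unknown roles, resources, or actions are refused."""
    granted = AGENT_POLICIES.get(role, {})
    return granted.get(resource) == action


# A procurement agent may read supplier data but cannot modify contracts:
assert is_allowed("procurement", "supplier_data", "read")
assert not is_allowed("procurement", "contracts", "write")
```

Starting from an empty grant and adding entries only on proven need is the code-level equivalent of "start restrictive, then expand."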

Workflow integration

Map workflows and escalation paths that define when agents act independently and when they collaborate with humans. Establish clear triggers, document decision boundaries, and build feedback loops for continuous improvement.

For example, an AI resume screener might prioritize and escalate top candidates to human recruiters using defined handoff rules and audit trails. Ultimately, agents should enhance human capabilities, not blur the lines of accountability.
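A handoff rule of that kind can be sketched in a few lines. The score threshold, field names, and audit format below are illustrative assumptions, not part of any specific product:

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    score: float  # agent-assigned fit score in [0, 1]


def route(candidate: Candidate, escalate_above: float = 0.8):
    """Hypothetical handoff rule: high scorers go to a human recruiter,
    everyone else gets an automated acknowledgement. The audit entry
    records the inputs and the decision so the handoff stays reviewable."""
    if candidate.score >= escalate_above:
        decision = "escalate_to_recruiter"
    else:
        decision = "auto_acknowledge"
    audit_entry = {
        "candidate": candidate.name,
        "score": candidate.score,
        "decision": decision,
    }
    return decision, audit_entry
```

The point of the audit entry is the accountability boundary: the agent prioritizes, but a human makes the hiring call, and every escalation is traceable.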

Retraining schedules

Ongoing training plans for agents should mirror employee development programs. Monitor for drift, schedule regular updates, and document improvements. 

Much like employees need different types of training (technical skill sets, soft skills, compliance), agents need different updates as well, like accuracy improvements, new capability additions, security patches, and behavioral adjustments.

Retirement or decommissioning

Criteria for offboarding agents should include obsolescence, performance decline, or strategic changes. Archive decision history to preserve institutional knowledge, maintain compliance, and inform future deployments. 

Retirement planning isn’t just turning a system off. You need to preserve its value, maintain compliance, and capture what it’s learned. Each retiring agent should leave behind insights that shape smarter, more capable systems in the future.

Tackling AI lifecycle management challenges

Like HR navigating organizational change, IT faces both technical and cultural hurdles in managing the AI agent lifecycle. Technical complexity, skills gaps, and governance delays can easily stall deployment initiatives.

Standardization is the foundation of scale. Establish repeatable processes for agent evaluation, deployment, and monitoring, supported by shared templates for common use cases. From there, build internal expertise through training and cross-team collaboration.

The DataRobot Agent Workforce Platform enables enterprise-scale orchestration and governance across the agent lifecycle, automating deployment, oversight, and succession planning for a scalable digital workforce.

But ultimately, CIO leadership drives adoption. Just as HR transformations rely on executive sponsorship, agent workforce initiatives demand transparent, sustained commitment, including budget, skills development, and cultural change management.

The skills gap is real, but manageable. Partner with HR to identify and train champions who can lead agent operations, model good governance, and mentor peers. Building internal champions isn’t optional; it’s how culture scales alongside technology.

From monitoring systems to managing digital talent

IT owns the rhythm of agent performance (setting goals, monitoring outcomes, and coordinating retraining cycles). But what’s truly transformative is scale.

For the first time, IT can oversee hundreds of digital coworkers in real time, spotting trends and performance shifts as they happen. This continuous visibility turns performance management from a reactive task into a strategic discipline, one that drives measurable business value. 

With clear insight into which agents deliver the most impact, IT can make sharper decisions about deployment, investment, and capability development, treating performance data as a competitive advantage, not just an operational metric. 

Getting AI agents to operate ethically (and with compliance)

The reputational stakes for CIOs are enormous. Biased agents, privacy breaches, or compliance failures directly reflect on IT leadership. AI governance frameworks aren’t optional. They’re a required part of the enterprise infrastructure.

Just as HR teams define company values and behavioral standards, IT must establish ethical norms for digital coworkers. That means setting policies that ensure fairness, transparency, and accountability from the start. 

Three pillars define digital workforce governance: 

  1. Fairness
    Prevent discrimination and systemic bias in agent behavior. HR upholds equitable hiring practices; IT must ensure agents don’t exhibit bias in their decision-making. Regular audits, diverse testing scenarios, and bias detection tools should be standard.
  2. Compliance
    Compliance mapping to GDPR, CCPA, and industry-specific regulations requires the same rigor as human employee compliance training. Agents handling personal data need privacy safeguards; financial and healthcare agents require sector-specific oversight. 
  3. Explainability
    Every agent decision should be documented and auditable. Clear reasoning builds trust, supports accountability, and enables continuous improvement. As HR manages employee performance and conduct issues, IT needs parallel processes for digital workers.

When people understand how agents operate — and how they’re governed — trust grows, resistance falls, and adoption accelerates.

Preparing today’s IT leaders to manage tomorrow’s AI teams

A strong ROI comes from treating agents as workforce investments, not technology projects. Performance metrics, compliance frameworks, and lifecycle management then become competitive differentiators, rather than overhead costs.

AI agents are the newest members of the enterprise workforce. Managed well, they help IT and business leaders:

  • Scale without proportional headcount increases
  • Enforce consistency across global operations
  • Streamline routine tasks to focus on innovation
  • Gain agility to respond to market changes

AI agents are the future of work. And it’s IT’s stewardship that will define how the future unfolds. 

Learn more about why AI leaders choose DataRobot to help them build, operate, and govern AI agents at scale. 

The post IT as the new HR: Managing your AI workforce appeared first on DataRobot.

Turning a flaw into a superpower: Researchers redefine how robots move

A research team led by Dr. Lin Cao from the University of Sheffield's School of Electrical and Electronic Engineering has reimagined one of robotics' long-standing flaws as a breakthrough feature—unveiling a new way for soft robots to move, morph, and even "grow" with unprecedented dexterity.

Robot Talk Episode 132 – Collaborating with industrial robots, with Anthony Jules

Claire chatted to Anthony Jules from Robust.AI about their autonomous warehouse robots that work alongside humans.

Anthony Jules is the CEO and co-founder of Robust.AI, a leader in AI-driven warehouse automation. The company’s flagship product, Carter™, is built to work with people in their existing environments, without disrupting their workflows. Anthony has a career spanning over 30 years at the intersection of robotics, AI, and business. An MIT-trained roboticist, he was part of the founding team at Sapient, held leadership roles at Activision, and has built multiple startups, bringing a unique blend of technical depth and operational scale to human-centered automation.

Linear Actuators vs Rotary Actuators: The Core Choice for Humanoid Robot Joints

The robot joint module is the core hardware of humanoid robots, currently mainly divided into two major categories: rotary and linear. In humanoid robot designs, the choice often involves trade-offs based on the application scenario and manufacturing cost.

‘Brain-free’ robots that move in sync are powered entirely by air

A team led by the University of Oxford has developed a new class of soft robots that operate without electronics, motors, or computers—using only air pressure. The study, published in Advanced Materials, shows that these "fluidic robots" can generate complex, rhythmic movements and even automatically synchronize their actions.

Artificial neurons that behave like real brain cells

USC researchers built artificial neurons that replicate real brain processes using ion-based diffusive memristors. These devices emulate how neurons use chemicals to transmit and process signals, offering massive energy and size advantages. The technology may enable brain-like, hardware-based learning systems. It could transform AI into something closer to natural intelligence.

Teaching robots to map large environments

The artificial intelligence-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map, such as one of an office cubicle, while estimating the robot’s position in real time. Image courtesy of the researchers.

By Adam Zewe

A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain.

Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot’s onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue robot would need to quickly traverse large areas and process thousands of images to complete its mission.

To overcome this problem, MIT researchers drew on ideas from both recent artificial intelligence vision models and classical computer vision to develop a new system that can process an arbitrary number of images. Their system accurately generates 3D maps of complicated scenes like a crowded office corridor in a matter of seconds. 

The AI-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map while estimating the robot’s position in real-time.

Unlike many other approaches, their technique does not require calibrated cameras or an expert to tune a complex system implementation. The simpler nature of their approach, coupled with the speed and quality of the 3D reconstructions, would make it easier to scale up for real-world applications.

Beyond helping search-and-rescue robots navigate, this method could be used to make extended reality applications for wearable devices like VR headsets or enable industrial robots to quickly find and move goods inside a warehouse.

“For robots to accomplish increasingly complex tasks, they need much more complex map representations of the world around them. But at the same time, we don’t want to make it harder to implement these maps in practice. We’ve shown that it is possible to generate an accurate 3D reconstruction in a matter of seconds with a tool that works out of the box,” says Dominic Maggio, an MIT graduate student and lead author of a paper on this method.

Maggio is joined on the paper by postdoc Hyungtae Lim and senior author Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. The research will be presented at the Conference on Neural Information Processing Systems.

Mapping out a solution

For years, researchers have been grappling with an essential element of robotic navigation called simultaneous localization and mapping (SLAM). In SLAM, a robot recreates a map of its environment while orienting itself within the space.

Traditional optimization methods for this task tend to fail in challenging scenes, or they require the robot’s onboard cameras to be calibrated beforehand. To avoid these pitfalls, researchers train machine-learning models to learn this task from data.

While these learned models are simpler to implement, even the best of them can only process about 60 camera images at a time, making them infeasible for applications where a robot needs to move quickly through a varied environment while processing thousands of images.

To solve this problem, the MIT researchers designed a system that generates smaller submaps of the scene instead of the entire map. Their method “glues” these submaps together into one overall 3D reconstruction. The model is still only processing a few images at a time, but the system can recreate larger scenes much faster by stitching smaller submaps together.

“This seemed like a very simple solution, but when I first tried it, I was surprised that it didn’t work that well,” Maggio says.

Searching for an explanation, he dug into computer vision research papers from the 1980s and 1990s. Through this analysis, Maggio realized that errors in the way the machine-learning models process images made aligning submaps a more complex problem.

Traditional methods align submaps by applying rotations and translations until they line up. But these new models can introduce some ambiguity into the submaps, which makes them harder to align. For instance, a 3D submap of one side of a room might have walls that are slightly bent or stretched. Simply rotating and translating these deformed submaps to align them doesn’t work.

“We need to make sure all the submaps are deformed in a consistent way so we can align them well with each other,” Carlone explains.
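The rotation-and-translation alignment the passage describes is the classical rigid (Kabsch/Procrustes) solution. The sketch below assumes point correspondences between two submaps are already known; it illustrates the traditional step, not the researchers' more flexible method:

```python
import numpy as np


def rigid_align(source: np.ndarray, target: np.ndarray):
    """Classical rigid (Kabsch) alignment of two N x 3 point sets with known
    correspondences: find rotation R and translation t minimizing
    sum_i || R @ source_i + t - target_i ||^2."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    S, T = source - mu_s, target - mu_t        # center both point sets
    H = S.T @ T                                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

Deformed submaps break this model precisely because no single (R, t) pair can account for walls that are bent or stretched, which is what pushed the team toward richer transformations.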

A more flexible approach

Borrowing ideas from classical computer vision, the researchers developed a more flexible, mathematical technique that can represent all the deformations in these submaps. By applying mathematical transformations to each submap, this more flexible method can align them in a way that addresses the ambiguity.

Based on input images, the system outputs a 3D reconstruction of the scene and estimates of the camera locations, which the robot would use to localize itself in the space.

“Once Dominic had the intuition to bridge these two worlds — learning-based approaches and traditional optimization methods — the implementation was fairly straightforward,” Carlone says. “Coming up with something this effective and simple has potential for a lot of applications.”

Their system performed faster with less reconstruction error than other methods, without requiring special cameras or additional tools to process data. The researchers generated close-to-real-time 3D reconstructions of complex scenes like the inside of the MIT Chapel using only short videos captured on a cell phone.

The average error in these 3D reconstructions was less than 5 centimeters.

In the future, the researchers want to make their method more reliable for especially complicated scenes and work toward implementing it on real robots in challenging settings.

“Knowing about traditional geometry pays off. If you understand deeply what is going on in the model, you can get much better results and make things much more scalable,” Carlone says.

This work is supported, in part, by the U.S. National Science Foundation, U.S. Office of Naval Research, and the National Research Foundation of Korea. Carlone, currently on sabbatical as an Amazon Scholar, completed this work before he joined Amazon.

The value of physical intelligence: How researchers are working to safely advance capabilities of humanoid robots

You may not remember it, but odds are you took a few tumbles during your toddler era. You weren't alone. Falling, after all, is a natural consequence of learning to crawl, walk, climb and jump. Our balance, coordination and motor skills are developing throughout early childhood.

Digital coworkers: How AI agents are reshaping enterprise teams

Across industries, a new type of employee is emerging: the digital coworker. 

AI agents that collaborate, learn, and make decisions are changing how enterprise teams operate and grow. 

These aren’t static chatbots or RPA scripts running in the background. They’re autonomous agents that act as colleagues — not code — helping teams move faster, make smarter decisions, and scale institutional knowledge. 

Managers are now learning to hire, onboard, and supervise AI agents like human employees, while teams are redefining trust, learning how to share context, and reshaping collaboration around intelligent systems that can act independently.

For leaders, this shift isn’t just about adopting new technology. It’s about transforming how organizations work and scale, and building more adaptive, resilient teams for the age of human-AI collaboration.

This post explores how AI leaders can guide trust, collaboration, and performance as digital coworkers become part of the workforce.  

How AI agents are shifting from tools to digital coworkers

AI agents acting as digital coworkers can reason through problems, coordinate across departments, and make decisions that directly influence outcomes.

Unlike traditional rule-based automation tools, these digital colleagues have the autonomy and awareness to carry out complex tasks without constant human supervision. 

Consider supply chain operations, for instance. In a “self-fulfilling” supply chain, an agent might:

  • Monitor market conditions
  • Detect disruptions
  • Evaluate alternatives
  • Negotiate vendor adjustments

And it can do it all without a human even glancing at their dashboard. Instead of chasing updates and keeping an eye on constant market fluctuations, the human role shifts to strategy. 
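One cycle of the monitor-detect-evaluate loop above can be sketched as follows. The vendor names, lead-time threshold, and decision rule are all illustrative assumptions, and the negotiation step is deliberately left to a human or a downstream workflow in this sketch:

```python
def detect_disruption(lead_times: dict, threshold_days: int = 7) -> list:
    """Monitor step: flag vendors whose quoted lead time exceeds the threshold."""
    return [v for v, days in lead_times.items() if days > threshold_days]


def choose_alternative(candidates: dict) -> str:
    """Evaluate step: pick the backup vendor with the shortest lead time."""
    return min(candidates, key=candidates.get)


def agent_step(lead_times: dict, backups: dict) -> dict:
    """One monitor -> detect -> evaluate cycle for a hypothetical
    supply-chain agent. Returns {disrupted_vendor: recommended_backup};
    actually renegotiating contracts stays with humans in this sketch."""
    actions = {}
    for vendor in detect_disruption(lead_times):
        alternatives = backups.get(vendor)
        if alternatives:
            actions[vendor] = choose_alternative(alternatives)
    return actions
```

Running a step like this on a schedule is what lets the human role shift from chasing updates to reviewing the agent's recommended actions.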

For leaders, this shift redefines process efficiency and management itself. It completely changes what it means to assign responsibility, ensure accountability, and measure performance in a workforce that now includes intelligent systems.

Why enterprises are embracing AI employees

The rise of AI employees isn’t about chasing the latest technology trend — it’s about building a more resilient, adaptable workforce. 

Enterprises are under constant pressure to sustain performance, manage risk, and respond faster to change. Digital coworkers are emerging as a way to extend capacity and improve consistency in how teams operate. 

AI agents can already take on analytical workloads, process monitoring, and repeatable decisions that slow teams down. In doing so, they help human employees focus on the work that requires creativity, strategy, and sound judgment.

For leadership teams, value shows up in measurable outcomes:

  • Greater productivity: Agents handle repeatable tasks autonomously, 24/7, compounding efficiency across departments.
  • Operational resilience: Continuous execution reduces bottlenecks and helps teams sustain performance through change. 
  • Faster, data-driven decisions: Agents analyze, simulate, and recommend actions in real time, giving leaders an information edge with less downtime.
  • Higher human impact: Teams redirect their time toward creativity, strategy, and innovation.

Forward-looking organizations are already redesigning workflows around this partnership. In finance, agents handle “lights-out lending” processes around the clock while human analysts refine models and validate results. In operations, they monitor supply chains and surface insights before risks escalate. 

The result: a more responsive, data-driven enterprise where people and AI each focus on what they do best. 

Inside the partnership between humans and AI coworkers

Think about the process of onboarding a new team member: You introduce processes, show how systems connect, and gradually increase responsibility. Agent onboarding follows that same pattern, except the learning curve is measured in hours — not months.

Over time, the agent + employee partnership evolves. Agents handle the repeatable and time-sensitive (monitoring data flows, coordinating across systems, keeping decisions moving), while humans focus on creative, strategic, and relationship-driven work that requires context and judgment.

Let’s go back to the supply chain example above. In supply chain management, AI agents monitor demand signals, adjust inventory, and coordinate vendors automatically, while human leaders focus on long-term resilience and supplier strategy. That division of work turns human oversight into orchestration and gives teams the freedom (and time) to operate proactively instead of reactively.

This collaboration model is redefining how teams communicate, assign responsibility, and measure success, setting the stage for deeper cultural shifts.

The culture shift: Working with digital teammates

Cultural adaptation to digital coworkers follows a predictable pattern, but the timeline varies depending on how teams manage the change. Skepticism is normal early on as employees question how much they should trust automated decisions or delegate responsibility to agents. But over time, as AI coworkers prove reliable and transparent in their actions, teams feel more confident in them and collaboration starts to feel natural.

The initial hurdle often centers on trust and control. Human teams are used to knowing who’s responsible for what, how decisions get made, and where to go when problems arise. Digital agents introduce a new and unfamiliar element where some decisions happen automatically, processes run without human oversight, and coordination occurs between systems instead of people.

This “trust curve” typically:

  • Starts with skepticism: “Can this agent really handle complex tasks and decisions?”
  • Moves through cautious testing: “Let’s see how it performs on lower-risk processes.”
  • Reaches collaborative confidence: “This agent consistently makes good decisions faster than we could.”

But what happens when agents disagree with human decisions, or when their recommendations go against “the way we’ve always done it”? 

These disagreements are actually a blessing in disguise: opportunities for humans to weigh competing agent recommendations against established practice. 

It’s in these moments that hidden assumptions in your processes might surface, revealing potentially better approaches that neither humans nor agents would have discovered on their own. And the final solution might involve human expertise, agent automation, or a healthy combination of both.

Preparing for the next phase of human + AI collaboration

Moving from traditional teams to human-agent collaboration offers operational improvement and a competitive differentiation that can grow over time. Early adopters are already building organizational capabilities that competitors will struggle to replicate as they play catch-up. 

AI agents are the digital employees that can learn your business context, maintain governance, streamline your processes, and develop institutional knowledge that stays in-house. 

With agents handling more operational duties, human teams can focus on innovation, strategy, and relationship building. This gives you breathing room on growth, using the resources you already have. Organizations that embrace digital coworkers are building adaptive capacity for future challenges we can’t even anticipate (yet). 

Discover how AI leaders are preparing their organizations for the agent workforce future.

The post Digital coworkers: How AI agents are reshaping enterprise teams appeared first on DataRobot.
