
The agent workforce: Redefining how work gets done 

The real future of work isn’t remote or hybrid — it’s human + agent. 

Across enterprise functions, AI agents are taking on more of the execution of daily work while humans focus on directing how that work gets done. Less time spent on tedious admin means more time spent on strategy and innovation — which is what separates industry leaders from their competitors.

These digital coworkers aren’t your basic chatbots with brittle automations that break when someone changes a form field. AI agents can reason through problems, adapt to new situations, and help achieve major business outcomes without constant human handholding.

This new division of labor is enhancing (not replacing) human expertise, empowering teams to move faster and smarter with systems designed to support growth at scale.

What is an agent workforce, and why does it matter?

An “agent workforce” is a collection of AI agents that operate like digital employees within your organization. Unlike rule-based automation tools of the past, these agents are adaptive, reasoning systems that can handle complex, multi-step business processes with minimal supervision.

This shift matters because it’s changing the enterprise operating model: You can push more work through fewer hands — and you can do it faster, at a lower cost, and without increasing headcount.

Traditional automation understands very specific inputs, follows predetermined steps (based on those initial inputs), and gives predictable outputs. The problem is that these workflows break the moment something happens that’s outside of their pre-programmed logic.

With an agentic AI workforce, you give your agents objectives, provide context about constraints and preferences, and they figure out how to get the job done. They adapt when circumstances and business needs change, escalate issues to human teams when they hit roadblocks, and learn from each interaction (good or bad). 

How legacy automation tools compare to an agentic AI workforce:

  • Flexibility: Legacy tools are rule-based and fragile, breaking on edge cases. Agentic workforces use outcome-driven orchestration, planning, executing, and replanning to hit targets.
  • Collaboration: Legacy bots are siloed, tied to one tool or team. Agentic workforces form cross-functional swarms that coordinate across apps, data, and channels.
  • Upkeep: Legacy tools demand constant script fixes and change tickets. Agentic workforces are self-healing, adapting to UI/schema changes and retaining what they learn.
  • Adaptability: Legacy tools are deterministic only, failing outside predefined paths. Agentic workforces are ambiguity-ready, reasoning through novel inputs and escalating with context.
  • Focus: Legacy tools carry a project mindset, with outputs delivered and then parked. Agentic workforces carry a KPI mindset, executing continuously against revenue, cost, risk, or CX goals.
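As a rough illustration of that contrast, a rule-based script follows fixed steps and breaks when inputs change, while an agent loops over plan, act, observe, and replan until the objective is met or it must escalate. The sketch below is purely hypothetical — the step names, goal, and "tools" are invented for illustration, not any vendor's actual agent runtime.

```python
def escalate(state):
    """Hand off to a human with the full context the agent has gathered."""
    state["log"].append("escalated to human with full context")
    return state

def agent_loop(goal_check, steps, max_steps=10):
    """Minimal agent skeleton: try available steps toward the goal,
    observe results, and escalate with context when no step helps."""
    state = {"log": []}
    for _ in range(max_steps):
        if goal_check(state):
            state["log"].append("goal reached")
            return state
        progressed = False
        for step in steps:
            new_state = step(state)
            if new_state is not None:   # the step applied and moved things forward
                state = new_state
                progressed = True
                break
        if not progressed:              # roadblock: no step can make progress
            return escalate(state)
    return escalate(state)

# Toy objective: gather two documents; each "tool" contributes one.
def fetch_id(state):
    if "id" not in state:
        state = dict(state)
        state["id"] = True
        state["log"] = state["log"] + ["fetched ID document"]
        return state
    return None

def fetch_income(state):
    if "income" not in state:
        state = dict(state)
        state["income"] = True
        state["log"] = state["log"] + ["fetched income statement"]
        return state
    return None

result = agent_loop(lambda s: "id" in s and "income" in s, [fetch_id, fetch_income])
```

The point of the sketch is the control flow, not the toy tools: the objective is declared once, the agent decides which step to take next, and a dead end produces an escalation with context rather than a silent failure.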

But the real challenge isn’t defining a single agent — it’s scaling to a true workforce.

From one agent to a workforce

While individual agent capabilities can be impressive, the real value comes from orchestrating hundreds or thousands of these digital workers to transform entire business processes. But scaling from one agent to an entire workforce is complex, and that’s the point where most proofs-of-concept stall or fail.

The key is to treat agent development as a long-term infrastructure investment, not a “project.” Enterprises that get stuck in pilot purgatory are those that start with a plan to finish, not a plan to scale.

Scaling agents requires governance and oversight — similar to how HR manages a human workforce. Without the infrastructure to do so, everything gets harder: coordination, monitoring, and control all break down as you scale. 

One agent making decisions is manageable. Ten agents collaborating across a workflow needs structure. A hundred agents working across different business units? That takes ironed-out, enterprise-grade governance, security, and monitoring.

An agent-first AI stack is what makes it possible to scale your digital workforce with clear standards and consistent oversight. That stack includes: 

  • Compute resources that scale as needed
  • Storage systems that handle multimodal data flows
  • Orchestration platforms that coordinate agent collaboration
  • Governance frameworks that keep performance consistent and sensitive data secure

Scaling AI apps and agents to deliver business-wide impact is an organizational redesign, and should be treated as such. Recognizing this early gives you the time to invest in platforms that can manage agent lifecycles from development through deployment, monitoring, and continuous improvement. Remember, the goal is scaling through iteration and improvement, not completion.

Business outcomes over chatbots

Many of the AI agents in use today are really just dressed-up chatbots with a handful of use cases: They can answer basic questions in natural language and maybe trigger a few API calls, but they can’t move the business forward without a human in the loop.

Real enterprise agents deliver end-to-end business outcomes, not answers. 

They don’t just regurgitate information. They act autonomously, make decisions within defined parameters, and measure success the same way your business does: speed, cost, accuracy, and uptime.

Think about banking. The traditional loan approval workflow looks something like:

Human reviews application -> human checks credit score -> human validates documentation -> human makes approval decision 

This process takes days or (more likely) weeks, is error-prone, creates bottlenecks if any single piece of information is missing, and scales poorly during high-demand periods.

With an agent workforce, banks can shift to “lights-out lending,” where agents handle the entire workflow from intake to approval and run 24/7 with humans only stepping in to focus on exceptions and escalations.
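As a hedged sketch of the agent side of such a workflow, the decision logic might look like the toy policy below. The thresholds, field names, and statuses are invented for illustration only — they are not any bank's actual lending policy.

```python
from dataclasses import dataclass

@dataclass
class Application:
    credit_score: int
    docs_complete: bool
    amount: float

def decide(app: Application) -> dict:
    """Toy lights-out decision: auto-handle the clear cases,
    escalate anything that needs human judgment."""
    if not app.docs_complete:
        return {"status": "escalate", "reason": "missing documentation"}
    if app.credit_score < 580:
        return {"status": "denied", "reason": "below minimum credit score"}
    if app.credit_score >= 700 and app.amount <= 50_000:
        return {"status": "approved", "reason": "meets auto-approval policy"}
    return {"status": "escalate", "reason": "borderline case for human review"}
```

In this pattern, the clear approvals and denials run 24/7 without human touch, and the human team only ever sees the "escalate" queue — which is what makes the exception-handling role described above workable.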

The results?

  • Loan turnaround times drop from days to minutes.
  • Operational costs fall sharply.
  • Compliance and accuracy improve through consistent logic and audit trails.

In manufacturing, the same transformation is happening in self-fulfilling supply chains. Instead of humans constantly monitoring inventory levels, predicting demand, and coordinating with suppliers, autonomous agents handle the entire process. They can analyze consumption patterns, predict shortages before they happen, automatically generate purchase orders, and coordinate delivery schedules with supplier systems.
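A trivially simplified version of the shortage-prediction step might look like the reorder-point check below. The numbers, parameter names, and safety-buffer rule are invented for illustration — a real agent would forecast demand from consumption patterns rather than assume a constant daily usage.

```python
def reorder_decision(on_hand, daily_usage, lead_time_days, safety_days=3):
    """Order when stock projected at delivery would dip below a safety buffer."""
    projected_at_delivery = on_hand - daily_usage * lead_time_days
    safety_buffer = daily_usage * safety_days
    if projected_at_delivery < safety_buffer:
        # order enough to cover the lead time plus the safety buffer
        qty = daily_usage * (lead_time_days + safety_days) - on_hand
        return {"order": True, "qty": max(qty, 0)}
    return {"order": False, "qty": 0}
```

The agent's advantage is not this arithmetic, which any ERP can do, but running it continuously, feeding it live forecasts, and then actually placing and coordinating the order without waiting for a human shift to start.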

The payoff here for enterprises is significant: fewer stockouts, lower carrying costs, and production uptime that isn’t tied to shift hours.

Security, compliance, and responsible AI

Trust in your AI systems will determine whether they help your organization accelerate or stall. Once AI agents start making decisions that impact customers, finances, and regulatory compliance, the question is no longer “Is this possible?” but “Is this safe at scale?”

Agent governance and trust are make-or-break for scaling a digital workforce. That’s why they deserve board-level visibility, not a footnote in the IT strategy.

As agents gain access to sensitive systems and act on regulated data, every decision they make traces back to the enterprise. There’s no delegating accountability: Regulators and customers will expect transparent evidence of what an agent did, why it did it, and which data informed its reasoning. Black-box decision-making introduces risks that most enterprises cannot tolerate.

Human oversight will never disappear completely, but it will change. Instead of humans doing the work, they’ll shift to supervising digital workers and stepping in when human judgment or ethical reasoning is needed. That layer of oversight is your safeguard for sustaining responsible AI as your enterprise scales.

Secure AI gateways and governance frameworks form the foundation of trust in your enterprise AI, unifying control, enforcing policies, and helping maintain full visibility across agent decisions. Design these governance frameworks before deploying agents: building in agent governance and lifecycle control from the start helps avoid the costly rework and compliance risks that come with trying to retrofit your digital workforce later.

Enterprises that design with control in mind from the start build a more durable system of trust that empowers them to scale AI safely and operate confidently — even under regulatory scrutiny.

Shaping the future of work with AI agents

So, what does this mean for your competitive strategy? Agent workforces aren’t just tweaking your existing processes. They’re creating entirely new ways to compete. The advantage isn’t about faster automation, but about building an organization where:

  • Work scales faster without adding headcount or sacrificing accuracy. 
  • Decision cycles go from weeks to minutes. 
  • Innovation isn’t limited by human bandwidth.

Traditional workflows are linear and human-dependent: Person A completes Task A and passes to Person B, who completes Task B, and so on. Agent workforces let dynamic, parallel processing happen where multiple agents collaborate in real time to optimize outcomes, not just check specific tasks off a list.
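The difference between a linear handoff and parallel agent execution can be sketched with Python's asyncio. The agent names and request types below are invented for illustration; the stand-in `agent` coroutine is where real model and tool calls would go.

```python
import asyncio

async def agent(name: str, task: str) -> str:
    # Stand-in for an agent doing real work (model calls, tool use, etc.).
    await asyncio.sleep(0)  # yield control so other agents can run concurrently
    return f"{name} completed {task}"

async def linear(requests):
    # Human-style handoff: one request finishes before the next begins.
    return [await agent(f"agent-{i}", r) for i, r in enumerate(requests)]

async def parallel(requests):
    # Agent swarm: all requests in flight at once; gather preserves order.
    return await asyncio.gather(
        *(agent(f"agent-{i}", r) for i, r in enumerate(requests))
    )

requests = ["refund", "address change", "quote"]
seq = asyncio.run(linear(requests))
par = asyncio.run(parallel(requests))
```

Both variants produce the same results, but in the parallel version total latency is bounded by the slowest request rather than the sum of all of them — the structural reason agent workforces compress decision cycles.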

This is already leading to new roles that didn’t exist even five years ago:

  • Agent trainers specialize in teaching AI systems domain-specific knowledge. 
  • Agent supervisors monitor performance and jump in when situations require human judgment. 
  • Orchestration leads structure collaboration across different agents to achieve business objectives.

For early adopters, this creates an advantage that’s difficult for latecomer competitors to match. 

An agent workforce can process customer requests 10x faster than human-dependent competitors, respond to market changes in real time, and scale instantly during demand spikes. The longer enterprises wait to deploy their digital workforce, the harder it becomes to close that gap.

Looking ahead, enterprises are moving toward:

  • Reasoning engines that can handle even more complex decision-making 
  • Multimodal agents that process text, images, audio, and video simultaneously
  • Agent-to-agent collaboration for sophisticated workflow orchestration without human coordination

Enterprises that build on platforms designed for lifecycle governance and secure orchestration will define this next phase of intelligent operations. 

Leading the shift to an agent-powered enterprise

If you’re convinced that agent workforces offer a strategic opportunity, here’s how leaders move from pilot to production:

  1. Get executive sponsorship early. Agent workforce transformation starts at the top. Your CEO and board need to understand that this will fundamentally change how work gets done (for the better).
  2. Invest in infrastructure before you need it. Agent-first platforms and governance frameworks can take months to implement. If you start pilot projects on temporary foundations, you’ll create technical debt that’s more expensive to fix later.
  3. Build in governance frameworks from Day 1. Put security, compliance, and monitoring frameworks in place before your first agent goes live. These guardrails make scaling possible and safeguard your enterprise from risk as you add more agents to the mix.
  4. Partner with proven platforms that specialize in agent lifecycle management. Building agentic AI applications takes expertise that most teams haven’t developed internally yet. Partnering with platforms designed for this purpose shortens the learning curve and reduces execution risk.

Enterprises that lead with vision, invest in foundations, and operationalize governance from day one will define how the future of intelligent work takes shape.

Explore how enterprises are building, deploying, and governing secure, production-ready AI agents with the Agent Workforce Platform. 

The post The agent workforce: Redefining how work gets done  appeared first on DataRobot.

Quantum simulations that once needed supercomputers now run on laptops

A team at the University at Buffalo has made it possible to simulate complex quantum systems without needing a supercomputer. By expanding the truncated Wigner approximation, they’ve created an accessible, efficient way to model real-world quantum behavior. Their method translates dense equations into a ready-to-use format that runs on ordinary computers. It could transform how physicists explore quantum phenomena.

Figure AI shows latest robot – Figure 03

Figure AI has just introduced its latest android robot, Figure 03. The new robot comes with many improvements over the previous one, such as redesigned hands with softer fingertips, improved grasp, and new in-hand cameras, wireless charging through electromagnetic induction, faster actuators with improved torque density, an improved audio system, improved camera […]

Main Components of a Humanoid Robot

A humanoid robot has several main systems working together:

  • Sensors: cameras, microphones, tactile sensors, motion sensors, and more.
  • AI: artificial intelligence handles decision making, computer vision, navigation, motion planning, and interaction with the surroundings, coordinating and managing all other systems.
  • Power supply and management systems: optimization of energy usage and efficient usage of […]

Scientists create a magnetic lantern that moves like it’s alive

A team of engineers at North Carolina State University has designed a polymer “Chinese lantern” that can rapidly snap into multiple stable 3D shapes—including a lantern, a spinning top, and more—by compression or twisting. By adding a magnetic layer, they achieved remote control of the shape-shifting process, allowing the lanterns to act as grippers, filters, or expandable mechanisms.

Robot Talk Episode 128 – Making microrobots move, with Ali K. Hoshiar

Claire chatted to Ali K. Hoshiar from University of Essex about how microrobots move and work together.

Ali Hoshiar is a Senior Lecturer in Robotics at the University of Essex and Director of the Robotics for Under Millimetre Innovation (RUMI) Lab. He leads the EPSRC-funded ‘In-Target’ project and was awarded the university’s Best Interdisciplinary Research Award. His research focuses on microrobotics, soft robotics, and data-driven mechatronic systems for medical and agri-tech applications. He also holds an MBA, adding strategic and commercial insight to his technical work.


Robotic programming brings increased productivity and faster return on investment

"By optimizing robot programs, users can see a significant increase in productivity while generating more profit with their robot, using robots for short production runs, and delivering closest conformance to design." Carlos Marcovici, Robotmaster Authorized Partner, Brazil

Why GPS fails in cities. And how it was brilliantly fixed

Our everyday GPS struggles in “urban canyons,” where skyscrapers bounce satellite signals, confusing even advanced navigation systems. NTNU scientists created SmartNav, combining satellite corrections, wave analysis, and Google’s 3D building data for remarkable precision. Their method achieved accuracy within 10 centimeters during testing. The breakthrough could make reliable urban navigation accessible and affordable worldwide.

Scientists suggest the brain may work best with 7 senses, not just 5

Scientists at Skoltech developed a new mathematical model of memory that explores how information is encoded and stored. Their analysis suggests that memory works best in a seven-dimensional conceptual space — equivalent to having seven senses. The finding implies that both humans and AI might benefit from broader sensory inputs to optimize learning and recall.

Interview with Zahra Ghorrati: developing frameworks for human activity recognition using wearable sensors


In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Zahra Ghorrati is developing frameworks for human activity recognition using wearable sensors. We caught up with Zahra to find out more about this research, the aspects she has found most interesting, and her advice for prospective PhD students.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I am pursuing my PhD at Purdue University, where my dissertation focuses on developing scalable and adaptive deep learning frameworks for human activity recognition (HAR) using wearable sensors. I was drawn to this topic because wearables have the potential to transform fields like healthcare, elderly care, and long-term activity tracking. Unlike video-based recognition, which can raise privacy concerns and requires fixed camera setups, wearables are portable, non-intrusive, and capable of continuous monitoring, making them ideal for capturing activity data in natural, real-world settings.

The central challenge my dissertation addresses is that wearable data is often noisy, inconsistent, and uncertain, depending on sensor placement, movement artifacts, and device limitations. My goal is to design deep learning models that are not only computationally efficient and interpretable but also robust to the variability of real-world data. In doing so, I aim to ensure that wearable HAR systems are both practical and trustworthy for deployment outside controlled lab environments.

This research has been supported by the Polytechnic Summer Research Grant at Purdue. Beyond my dissertation work, I contribute to the research community as a reviewer for conferences such as CoDIT, CTDIAC, and IRC, and I have been invited to review for AAAI 2026. I was also involved in community building, serving as Local Organizer and Safety Chair for the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2025), and continuing as Safety Chair for AAMAS 2026.

Could you give us an overview of the research you’ve carried out so far during your PhD?

So far, my research has focused on developing a hierarchical fuzzy deep neural network that can adapt to diverse human activity recognition datasets. In my initial work, I explored a hierarchical recognition approach, where simpler activities are detected at earlier levels of the model and more complex activities are recognized at higher levels. To enhance both robustness and interpretability, I integrated fuzzy logic principles into deep learning, allowing the model to better handle uncertainty in real-world sensor data.

A key strength of this model is its simplicity and low computational cost, which makes it particularly well suited for real-time activity recognition on wearable devices. I have rigorously evaluated the framework on multiple benchmark datasets of multivariate time series and systematically compared its performance against state-of-the-art methods, where it has demonstrated both competitive accuracy and improved interpretability.
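To make "degrees of confidence" concrete, here is a generic triangular fuzzy-membership sketch — not the model from the dissertation; the set names and ranges over a hypothetical normalized movement-intensity signal are invented for illustration.

```python
def triangular(x, a, b, c):
    """Degree (0..1) to which x belongs to a fuzzy set peaking at b over [a, c]."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets over a normalized movement-intensity signal.
SETS = {
    "still":   (0.0, 0.0, 0.4),
    "walking": (0.2, 0.5, 0.8),
    "running": (0.6, 1.0, 1.0),
}

def memberships(x):
    """Soft classification: degrees of confidence rather than one hard label."""
    return {name: triangular(x, *abc) for name, abc in SETS.items()}
```

Because the sets overlap, an intensity of 0.3 belongs partly to "still" and partly to "walking" — the system reasons in degrees instead of forcing a rigid classification, which is the interpretability property described above.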

Is there an aspect of your research that has been particularly interesting?

Yes, what excites me most is discovering how different approaches can make human activity recognition both smarter and more practical. For instance, integrating fuzzy logic has been fascinating, because it allows the model to capture the natural uncertainty and variability of human movement. Instead of forcing rigid classifications, the system can reason in terms of degrees of confidence, making it more interpretable and closer to how humans actually think.

I also find the hierarchical design of my model particularly interesting. Recognizing simple activities first, and then building toward more complex behaviors, mirrors the way humans often understand actions in layers. This structure not only makes the model efficient but also provides insights into how different activities relate to one another.

Beyond methodology, what motivates me is the real-world potential. The fact that these models can run efficiently on wearables means they could eventually support personalized healthcare, elderly care, and long-term activity monitoring in people’s everyday lives. And since the techniques I’m developing apply broadly to time series data, their impact could extend well beyond HAR, into areas like medical diagnostics, IoT monitoring, or even audio recognition. That sense of both depth and versatility is what makes the research especially rewarding for me.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

Moving forward, I plan to further enhance the scalability and adaptability of my framework so that it can effectively handle large-scale datasets and support real-time applications. A major focus will be on improving both the computational efficiency and interpretability of the model, ensuring it is not only powerful but also practical for deployment in real-world scenarios.

While my current research has focused on human activity recognition, I am excited to broaden the scope to the wider domain of time series classification. I see great potential in applying my framework to areas such as sound classification, physiological signal analysis, and other time-dependent domains. This will allow me to demonstrate the generalizability and robustness of my approach across diverse applications where time-based data is critical.

In the longer term, my goal is to develop a unified, scalable model for time series analysis that balances adaptability, interpretability, and efficiency. I hope such a framework can serve as a foundation for advancing not only HAR but also a broad range of healthcare, environmental, and AI-driven applications that require real-time, data-driven decision-making.

What made you want to study AI, and in particular the area of wearables?

My interest in wearables began during my time in Paris, where I was first introduced to the potential of sensor-based monitoring in healthcare. I was immediately drawn to how discreet and non-invasive wearables are compared to video-based methods, especially for applications like elderly care and patient monitoring.

More broadly, I have always been fascinated by AI’s ability to interpret complex data and uncover meaningful patterns that can enhance human well-being. Wearables offered the perfect intersection of my interests, combining cutting-edge AI techniques with practical, real-world impact, which naturally led me to focus my research on this area.

What advice would you give to someone thinking of doing a PhD in the field?

A PhD in AI demands both technical expertise and resilience. My advice would be:

  • Stay curious and adaptable, because research directions evolve quickly, and the ability to pivot or explore new ideas is invaluable.
  • Investigate combining disciplines. AI benefits greatly from insights in fields like psychology, healthcare, and human-computer interaction.
  • Most importantly, choose a problem you are truly passionate about. That passion will sustain you through the inevitable challenges and setbacks of the PhD journey.

Approaching your research with curiosity, openness, and genuine interest can make the PhD not just a challenge, but a deeply rewarding experience.

Could you tell us an interesting (non-AI related) fact about you?

Outside of research, I’m passionate about leadership and community building. As president of the Purdue Tango Club, I grew the group from just 2 students to over 40 active members, organized weekly classes, and hosted large events with internationally recognized instructors. More importantly, I focused on creating a welcoming community where students feel connected and supported. For me, tango is more than dance: it’s a way to bring people together, bridge cultures, and balance the intensity of research with creativity and joy.

I also apply these skills in academic leadership. For example, I serve as Local Organizer and Safety Chair for the AAMAS 2025 and 2026 conferences, which has given me hands-on experience managing events, coordinating teams, and creating inclusive spaces for researchers worldwide.

About Zahra

Zahra Ghorrati is a PhD candidate and teaching assistant at Purdue University, specializing in artificial intelligence and time series classification with applications in human activity recognition. She earned her undergraduate degree in Computer Software Engineering and her master’s degree in Artificial Intelligence. Her research focuses on developing scalable and interpretable fuzzy deep learning models for wearable sensor data. She has presented her work at leading international conferences and journals, including AAMAS, PAAMS, FUZZ-IEEE, IEEE Access, System and Applied Soft Computing. She has served as a reviewer for CoDIT, CTDIAC, and IRC, and has been invited to review for AAAI 2026. Zahra also contributes to community building as Local Organizer and Safety Chair for AAMAS 2025 and 2026.

Friction-based landing gear enables drones to safely land on fast-moving vehicles

Drones have become a more common sight in our skies and are used for everything from consumer hobbies like aerial photography to industrial applications such as farming, surveillance and logistics. However, they are not without their shortcomings, and one of those is landings. Almost half of all drone accidents occur when these uncrewed aerial vehicles attempt to touch down, especially in challenging environments or on fast-moving objects. But that could be a thing of the past as researchers have developed a system that can land smoothly on vehicles traveling at speed.