Q&A with Uncrewed & Autonomous Systems 2025
Introduction to ROS
China’s Out With a New Killer AI Creative Writer
Writers looking for AI designed to write brilliantly may want to check out a new AI engine from China.
Dubbed Kimi K2 Thinking, the new tool promises the ability to navigate hundreds of steps on its way to auto-generating writing that is “shockingly good,” according to writer Grant Harvey.
Observes Harvey: “We co-wrote a YA novel called ‘The Salt Circus’—and the AI actually revised itself, scrapped bad ideas and showed genuine creative judgment.”
In other news and analysis on AI writing:
*Major Web Host Promises ‘Hour-a-Day’ Savings with AI-Powered Email: Hostinger is out with a new AI-powered email suite designed to save you serious time on email each day.
Key features of the email suite include:
–AI email writer
–Automated smart email replies
–AI email summarizer
–AI writing stylizer
Warning: AI forged in China is often coded with the ability to forward the data you input to the Chinese Communist Party, a caveat worth keeping in mind with tools like Kimi K2.
For more on saving time — while boosting writing prowess — with AI, check out “Bringing in ChatGPT for Email,” by Joe Dysart.
*Use AI or You’re Fired: In another sign that the days of ‘AI is Your Buddy’ are fading fast, increasing numbers of businesses have turned to strong-arming employees when it comes to AI.
Observes Wall Street Journal writer Lindsay Ellis: “Rank-and-file employees across corporate America have grown worried over the past few years about being replaced by AI.
“Something else is happening now: AI is costing workers their jobs if their bosses believe they aren’t embracing the technology fast enough.”
*Auto-Write a Non-Fiction Book in an Hour: AI startup StoryOne says it has cracked the code on using AI to crank out a full-length non-fiction book in about an hour.
StoryOne promises that anyone can use its software to transform ideas, podcasts, interviews, research or draft manuscripts into a high-quality, fact-based, non-fiction book in about an hour.
The software has been endorsed by Michael Reinartz, chief innovation officer, Vodafone Germany.
*ChatGPT-Maker Books One Millionth Business Customer: OpenAI recently booked its one millionth business customer – making it the fastest-growing business app in history, according to the company.
Observes writer Mike Moore: “This goes along with its 800 million weekly users using ChatGPT in some form — which has helped make the platform synonymous with the constantly growing appetite for AI in our daily lives.
“The company has revealed a host of new tools in recent months to help boost adoption, including ‘company knowledge,’ where ChatGPT can reason across tools like Slack, SharePoint, Google Drive, GitHub and more to get answers.”
*AI Has a Hit Song: While AI’s ability to write and record songs has been an ongoing worry for music creators, the stakes just got much higher: AI now has its own hit song.
Dubbed “How Was I Supposed to Know?” the tune currently sits at number thirty on Billboard’s Adult R&B Airplay chart.
The powerhouse singer behind the hit is also AI-generated: Xania Monet, who was ‘signed’ as an artist by Hallwood Media.
*Microsoft Copilot Gets an AI Research Boost: Writers looking for yet another new option in AI research may want to check out ‘Researcher with Computer Use.’
It’s a new feature embedded in Microsoft’s answer to ChatGPT – Copilot.
The new tool includes an AI agent that uses a secure, virtual computer to navigate public, gated and interactive Web content.
Users also have the option to green-light the tool to access databases inside their enterprises.
*Study: AI Agents Virtually Useless at Completing Freelance Assignments: New research finds that much-ballyhooed AI agents are largely unable to complete everyday assignments found on freelance sites like Fiverr and Upwork.
Observes writer Frank Landymore: “The top performer, they found, was an AI agent from the Chinese startup Manus with an automation rate of just 2.5 percent — meaning it was only able to complete 2.5 percent of the projects it was assigned at a level that would be acceptable as commissioned work in a real-world freelancing job, the researchers said.
“Second place was a tie, at 2.1 percent, between Elon Musk’s Grok 4 and Anthropic’s Claude Sonnet 4.5.”
*AI’s New Gig: Writing Official Quarterly and Annual Reports: Writer Mark Maurer reports that increasing numbers of official financial reports from public companies are being written in large part by AI.
Observes Maurer: “The efforts are the latest sign of finance executives’ growing ease with AI for public-facing work that was long handled solely by humans.”
*AI BIG PICTURE: Fed Chairman Confirms: AI is Eating Jobs: U.S. Federal Reserve Chairman Jerome Powell just made it official: AI is eliminating jobs at businesses that have embraced the new technology.
Observes writer Mike Kaput: “At a recent press conference, Powell noted that once you strip-out statistical over-counting, job creation is pretty close to zero.
“He then confirmed what many CEOs are now openly telling the Fed and investors: AI is allowing them to do more work with fewer people.”

Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com help everyone interested in AI-generated writing.
–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
IT as the new HR: Managing your AI workforce
Your organization is already hiring digital workers. Now, the question is whether IT is actually managing these “people-like” systems as part of the workforce, or as just another application in the tech stack.
Far from just another AI tool, AI agents are becoming digital coworkers that need the same lifecycle management as human employees: onboarding, supervision, performance reviews, and eventually, responsible decommissioning.
Many companies are already deploying agents to handle customer inquiries, process invoices, and make recommendations. The mistake is treating agents like software instead of managing them like team members.
IT is the natural leader to take on this “human resources for AI agents” role, managing agents’ lifecycle proactively versus inheriting a mismanaged system later. That’s how organizations move beyond pilots and manage agent lifecycles responsibly — with IT leading in partnership with business and compliance teams.
This is Post 3 in our Agent Workforce series, exploring how IT is well-positioned to manage agents as workforce assets, not just technology deployments.
Why IT is becoming the new HR for AI agents
AI agents are already steering IT into an expanded role. Just as HR oversees the employee lifecycle, IT is beginning to take ownership of managing the complete journey of AI agents:
- Recruiting the right talent (selecting appropriate agents)
- Onboarding (integrating with enterprise systems)
- Supervising performance (monitoring accuracy and behavior)
- Training and development (retraining and updates)
- Offboarding (decommissioning and knowledge transfer)
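To make the HR parallel concrete, here is a minimal sketch of an agent record moving through the lifecycle stages above. It is plain illustrative Python, not a DataRobot feature; the stage names, fields, and example agent are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum, auto

class LifecycleStage(Enum):
    RECRUITED = auto()      # selected for a role after evaluation
    ONBOARDED = auto()      # integrated with enterprise systems
    SUPERVISED = auto()     # in production, under monitoring
    IN_TRAINING = auto()    # being retrained or updated
    OFFBOARDED = auto()     # decommissioned, knowledge archived

# Allowed transitions, mirroring how an employee moves through HR stages.
ALLOWED = {
    LifecycleStage.RECRUITED:   {LifecycleStage.ONBOARDED, LifecycleStage.OFFBOARDED},
    LifecycleStage.ONBOARDED:   {LifecycleStage.SUPERVISED, LifecycleStage.OFFBOARDED},
    LifecycleStage.SUPERVISED:  {LifecycleStage.IN_TRAINING, LifecycleStage.OFFBOARDED},
    LifecycleStage.IN_TRAINING: {LifecycleStage.SUPERVISED, LifecycleStage.OFFBOARDED},
    LifecycleStage.OFFBOARDED:  set(),
}

@dataclass
class AgentRecord:
    agent_id: str
    role: str
    stage: LifecycleStage = LifecycleStage.RECRUITED
    history: list = field(default_factory=list)

    def transition(self, new_stage: LifecycleStage, reason: str) -> None:
        # Refuse moves that skip lifecycle steps, and keep an auditable trail.
        if new_stage not in ALLOWED[self.stage]:
            raise ValueError(f"{self.stage.name} -> {new_stage.name} is not a valid lifecycle move")
        self.history.append((datetime.now(timezone.utc), self.stage.name, new_stage.name, reason))
        self.stage = new_stage

# Example: a hypothetical invoice-processing agent moving through its lifecycle.
agent = AgentRecord(agent_id="agent-0042", role="invoice-processing")
agent.transition(LifecycleStage.ONBOARDED, "connected to ERP sandbox")
agent.transition(LifecycleStage.SUPERVISED, "passed accuracy gate")
```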
HR doesn’t just hire people and walk away. It creates policies, sets cultural norms, and enforces accountability frameworks. IT must do the same thing for agents, balancing developer autonomy with governance requirements, much like HR balances employee freedom with company policy.
The stakes of getting it wrong are comparable, too. HR works to prevent unvetted hires that could damage the business and brand. IT must prevent deployment that introduces uncontrolled risk. When business units spin up their own agents without oversight or approval, it’s like bringing on a new hire without a background check.
When IT owns agent lifecycle management, organizations can curb shadow AI, embed governance from day one, and measure ROI more effectively. IT becomes the single source of truth (SSOT) for enterprise-wide consistency across digital workers.
But governance is only part of the job. IT’s larger mandate is to build trust between humans and digital coworkers, ensuring clarity, accountability, and confidence in every agent decision.
How IT manages the digital coworker lifecycle
IT isn’t just tech support anymore. With a growing digital workforce, managing AI agents requires the same structure and oversight HR applies to employees. When agents misbehave or underperform, the financial and reputational costs can be significant.
Recruiting the right agents
Think of agent deployment as hiring: Just like you’d interview candidates to determine their capabilities and readiness for the role, IT needs to evaluate accuracy, cost, latency, and role fit before any agent is deployed.
It’s a balance between technical flexibility and enterprise governance. Developers need room to experiment and iterate, but IT still owns consistency and control. Frameworks should enable innovation within governance standards.
When business teams build or deploy agents without IT alignment, visibility and governance start to slip, turning small experiments into enterprise-level risks. This “shadow AI” can quickly erode consistency and accountability.
Without a governed path to deployment, IT will inherit the risk. An agent catalog solves this with pre-approved, enterprise-ready agents that business units can deploy quickly and safely. It’s self-service that maintains control and prevents shadow AI from becoming a cleanup project later on.
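As a rough illustration of that governed path, the sketch below models a pre-approved agent catalog as plain Python. The catalog entries, thresholds, and function names are hypothetical assumptions, not a description of any product's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogEntry:
    name: str
    role: str
    approved: bool           # passed IT's evaluation (accuracy, cost, latency, role fit)
    max_monthly_cost: float  # budget guardrail agreed with the business unit

# Pre-approved, enterprise-ready agents that business units may self-serve.
AGENT_CATALOG = {
    "faq-responder":     CatalogEntry("faq-responder", "customer service", approved=True, max_monthly_cost=500.0),
    "invoice-extractor": CatalogEntry("invoice-extractor", "finance ops", approved=True, max_monthly_cost=800.0),
    "prototype-scraper": CatalogEntry("prototype-scraper", "research", approved=False, max_monthly_cost=0.0),
}

def request_deployment(agent_name: str, business_unit: str) -> str:
    """Self-service deployment that still enforces IT's governance gate."""
    entry = AGENT_CATALOG.get(agent_name)
    if entry is None:
        return f"DENIED: '{agent_name}' is not in the catalog (potential shadow AI)."
    if not entry.approved:
        return f"DENIED: '{agent_name}' has not passed IT evaluation yet."
    return f"APPROVED: {business_unit} may deploy '{agent_name}' (role: {entry.role})."

print(request_deployment("faq-responder", "support"))
print(request_deployment("homegrown-bot", "marketing"))
```

Business units get speed; IT keeps a single control point where shadow deployments are caught before they become a cleanup project.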
Supervising and upskilling agents
Monitoring is the performance review portion of the agent lifecycle, tracking task adherence, accuracy, cost efficiency, and business alignment — the same metrics HR uses for people.
Retraining cycles mirror employee development programs. Agents need regular updates to maintain performance and adapt to changing requirements, just as people need ongoing training to stay current (and relevant).
Proactive feedback loops matter:
- Identify high-value interactions
- Document failure modes
- Track improvement over time
This historical knowledge becomes invaluable for managing your broader agent workforce.
Performance degradation is often gradual, like an employee becoming slowly disengaged over time. Regular check-ins with agents (reviewing their decision patterns, accuracy trends, and resource consumption) can help IT spot potential issues before they become bigger problems.
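As an illustration only, a "performance review" for an agent can be reduced to a periodic check over logged outcomes. The metric names, fields, and thresholds below are assumptions made for the sketch, not values prescribed by the post.

```python
from statistics import mean

def review_agent(interactions: list[dict],
                 min_accuracy: float = 0.95,
                 max_cost_per_task: float = 0.10) -> dict:
    """Periodic check over logged agent outcomes; flags gradual degradation."""
    accuracy = mean(1.0 if i["correct"] else 0.0 for i in interactions)
    avg_cost = mean(i["cost_usd"] for i in interactions)
    failures = [i["task_id"] for i in interactions if not i["correct"]]
    return {
        "accuracy": round(accuracy, 3),
        "avg_cost_usd": round(avg_cost, 4),
        "failure_modes": failures,  # documented for the feedback loop
        "needs_attention": accuracy < min_accuracy or avg_cost > max_cost_per_task,
    }

# Example review over a small, made-up interaction log.
log = [
    {"task_id": "t1", "correct": True,  "cost_usd": 0.04},
    {"task_id": "t2", "correct": False, "cost_usd": 0.07},
    {"task_id": "t3", "correct": True,  "cost_usd": 0.05},
]
print(review_agent(log))
```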
Offboarding and succession planning
When a long-tenured employee leaves without proper knowledge transfer, it’s hard to recoup those lost insights. The same risks apply to agents. Decision patterns, learned behaviors, and accumulated context should be preserved and transferred to successor systems to make them even better.
Like employee offboarding and replacement, agent retirement is the final step of agentic workforce planning and management. It involves archiving decision history, compliance records, and operational context.
Continuity depends on IT’s discipline in documentation, version control, and transition planning. Handled well, this leads to succession planning, ensuring each new generation of agents starts smarter than the last.
How IT establishes control: The agent governance framework
Proactive governance starts at onboarding, not after the first failure. Agents should immediately integrate into enterprise systems, workflows, and policies with controls already in place from day one. This is the “employee handbook” moment for digital coworkers. CIOs set the expectations and guardrails early, or risk months of remediation later.
Provisioning and access controls
Identity management for agents needs the same rigor as human accounts, with clear permissions, audit trails, and role-based access controls. For example, an agent handling financial data needs different permissions than one managing customer inquiries.
Access rights should align to each agent’s role. For example:
- Customer service agents can access CRMs and knowledge bases, but not financial systems.
- Procurement agents can read supplier data, but can’t modify contracts without human approval.
- Analytics agents can query specific databases, but not personally identifiable information.
The principle of least privilege applies equally to digital and human workers. Start off extra restrictive, then expand access based on proven need and performance.
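A minimal sketch of least-privilege, role-based access for agents follows. The role names, resource identifiers, and audit format are illustrative assumptions; a real deployment would back this with the organization's identity provider and an immutable audit log.

```python
# Least-privilege policy: each agent role starts with a narrow allow-list.
ROLE_PERMISSIONS = {
    "customer_service": {"crm:read", "kb:read"},             # no financial systems
    "procurement":      {"supplier_data:read"},              # contract changes need a human
    "analytics":        {"sales_db:query", "ops_db:query"},  # no PII tables
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default; expand the allow-list only on proven need."""
    return action in ROLE_PERMISSIONS.get(role, set())

def audit(agent_id: str, role: str, action: str) -> str:
    decision = "ALLOW" if is_allowed(role, action) else "DENY"
    # In practice this record would be written to tamper-evident storage.
    return f"{decision} agent={agent_id} role={role} action={action}"

print(audit("cs-agent-7", "customer_service", "crm:read"))       # ALLOW
print(audit("cs-agent-7", "customer_service", "billing:write"))  # DENY
```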
Workflow integration
Map workflows and escalation paths that define when agents act independently and when they collaborate with humans. Establish clear triggers, document decision boundaries, and build feedback loops for continuous improvement.
For example, an artificial intelligence resume screener might prioritize and escalate top candidates to human recruiters using defined handoff rules and audit trails. Ultimately, agents should enhance human capabilities, not blur the lines of accountability.
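A minimal sketch of such a handoff rule, assuming hypothetical scores, thresholds, and field names (the resume-screening scenario is the one from the paragraph above, but the logic is purely illustrative):

```python
def route_candidate(score: float, confidence: float,
                    escalation_threshold: float = 0.75) -> dict:
    """Decide whether the screening agent acts alone or hands off to a recruiter."""
    if confidence < escalation_threshold:
        decision, owner = "escalate_to_recruiter", "human"
    elif score >= 0.8:
        decision, owner = "advance_to_interview", "agent"
    else:
        decision, owner = "send_polite_rejection", "agent"
    # Every handoff is recorded so accountability stays clear.
    return {"decision": decision, "owner": owner, "score": score, "confidence": confidence}

print(route_candidate(score=0.9, confidence=0.92))  # agent acts independently
print(route_candidate(score=0.9, confidence=0.55))  # low confidence -> human reviews
```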
Retraining schedules
Ongoing training plans for agents should mirror employee development programs. Monitor for drift, schedule regular updates, and document improvements.
Much like employees need different types of training (technical skill sets, soft skills, compliance), agents need different updates as well, like accuracy improvements, new capability additions, security patches, and behavioral adjustments.
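One way to picture that scheduling logic is a small decision rule that picks the next "development program" for an agent. The thresholds and update categories below are assumptions that mirror the employee-training analogy, not a prescribed policy.

```python
from datetime import date, timedelta

def plan_update(last_update: date,
                drift_score: float,
                open_security_advisories: int,
                today: date | None = None) -> str:
    """Pick the next update type for an agent: compliance first, then drift, then routine refresh."""
    today = today or date.today()
    if open_security_advisories > 0:
        return "security_patch"               # the compliance-training equivalent
    if drift_score > 0.2:
        return "accuracy_retraining"          # behavior has drifted from its baseline
    if today - last_update > timedelta(days=90):
        return "routine_capability_refresh"   # quarterly upskilling
    return "no_update_needed"

print(plan_update(date(2025, 6, 1), drift_score=0.05,
                  open_security_advisories=0, today=date(2025, 11, 1)))
```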
Retirement or decommissioning
Criteria for offboarding agents should include obsolescence, performance decline, or strategic changes. Archive decision history to preserve institutional knowledge, maintain compliance, and inform future deployments.
Retirement planning isn’t just turning a system off. You need to preserve its value, maintain compliance, and capture what it’s learned. Each retiring agent should leave behind insights that shape smarter, more capable systems in the future.
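A minimal sketch of what "preserving its value" could look like in practice, assuming hypothetical field names and local JSON storage (a real archive would live in compliant, access-controlled systems):

```python
import json
from datetime import datetime, timezone

def decommission(agent_id: str, decision_log: list[dict],
                 reason: str, successor_id: str | None = None) -> dict:
    """Retire an agent while preserving its institutional knowledge."""
    archive = {
        "agent_id": agent_id,
        "retired_at": datetime.now(timezone.utc).isoformat(),
        "reason": reason,                 # obsolescence, performance decline, strategy change
        "successor_id": successor_id,     # who inherits this agent's duties
        "decision_history": decision_log, # kept for compliance and future training
    }
    with open(f"{agent_id}_archive.json", "w") as f:
        json.dump(archive, f, indent=2)
    return archive

decommission("invoice-agent-v1",
             decision_log=[{"task": "invoice-123", "outcome": "approved"}],
             reason="replaced by higher-accuracy model",
             successor_id="invoice-agent-v2")
```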
Tackling AI lifecycle management challenges
Like HR navigating organizational change, IT faces both technical and cultural hurdles in managing the AI agent lifecycle. Technical complexity, skills gaps, and governance delays can easily stall deployment initiatives.
Standardization is the foundation of scale. Establish repeatable processes for agent evaluation, deployment, and monitoring, supported by shared templates for common use cases. From there, build internal expertise through training and cross-team collaboration.
The DataRobot Agent Workforce Platform enables enterprise-scale orchestration and governance across the agent lifecycle, automating deployment, oversight, and succession planning for a scalable digital workforce.
But ultimately, CIO leadership drives adoption. Just as HR transformations rely on executive sponsorship, agent workforce initiatives demand transparent, sustained commitment, including budget, skills development, and cultural change management.
The skills gap is real, but manageable. Partner with HR to identify and train champions who can lead agent operations, model good governance, and mentor peers. Building internal champions isn’t optional; it’s how culture scales alongside technology.
From monitoring systems to managing digital talent
IT owns the rhythm of agent performance (setting goals, monitoring outcomes, and coordinating retraining cycles). But what’s truly transformative is scale.
For the first time, IT can oversee hundreds of digital coworkers in real time, spotting trends and performance shifts as they happen. This continuous visibility turns performance management from a reactive task into a strategic discipline, one that drives measurable business value.
With clear insight into which agents deliver the most impact, IT can make sharper decisions about deployment, investment, and capability development, treating performance data as a competitive advantage, not just an operational metric.
Getting AI agents to operate ethically (and with compliance)
The reputational stakes for CIOs are enormous. Biased agents, privacy breaches, or compliance failures directly reflect on IT leadership. AI governance frameworks aren’t optional. They’re a required part of the enterprise infrastructure.
Just as HR teams define company values and behavioral standards, IT must establish ethical norms for digital coworkers. That means setting policies that ensure fairness, transparency, and accountability from the start.
Three pillars define digital workforce governance:
- Fairness: Prevent discrimination and systemic bias in agent behavior. HR upholds equitable hiring practices; IT must ensure agents don’t exhibit bias in their decision-making. Regular audits, diverse testing scenarios, and bias detection tools should be standard.
- Compliance: Mapping to GDPR, CCPA, and industry-specific regulations requires the same rigor as human employee compliance training. Agents handling personal data need privacy safeguards; financial and healthcare agents require sector-specific oversight.
- Explainability: Every agent decision should be documented and auditable. Clear reasoning builds trust, supports accountability, and enables continuous improvement. As HR manages employee performance and conduct issues, IT needs parallel processes for digital workers.
When people understand how agents operate — and how they’re governed — trust grows, resistance falls, and adoption accelerates.
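To make "documented and auditable" concrete, here is one possible shape for a per-decision audit record. The field names and the loan-triage example are hypothetical illustrations, not a standard schema.

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    agent_id: str
    input_summary: str   # what the agent was asked to do (with PII minimized)
    decision: str        # what it did
    reasoning: str       # why, in plain language, for human reviewers
    policy_checks: dict  # e.g. bias audit result, data-handling basis
    timestamp: str = ""

    def to_json(self) -> str:
        record = asdict(self)
        record["timestamp"] = datetime.now(timezone.utc).isoformat()
        return json.dumps(record)

print(DecisionRecord(
    agent_id="loan-triage-3",
    input_summary="pre-screen application #A-991",
    decision="escalated to human underwriter",
    reasoning="applicant income data incomplete; below confidence threshold",
    policy_checks={"fairness_audit": "pass", "gdpr_basis": "contract"},
).to_json())
```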
Preparing today’s IT leaders to manage tomorrow’s AI teams
A strong ROI comes from treating agents as workforce investments, not technology projects. Performance metrics, compliance frameworks, and lifecycle management then become competitive differentiators, rather than overhead costs.
AI agents are the newest members of the enterprise workforce. Managed well, they help IT and business leaders:
- Scale without proportional headcount increases
- Enforce consistency across global operations
- Streamline routine tasks to focus on innovation
- Gain agility to respond to market changes
AI agents are the future of work. And it’s IT’s stewardship that will define how the future unfolds.
Humans have remote touch ‘seventh sense’ like sandpipers, research shows
Turning a flaw into a superpower: Researchers redefine how robots move
Mission-Critical Automation: Ensuring Uninterrupted Operations Through Robust Power Management
Robot Talk Episode 132 – Collaborating with industrial robots, with Anthony Jules
Claire chatted to Anthony Jules from Robust.AI about their autonomous warehouse robots that work alongside humans.
Anthony Jules is the CEO and co-founder of Robust.AI, a leader in AI-driven warehouse automation. The company’s flagship product, Carter, is built to work with people in their existing environments, without disrupting their workflows. Anthony has a career spanning over 30 years at the intersection of robotics, AI, and business. An MIT-trained roboticist, he was part of the founding team at Sapient, held leadership roles at Activision, and has built multiple startups, bringing a unique blend of technical depth and operational scale to human-centered automation.
Linear Actuators vs Rotary Actuators: The Core Choice for Humanoid Robot Joints
Flexible mapping technique can help search-and-rescue robots navigate unpredictable environments
Advances in heavy-duty robotics and intelligent control support future fusion reactor maintenance
‘Brain-free’ robots that move in sync are powered entirely by air
Artificial neurons that behave like real brain cells
Teaching robots to map large environments
The artificial intelligence-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map (such as of an office cubicle) while estimating the robot’s position in real time. Image courtesy of the researchers.
By Adam Zewe
A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain.
Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot’s onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue robot would need to quickly traverse large areas and process thousands of images to complete its mission.
To overcome this problem, MIT researchers drew on ideas from both recent artificial intelligence vision models and classical computer vision to develop a new system that can process an arbitrary number of images. Their system accurately generates 3D maps of complicated scenes like a crowded office corridor in a matter of seconds.
The AI-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map while estimating the robot’s position in real-time.
Unlike many other approaches, their technique does not require calibrated cameras or an expert to tune a complex system implementation. The simpler nature of their approach, coupled with the speed and quality of the 3D reconstructions, would make it easier to scale up for real-world applications.
Beyond helping search-and-rescue robots navigate, this method could be used to make extended reality applications for wearable devices like VR headsets or enable industrial robots to quickly find and move goods inside a warehouse.
“For robots to accomplish increasingly complex tasks, they need much more complex map representations of the world around them. But at the same time, we don’t want to make it harder to implement these maps in practice. We’ve shown that it is possible to generate an accurate 3D reconstruction in a matter of seconds with a tool that works out of the box,” says Dominic Maggio, an MIT graduate student and lead author of a paper on this method.
Maggio is joined on the paper by postdoc Hyungtae Lim and senior author Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. The research will be presented at the Conference on Neural Information Processing Systems.
Mapping out a solution
For years, researchers have been grappling with an essential element of robotic navigation called simultaneous localization and mapping (SLAM). In SLAM, a robot recreates a map of its environment while orienting itself within the space.
Traditional optimization methods for this task tend to fail in challenging scenes, or they require the robot’s onboard cameras to be calibrated beforehand. To avoid these pitfalls, researchers train machine-learning models to learn this task from data.
While they are simpler to implement, even the best models can only process about 60 camera images at a time, making them infeasible for applications where a robot needs to move quickly through a varied environment while processing thousands of images.
To solve this problem, the MIT researchers designed a system that generates smaller submaps of the scene instead of the entire map. Their method “glues” these submaps together into one overall 3D reconstruction. The model is still only processing a few images at a time, but the system can recreate larger scenes much faster by stitching smaller submaps together.
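The paper's exact pipeline isn't reproduced here, but the stitching idea can be sketched in a few lines of Python: process the image stream in chunks the model can handle, reconstruct a submap per chunk, then fold each new submap into the running global map. The chunk size of 60 comes from the limit mentioned above; the placeholder functions and random point clouds are illustrative assumptions.

```python
import numpy as np

def reconstruct_submap(image_chunk):
    """Stand-in for a learned multi-view reconstruction model that can only
    handle a limited number of frames at once (returns an Nx3 point cloud)."""
    rng = np.random.default_rng(len(image_chunk))
    return rng.normal(size=(500, 3))

def align(submap, global_map):
    """Stand-in alignment step: estimate a transform from the overlap between
    the new submap and the global map, then apply it. (The discussion below
    explains why a purely rigid transform is often not enough.)"""
    return submap  # identity transform in this sketch

def build_map(image_stream, chunk_size=60):
    global_map = None
    for start in range(0, len(image_stream), chunk_size):
        chunk = image_stream[start:start + chunk_size]
        submap = reconstruct_submap(chunk)
        global_map = submap if global_map is None else np.vstack([global_map, align(submap, global_map)])
    return global_map

# A fake stream of 300 "images" processed 60 at a time.
print(build_map(list(range(300))).shape)
```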
“This seemed like a very simple solution, but when I first tried it, I was surprised that it didn’t work that well,” Maggio says.
Searching for an explanation, he dug into computer vision research papers from the 1980s and 1990s. Through this analysis, Maggio realized that errors in the way the machine-learning models process images made aligning submaps a more complex problem.
Traditional methods align submaps by applying rotations and translations until they line up. But these new models can introduce some ambiguity into the submaps, which makes them harder to align. For instance, a 3D submap of one side of a room might have walls that are slightly bent or stretched. Simply rotating and translating these deformed submaps to align them doesn’t work.
“We need to make sure all the submaps are deformed in a consistent way so we can align them well with each other,” Carlone explains.
A more flexible approach
Borrowing ideas from classical computer vision, the researchers developed a more flexible, mathematical technique that can represent all the deformations in these submaps. By applying mathematical transformations to each submap, this more flexible method can align them in a way that addresses the ambiguity.
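The article doesn't spell out the researchers' transformation family, so the sketch below only contrasts the two general ideas with standard tools: a rigid fit (rotation plus translation, via the classical Kabsch method) versus a more flexible least-squares affine fit that can also absorb a consistent stretch or shear between submaps. It illustrates the principle, not the paper's actual method.

```python
import numpy as np

def rigid_fit(A, B):
    """Best rotation R and translation t mapping points A onto B (Kabsch)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cB - R @ cA
    return R, t

def affine_fit(A, B):
    """Least-squares affine map that can also absorb stretch or shear."""
    A_h = np.hstack([A, np.ones((len(A), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A_h, B, rcond=None)  # B ~= A_h @ M
    return M

# A submap that is slightly stretched along one axis relative to the global map.
rng = np.random.default_rng(0)
global_pts = rng.normal(size=(200, 3))
stretched = global_pts * np.array([1.05, 1.0, 1.0])  # the "deformed" submap

R, t = rigid_fit(stretched, global_pts)
rigid_err = np.linalg.norm(stretched @ R.T + t - global_pts, axis=1).mean()

M = affine_fit(stretched, global_pts)
affine_err = np.linalg.norm(np.hstack([stretched, np.ones((200, 1))]) @ M - global_pts, axis=1).mean()

print(f"rigid alignment error:  {rigid_err:.4f}")   # residual error remains
print(f"affine alignment error: {affine_err:.4f}")  # stretch is absorbed
```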
Based on input images, the system outputs a 3D reconstruction of the scene and estimates of the camera locations, which the robot would use to localize itself in the space.
“Once Dominic had the intuition to bridge these two worlds — learning-based approaches and traditional optimization methods — the implementation was fairly straightforward,” Carlone says. “Coming up with something this effective and simple has potential for a lot of applications.”
Their system was faster and produced lower reconstruction error than other methods, without requiring special cameras or additional tools to process data. The researchers generated close-to-real-time 3D reconstructions of complex scenes like the inside of the MIT Chapel using only short videos captured on a cell phone.
The average error in these 3D reconstructions was less than 5 centimeters.
In the future, the researchers want to make their method more reliable for especially complicated scenes and work toward implementing it on real robots in challenging settings.
“Knowing about traditional geometry pays off. If you understand deeply what is going on in the model, you can get much better results and make things much more scalable,” Carlone says.
This work is supported, in part, by the U.S. National Science Foundation, U.S. Office of Naval Research, and the National Research Foundation of Korea. Carlone, currently on sabbatical as an Amazon Scholar, completed this work before he joined Amazon.

