
Identity-first AI governance: Securing the agentic workforce

AI agents are now operating inside production systems, querying Snowflake, updating Salesforce, and executing business logic autonomously. In many enterprises, they authenticate using static API keys or shared credentials rather than distinct identities in the corporate IDP. 

Authenticating autonomous systems through shared credentials introduces real governance risk.

When an agent executes an action, logs often attribute it to a developer key or service account instead of a clearly defined autonomous actor. Attribution becomes ambiguous. Least privilege weakens. Revocation may require rotating credentials or modifying code rather than disabling a governed identity. In a non-deterministic environment, that delay slows investigation and containment.

Shared credentials turn autonomous systems into “shadow identities”: actors operating inside production without a distinct, governed identity in the enterprise directory.

Most organizations have monitoring and guardrails in place. The issue is structural. Autonomous systems are operating outside first-class identity governance within the same control plane that secures human users. Closing this gap requires aligning agents with the identity model that governs your workforce, ensuring every autonomous actor is traceable, permission-scoped, and centrally revocable.

The hidden risk: Modern agentic AI is non-deterministic

Traditional enterprise software follows predefined logic. Given the same input, it produces the same output.

Agentic AI systems operate differently. Instead of executing a fixed script, they use probabilistic models to:

  • Evaluate context
  • Retrieve information dynamically
  • Construct action paths in real time 

If you instruct an agent to optimize a supply chain route, it may reference weather forecasts, fuel cost data, and historical performance before determining a route. That flexibility enables agents to solve complex, multi-system problems that traditional software cannot address.

However, non-deterministic systems introduce new governance considerations:

  • Execution paths may vary from one request to the next.
  • Retrieved data sources may differ depending on context.
  • Outputs can contain reasoning errors or inaccurate conclusions.
  • Actions may extend beyond what a developer explicitly scripted.

When a system can continuously access company data and execute actions autonomously, it cannot be governed like a static application. It requires clear identity attribution, tightly scoped permissions, continuous monitoring, and centralized revocation authority.

Why credential-based security breaks in agentic environments

Most enterprises still secure AI agents using static API keys or shared service credentials. That model worked when software executed predictable logic. It breaks down when autonomous systems operate across production environments.

When an agent authenticates with a shared credential, activity is logged but not clearly attributed. A Salesforce update or Snowflake query may appear to originate from a developer key rather than from a distinct autonomous system. Attribution becomes blurred. Least privilege is harder to enforce. Containment depends on rotating credentials or modifying code instead of disabling a governed identity.

The problem is identity governance, not monitoring visibility.

Traditional security assumes credentials map to accountable users or services. Shared credentials break that assumption. In a non-deterministic environment, that ambiguity slows investigation and increases exposure.

The strategic shift: Identity-first governance

The governance gap created by shadow identities cannot be solved with additional monitoring. It requires a structural shift in how autonomous systems are governed.

When a system can dynamically retrieve data, generate probabilistic outputs, and execute actions across enterprise platforms, it is no longer just an application. It is an operational actor. Governance must reflect that.

Identity-first governance treats autonomous systems as first-class identities within the same directory that governs human users. Each agent receives a distinct identity, clearly scoped permissions, and auditable activity attribution.

This changes the control model. Access is tied to identity rather than static credentials. Actions are logged to a specific actor. Permissions can be adjusted without modifying code. Revocation occurs at the identity layer, not inside application logic.

The result is a unified identity plane for human and autonomous actors. Instead of building parallel AI security stacks, organizations extend existing identity controls. Policy remains consistent. Incident response remains centralized. Innovation scales without fragmenting governance.

A practical example: Identity-backed agents in practice

One architectural response to the identity governance gap is to provision autonomous systems as first-class identities inside the corporate directory, rather than authenticating them through static API keys.

This approach requires coordination between agent orchestration and enterprise identity infrastructure. Through a deep integration between DataRobot and Okta, agents built in the DataRobot Agentic Workforce Platform can now be provisioned as governed, first-class identities directly inside Okta instead of relying on shared credentials.

In this model, each agent receives a directory-backed identity. Authentication occurs through short-lived, policy-controlled tokens rather than long-lived credentials embedded in code. Actions are logged to a specific autonomous actor. Permissions are scoped using existing least-privilege controls.
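As a sketch of that pattern, here is the token-caching logic an agent might use. The IdP callback, names, and TTL are illustrative assumptions, not a real Okta or DataRobot API:

```python
import time

class AgentTokenClient:
    """Minimal sketch of short-lived, identity-bound credentials.

    `issue_token` stands in for an IdP token endpoint (e.g. an
    OAuth2 client-credentials grant); names and TTL are
    illustrative, not a real Okta or DataRobot interface.
    """

    def __init__(self, agent_id, issue_token, ttl_seconds=300):
        self.agent_id = agent_id          # directory-backed identity
        self._issue_token = issue_token   # callback to the IdP
        self._ttl = ttl_seconds
        self._token = None
        self._expires_at = 0.0

    def get_token(self):
        # Re-issue only when the current token is expired: no
        # long-lived credential is ever stored in code or config.
        now = time.time()
        if self._token is None or now >= self._expires_at:
            self._token = self._issue_token(self.agent_id)
            self._expires_at = now + self._ttl
        return self._token
```

Because every token is minted against the agent's own identity and expires quickly, revoking the identity at the IdP starves the agent of credentials without any code change.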

This directly addresses the attribution and revocation challenges described earlier. When an agent is deployed, its identity is created within the corporate IDP. When permissions change, governance workflows apply. If behavior deviates from expectation, security teams can restrict or disable the agent at the identity layer, immediately adjusting its access across integrated systems such as Salesforce or Snowflake.

The impact is operational. Autonomous systems become visible actors inside the same identity plane that secures human users. Rather than introducing a parallel AI security stack, organizations extend the controls they already operate and audit.


Three governance principles for agentic AI

As autonomous systems move into production environments, governance must become explicit. At minimum, three principles are essential.

1. Eliminate static credentials

Autonomous systems should not authenticate through long-lived API keys or shared service accounts. Production agents must use short-lived, policy-controlled credentials tied to a governed identity. If an autonomous system can access enterprise systems, it must authenticate as a distinct actor within the identity provider.

2. Audit the actor, not the platform

Security logs should attribute actions to specific autonomous identities, not to generic services or developer keys. In non-deterministic systems, platform-level visibility is insufficient. Governance requires actor-level attribution to support investigation, anomaly detection, and access review.
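A minimal sketch of actor-level attribution in a log record follows. The schema is an assumption for illustration, not a specific SIEM or IdP log format; the point is that `actor_id` names the individual agent identity rather than a shared key:

```python
import json
import time
import uuid

def audit_event(actor_id, actor_type, action, target, outcome):
    """Build one actor-attributed audit record (illustrative schema)."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "actor_id": actor_id,        # the distinct autonomous identity
        "actor_type": actor_type,    # "agent" vs "human" vs "service"
        "action": action,            # e.g. "snowflake.query"
        "target": target,            # resource acted on
        "outcome": outcome,          # "success" / "denied" / "error"
    }

# Serialize for a log pipeline; every line traces to one actor.
line = json.dumps(audit_event(
    "agent-supply-route-01", "agent",
    "snowflake.query", "db.logistics.routes", "success"))
```

With records like this, an access review or anomaly query can filter on a single agent's identity instead of untangling a shared service account.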

3. Centralize revocation authority

Security teams must be able to restrict or disable an autonomous system through the primary identity control plane. Containment should not depend on code changes, credential rotation, or redeployment. Identity must function as an operational control surface.
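The identity-layer kill switch can be sketched as a status check in front of every action. The directory interface here is hypothetical; in practice it would be the IdP's identity-status lookup:

```python
def guarded_execute(directory, agent_id, action):
    """Identity-layer containment, sketched.

    `directory` stands in for the IdP's identity-status store
    (hypothetical interface). Disabling the identity blocks every
    subsequent action without code changes, credential rotation,
    or redeployment.
    """
    status = directory.get(agent_id, "unknown")
    if status != "active":
        return {"executed": False, "reason": f"identity {status}"}
    return {"executed": True, "result": action()}
```

Flipping the identity to "disabled" in one place contains the agent everywhere that enforces the check, which is exactly the centralization this principle calls for.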

Non-deterministic systems are not inherently unsafe. But when autonomous systems operate without identity level governance, exposure increases. Clear identity boundaries convert autonomy from a governance liability into a manageable extension of enterprise operations.

AI governance is workforce governance

Agentic systems now operate inside core workflows, access regulated data, and execute actions with real consequence. Governance models designed for deterministic software are not sufficient for autonomous systems.

If a system can act, it must exist as a governed identity within the same control plane that secures your workforce. Identity becomes the foundation for attribution, least privilege, monitoring, and centralized revocation. When agents operate inside the corporate directory rather than outside it, oversight scales with innovation.

This model is taking shape through closer integration between agent orchestration platforms and enterprise identity providers, including the collaboration between DataRobot and Okta. Rather than building parallel AI security stacks, organizations can extend the identity infrastructure they already operate to autonomous systems. To see how identity-backed agents can operate securely inside enterprise environments, explore The Enterprise Guide to Agentic AI or schedule a demo to learn how DataRobot and Okta integrate agent orchestration with enterprise identity governance.

The post Identity-first AI governance: Securing the agentic workforce appeared first on DataRobot.

The foundation for a governed agent workforce: DataRobot and NVIDIA RTX PRO 4500

Moving AI agents from experimental pilots to a full-scale enterprise workforce requires more than just a model; it requires a hardware foundation that balances high-performance inference with industry-leading cost and power performance.

DataRobot has technically validated the NVIDIA RTX PRO 4500, a Blackwell-architecture GPU, as an inference engine for the DataRobot Agent Workforce Platform. This combination provides the compute power and control necessary for mission-critical autonomous agents.

Performance without over-provisioning

For the modern AI Factory, the NVIDIA RTX PRO 4500 occupies a strategic middle ground in the NVIDIA lineup. With 32GB of high-speed GDDR7 memory, 800 GB/s bandwidth, FP4 precision, and a 2nd-Gen Transformer Engine, it sits between the entry-level L4 (24GB) and the high-end L40S (48GB).

This 32GB VRAM buffer is specifically optimized for agentic workflows:

  • Local Execution: Enough headroom to host sophisticated LLMs alongside multi-agent orchestration layers.
  • Low Latency: Reduces the delay in complex reasoning tasks, essential for real-time applications.
  • Data Privacy: Supports on-premises deployment for sensitive enterprise data.
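A rough way to reason about that headroom: weight memory scales with parameter count times bits per weight. A back-of-envelope sketch follows, where the flat overhead allowance and the model sizes are illustrative assumptions, not validated figures:

```python
def vram_estimate_gb(params_b, bits_per_weight, overhead_gb=4.0):
    """Back-of-envelope VRAM needed to host a model.

    Weights only, plus a flat allowance for KV cache, activations,
    and runtime overhead; real usage depends on context length,
    batch size, and serving stack.
    """
    weight_gb = params_b * 1e9 * (bits_per_weight / 8) / 1e9
    return weight_gb + overhead_gb

# A ~30B-parameter model quantized to FP4 (4 bits/weight) needs
# about 15 GB of weights, leaving room inside a 32 GB card for
# orchestration layers alongside the LLM.
fits = vram_estimate_gb(30, 4) <= 32
```

The same arithmetic shows why FP4 support matters on a 32 GB card: the same model at FP16 would need roughly four times the weight memory.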

Validated use cases for the enterprise

The price-to-performance ratio of the NVIDIA RTX PRO 4500 excels in two high-impact areas:

1. Real-time logistics and business planning: By leveraging NVIDIA cuOpt, agents can solve complex routing and scheduling problems. The NVIDIA RTX PRO 4500 provides the parallel processing power to run these heavy optimization engines in concert with the agent’s reasoning LLM on a single node.

2. Production-grade RAG pipelines: Retrieval-Augmented Generation (RAG) is the backbone of reliable agents. Combined with NeMo Retriever NIM, including multimodal document understanding models that extract structured content from tables, charts, and complex page elements, this hardware excels at the embedding, indexing, and retrieval steps, ensuring agents maintain context across diverse data formats without performance bottlenecks.

From infrastructure to orchestration

Hardware provides the raw horsepower, but the DataRobot Agent Workforce Platform provides the ability to leverage that compute to build useful customer applications in a secure, governed manner. As organizations transition to autonomous agents, DataRobot provides runtime and build environments to fully utilize the GPU's power.

Runtime

  • Seamless, scalable, and cost-effective inferencing
  • Embedded governance and monitoring in agents and apps
  • Out-of-the-box security and identity

Build

  • Comprehensive set of builder tools
  • Extensive evaluation
  • Embedded hooks to make deployment easy

Completing the stack with DataRobot

Hardware is the engine, and DataRobot’s Agent Workforce Platform makes it work for the business. While the NVIDIA RTX PRO 4500 provides the compute, DataRobot provides the platform to build and manage mission-critical agents with guardrails, observability, and governance.

By combining NVIDIA’s market-leading hardware with DataRobot’s end-to-end platform, organizations can finally transition from experimental AI to a governed, scalable agent workforce. Whether you are running on-premises today or looking toward a hybrid cloud future, this stack is the definitive blueprint for the AI-driven enterprise.

The post The foundation for a governed agent workforce: DataRobot and NVIDIA RTX PRO 4500 appeared first on DataRobot.

4D printing technology uses waste sulfur to enable self-actuating soft robots

A joint research team led by Dr. Dong-Gyun Kim of the Korea Research Institute of Chemical Technology (KRICT), Professor Jeong Jae Wie of Hanyang University, and Professor Yong Seok Kim of Sejong University report the world's first 4D printing technology based on sulfur-rich polymers that respond to heat, light, and magnetic fields. The study was published in Advanced Materials.

Graphene-based sensor to improve robot touch

Schematic showing the materials used in the sensor and the sensing array on a robotic manipulator. Figure from Multiscale-structured miniaturized 3D force sensors. Reproduced under a CC BY 4.0 licence.

Robots are becoming increasingly capable in vision and movement, yet touch remains one of their major weaknesses. Now, researchers have developed a miniature tactile sensor that could give robots something much closer to a human sense of touch.

The technology, developed by researchers at the University of Cambridge, is based on liquid metal composites and graphene – a two-dimensional form of carbon. The ‘skin’ allows robots to detect not just how hard they are pressing on an object, but also the direction of applied forces, whether an object is slipping, and even how rough a surface is, at a scale small enough to rival the spatial resolution of human fingertips. Their results are reported in the journal Nature Materials.

Human fingers rely on multiple types of mechanoreceptors to sense pressure, force, vibration, and texture simultaneously. Reproducing this level of multidimensional tactile perception in artificial systems is a significant challenge, especially in devices that are both small and durable enough for practical use.

“Most existing tactile sensors are either too bulky, too fragile, too complex to manufacture or unable to accurately distinguish between normal and tangential forces,” said Professor Tawfique Hasan from the Cambridge Graphene Centre, who led the research. “This has been a major barrier to achieving truly dexterous robotic manipulation.”

To overcome this, the research team developed a soft, flexible composite material, combining graphene sheets, deformable metal microdroplets, and nickel particles, embedded in a silicone matrix.

Inspired by the microstructures found in human skin, the researchers shaped the material into tiny pyramids, some as small as 200 micrometres across. These pyramid structures concentrate stress at their tips, enabling the sensor to detect extremely small forces while maintaining a wide measurement range.

The result is a tactile sensor sensitive enough to detect a grain of sand. Compared with existing flexible tactile sensors, the new device improves size and detection limits by roughly an order of magnitude.

The sensor can also distinguish shear forces from normal pressure, a capability that allows it to detect when an object begins to slip. By measuring signals from four electrodes beneath each pyramid, the sensor can mathematically reconstruct the full three-dimensional force vector in real time.
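As a purely illustrative sketch of the idea, the shear components can be recovered from opposing-electrode differences and the normal force from the summed signal. This simplified differential model is an assumption for exposition, not the calibration the researchers used:

```python
def force_vector(s_left, s_right, s_bottom, s_top,
                 k_shear=1.0, k_normal=0.25):
    """Toy reconstruction of a 3D force from four electrode signals.

    A deliberately simplified differential model (not the method in
    the Nature Materials paper): opposing-electrode differences give
    the shear components, the summed signal gives the normal force.
    Gains k_shear / k_normal are illustrative placeholders.
    """
    fx = k_shear * (s_right - s_left)     # shear along x
    fy = k_shear * (s_top - s_bottom)     # shear along y
    fz = k_normal * (s_left + s_right + s_bottom + s_top)  # normal
    return (fx, fy, fz)
```

A symmetric press gives zero shear and a positive normal component; an asymmetric signal pattern indicates tangential force, which is the signature used for slip detection.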

In demonstrations, the team integrated the sensors into robotic grippers. The robots were able to grasp fragile objects, such as thin paper tubes, without crushing them. Unlike conventional force sensors, which rely on prior information about an object’s properties, the new system adapts in real time through slip detection.

At even smaller scales, microsensor arrays could identify the mass, geometry, and material density of tiny metal spheres by analysing force magnitude and direction. This opens the door to applications in minimally invasive surgery or microrobotics, where conventional force sensors are far too large.

Beyond robotics, the technology could have significant implications for prosthetics. Advanced artificial limbs increasingly rely on tactile feedback to provide users with a sense of touch. Highly sensitive, miniaturised 3D force sensors could enable more natural interactions with objects, improving control, safety, and user confidence.

“Our approach shows that bulky mechanical structures or complex optics are not required to achieve high-resolution 3D tactile sensing,” said lead author Dr Guolin Yun, a former Royal Society Newton International Fellow at Cambridge, and now Professor at the University of Science and Technology of China. “By combining smart materials with skin-inspired structures, we achieve performance that comes remarkably close to human touch.”

Looking ahead, the researchers believe the sensors could be miniaturised even further, potentially below 50 micrometres, approaching the density of mechanoreceptors in human skin. Future versions may also integrate temperature and humidity sensing, moving closer to a fully multimodal artificial skin.

As robots increasingly move out of controlled factory environments and into homes, hospitals, and unpredictable real-world settings, such advances in touch could be transformative — allowing machines not just to see and act, but to truly feel.

A patent application has been filed through Cambridge Enterprise, the University’s innovation arm. The research was supported by the Royal Society, the Henry Royce Institute, and the Advanced Research and Invention Agency (ARIA). Tawfique Hasan is a Fellow of Churchill College, Cambridge.

Reference

Multiscale-structured miniaturized 3D force sensors, Guolin Yun, Zesheng Chen, Zhuo Chen, Jinrui Chen, Binghan Zhou, Mingfei Xiao, Michael Stevens, Manish Chhowalla & Tawfique Hasan, Nature Materials (2026).

SAP AI Agents: How Enterprises Are Deploying Agentic AI on SAP

The Problem That Brought You Here

Your SAP environment runs the core of the business — procurement, inventory, production planning, finance. And now leadership is asking what AI can actually do on top of it. Not a demo. Not a proof of concept. Something that runs in production and solves a real bottleneck.

SAP AI agents are the answer a growing number of enterprise IT and operations teams are landing on. This article explains what they are, where they are being deployed today, and what it takes to put one into a live SAP environment.

USM Business Systems is a specialized SAP AI delivery partner based in Ashburn, VA. We place SAP BTP AI developers, AI Core engineers, and enterprise LLM integration specialists inside enterprises and system integrators executing SAP AI programs.

What Is a SAP AI Agent?

An AI agent is software that perceives its environment, reasons about a goal, takes actions, and checks results — without a human directing each step. When that environment is SAP, the agent reads SAP data, calls SAP APIs or workflows, interprets the output, and acts again.

SAP has built AI agent infrastructure directly into its platform. SAP Joule, the AI copilot embedded across S/4HANA, BTP, and SAP Analytics Cloud, uses an agentic architecture under the hood. Developers can extend it using SAP AI Core, the managed AI runtime where custom models and agents are deployed and governed at enterprise scale.

The practical result is an agent that can, for example, monitor a supplier’s delivery performance in SAP, flag an anomaly, cross-reference historical data, draft a purchase order adjustment, and route it for approval — without a procurement analyst touching it.

Where Enterprises Are Deploying SAP AI Agents Today

  • Procurement and Supplier Intelligence

Agents monitor supplier delivery windows, contract compliance, and pricing variances inside SAP Ariba and S/4HANA. When a pattern signals risk — a supplier consistently shipping 4 days late on a specific SKU category — the agent flags it, pulls the relevant contract terms, and surfaces a recommended action. Procurement teams report 60-70% reductions in manual monitoring time after deploying these agents [Gartner, 2024 Supply Chain AI Survey].

  • Production Scheduling and Capacity Planning

In manufacturing environments, agents integrated with SAP PP (Production Planning) adjust schedules dynamically based on real-time inventory levels, machine availability, and demand signals from SAP IBP. The agent doesn’t replace the planner — it does the 45 minutes of data gathering and cross-referencing that used to happen before every planning decision.

  • Finance and Accounts Payable Automation

Agents working in SAP Finance match invoices against purchase orders, flag discrepancies above a defined threshold, and route exceptions to the right reviewer. Companies using this pattern report 80%+ straight-through processing rates on standard invoices within 90 days of deployment [McKinsey, 2024 Finance AI Report].

  • Inventory and Demand Signal Processing

Agents read point-of-sale signals, seasonal demand patterns, and supplier lead times from SAP, then recommend reorder quantities and safety stock adjustments. This is particularly high-value in food production and retail distribution where demand volatility is high and the cost of stockouts is immediate.

What is the difference between SAP Joule and a custom SAP AI agent?

SAP Joule is SAP’s native AI copilot — it works within SAP’s defined interaction patterns and covers general tasks across S/4HANA, SAP SuccessFactors, and other SAP applications. A custom SAP AI agent is built to solve a specific workflow problem in your environment, using SAP AI Core or SAP BTP as the infrastructure. Custom agents handle tasks Joule does not cover natively and can integrate with non-SAP data sources inside the same workflow.

Do SAP AI agents require a full BTP implementation to deploy?

Not necessarily. Agents that work purely within S/4HANA APIs can be deployed with targeted BTP services rather than a full BTP platform rollout. The right architecture depends on where your data lives, what your agent needs to access, and your existing SAP landscape. A scoping conversation typically takes 30 minutes to map this out.
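Several of the use cases above come down to decision rules the agent applies after its data gathering. For the inventory case, here is a minimal sketch of classic reorder-point logic; the formula and service-level factor are textbook illustrations, and in practice the inputs would come from SAP demand history rather than literals:

```python
import math

def reorder_point(daily_demand, lead_time_days, demand_std,
                  service_z=1.65):
    """Classic reorder-point rule an inventory agent might apply.

    reorder point = expected demand over lead time + safety stock,
    where safety stock = z * sigma_daily * sqrt(lead_time).
    service_z = 1.65 corresponds to roughly a 95% service level;
    all numbers here are illustrative.
    """
    safety_stock = service_z * demand_std * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock
```

The agent's value is not the formula itself but keeping its inputs current: demand, lead times, and variability read live from SAP instead of a quarterly spreadsheet.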

What Makes SAP AI Agent Deployments Fail?

Most SAP AI agent projects that stall do so for one of three reasons:

  • The agent was built without a clean data feed. Agents that read SAP master data often encounter inconsistent coding, missing fields, or legacy data structures that were never cleaned because no one needed them to be. The agent surfaces the problem immediately.
  • The workflow boundary was too broad at the start. ‘Automate procurement’ is not an agent design. ‘Monitor supplier on-time delivery for the top 50 SKUs and flag variance above 10%’ is. Scoping matters more here than in almost any other AI project type.
  • The team building it did not have SAP AI Core experience. Standard ML engineering skills do not transfer cleanly to SAP’s AI infrastructure. SAP AI Core has its own API patterns, lifecycle management approach, and governance requirements. Engineers who have not worked inside it add 4-8 weeks of ramp time to every deployment.
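The narrowly scoped supplier check described above can be sketched in a few lines. The data shape and field names are illustrative; real inputs would come from S/4HANA delivery documents:

```python
def flag_late_suppliers(deliveries, threshold=0.10):
    """Flag (supplier, SKU) pairs whose late rate exceeds a threshold.

    `deliveries` maps (supplier, sku) -> list of bools, True meaning
    the delivery was late. The 10% threshold mirrors the example
    scope; the input shape is an illustrative assumption.
    """
    flags = []
    for (supplier, sku), outcomes in deliveries.items():
        if not outcomes:
            continue  # no history yet; nothing to judge
        late_rate = sum(outcomes) / len(outcomes)
        if late_rate > threshold:
            flags.append({"supplier": supplier, "sku": sku,
                          "late_rate": round(late_rate, 3)})
    return flags
```

A scope this tight is testable against history, explainable to procurement, and cheap to monitor, which is precisely why narrow boundaries succeed where "automate procurement" stalls.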

What a SAP AI Agent Deployment Actually Looks Like

A typical first agent deployment for a mid-to-large SAP environment follows this sequence:

  • Week 1-2: Workflow scoping. Identify the specific process, the SAP modules involved, the data fields the agent needs to read, and the action it will take on completion.
  • Week 3-4: Data readiness assessment. Confirm that the relevant SAP master data and transactional data are clean enough for the agent to reason accurately. Identify gaps.
  • Week 5-8: Build and test in SAP AI Core. Deploy the agent model, connect to SAP APIs, build the agentic loop, run on historical data.
  • Week 9-10: Controlled live run. Agent runs in parallel with the existing manual process. Outputs are compared. Confidence thresholds are tuned.
  • Week 11-12: Production deployment with monitoring. Agent goes live. A dashboard tracks decision volume, exception rate, and accuracy. A human review loop handles edge cases.
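The controlled live run in weeks 9-10 can be sketched as a simple agreement report between the agent and the existing manual process. The 95% gate here is an illustrative go-live threshold, not a prescribed value:

```python
def parallel_run_report(agent_decisions, manual_decisions,
                        min_agreement=0.95):
    """Compare agent output to the manual baseline, case by case.

    Both inputs are parallel lists of decisions for the same cases;
    the agreement gate is an illustrative readiness criterion.
    """
    assert len(agent_decisions) == len(manual_decisions)
    matches = sum(a == m for a, m in
                  zip(agent_decisions, manual_decisions))
    agreement = matches / len(agent_decisions)
    return {
        "cases": len(agent_decisions),
        "agreement": agreement,
        "ready_for_production": agreement >= min_agreement,
    }
```

Disagreements are as useful as the score: each one is either a tuning case for the agent's confidence thresholds or an error in the manual process worth surfacing.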

Why USM Business Systems?

USM Business Systems is a CMMi Level 3, Oracle Gold Partner AI and IT services firm headquartered in Ashburn, VA. With 1,000+ engineers, 2,000+ delivered applications, and 27 years of enterprise delivery experience, USM specializes in AI implementation for supply chain, pharma, manufacturing, and SAP environments. Our SAP AI practice places specialized engineers inside enterprise programs within days — on contract, as dedicated delivery pods, or on a project basis.

Ready to put SAP AI into production? Book a 30-minute scoping call with our SAP AI team at usmsystems.com.

FAQ

What SAP modules are most commonly used with AI agents?

SAP S/4HANA, SAP Ariba, SAP IBP, SAP PP, SAP Finance, and SAP Datasphere are the most active areas. The agent infrastructure runs on SAP AI Core and BTP regardless of which module the agent is reading or acting on.

How long does a first SAP AI agent deployment take?

A well-scoped first agent typically reaches production in 10-14 weeks. Projects that try to automate too broad a workflow or that start with messy master data take longer.

Do we need to train a model from scratch?

Most SAP AI agent deployments use pre-trained LLMs or SAP’s foundation models as the reasoning layer, fine-tuned or prompted for the specific workflow. Training from scratch is rarely necessary and significantly extends timelines.

Can SAP AI agents work with non-SAP systems in the same workflow?

Yes. SAP AI Core supports external API connections, so an agent can read a SAP data source, call a third-party logistics API, and write a result back to SAP in the same workflow loop.
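A minimal sketch of that loop follows, with stand-in callables for the SAP read, the external API call, and the write-back. None of these are real SAP AI Core interfaces; only the shape of the loop is the point:

```python
def workflow_step(read_sap, call_logistics_api, write_sap, order_id):
    """One workflow iteration spanning SAP and a non-SAP system.

    The three callables are hypothetical stand-ins for an S/4HANA
    read, a third-party logistics API call, and an SAP write-back.
    """
    order = read_sap(order_id)                       # 1. read SAP data
    eta = call_logistics_api(order["tracking_id"])   # 2. external call
    write_sap(order_id, {"expected_delivery": eta})  # 3. write back
    return eta
```

Keeping each boundary behind a callable also makes the loop testable with stubs before the agent ever touches production systems.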

What governance controls exist for SAP AI agents?

SAP AI Core includes lifecycle management, model versioning, audit logging, and role-based access. Agents deployed in regulated industries like pharma can be configured to require human approval above defined thresholds before taking action.


Sorry, No Fleshbags

Social Network for AI Agents Only Snapped-up by Mark Zuckerberg

Meta CEO Mark Zuckerberg has acquired a social network designed for AI agents only – no humans allowed.

Essentially, AI agents interact, talk and commiserate with one another on the text-based network – dubbed Moltbook – much like humans do on other social networks.

As for Moltbook’s human inventors: They got a lucky break with the sale.

Observes Reuters: “The deal will bring Moltbook co-founders Matt Schlicht and Ben Parr into Meta Superintelligence Labs.”

In other news and analysis on AI writing:

*ChatGPT Promising to Add AI Sora Video Maker: Long considered one of the most advanced video makers on the planet, Sora is promised to show up as a new feature in ChatGPT soon.

Observes writer Viktor Eriksson: “Sora is impressive. Not only is it more realistic with advanced movements and physics, but last October it gained the ability to ‘insert people’ into its videos.”

*AI Filmmaking: With the Latest Tools, You’re Writer, Director and Cinematographer: Hollywood’s fears that AI will someday render movie studios irrelevant seem more urgent than ever.

These days, the latest tools enable someone with a fresh imagination to become writer, director and cinematographer — and do it on the cheap.

TV producer Matt Zien, for example, says he recently cranked out a 12-minute short film using AI tools. It cost in the low thousands of dollars to create – rather than the millions that a Hollywood studio would have charged.

*Photoshop Gets an AI Assistant: Photoshop novices just got a leg-up with the roll-out of the tool’s new AI assistant: You can now use natural language in Photoshop to add special effects, make an easy crop, punch up shadows and more.

Observes writer Ivan Mehta: “Adobe said that paid users of Photoshop will be able to create unlimited generations with the AI assistant through April 9 — and free users will get 20 generations to start with.”

Looks like creating supplemental images for your blog or other digital property just got a whole lot easier.

*Zoom’s Answer to Boring Meetings: Send Your AI Avatar Instead: Video meeting service provider Zoom is promising to add AI avatars to its solution, which you’ll be able to send to all those insufferable online meetings in your place.

Observes writer Ivan Mehta: “The AI avatars, announced last year, are the long-anticipated photorealistic avatars that can mimic your appearance, expressions, and lip and eye movements.

“Designed to mime your actions when you’re not “camera-ready,” Zoom says the avatars will work in online meetings as well as in its asynchronous video messaging product.”

*LegalZoom Legal Advice Now Available in ChatGPT: Long-time legal advisor LegalZoom is now available within ChatGPT for users looking for business advice backed by a deep understanding of the law.

Observes Jeff Stibel, CEO, LegalZoom: “LegalZoom provides the expertise and clarity to help small business owners go from idea to action.

“Backed by attorney expertise, we’re making legal guidance and accountability even more accessible, when and where they need it.”

*Gemini Gets Tighter Integration with Google Workspace Suite: Google is out with a new upgrade to Gemini designed to ensure the ChatGPT competitor is more tightly integrated with Google Docs, Sheets, Slides and Drive.

Observes Yulie Kwon Kim, VP product/workspace: “Today we are re-imagining how people create content.”

Click here for the blow-by-blow that backs up Kim’s statement.

*Microsoft Copilot Adds New AI Agent Module, Cowork: Seems like every time you turn around, Microsoft is giving its Copilot chatbot an agentic upgrade.

This time, it’s adding ‘Copilot Cowork’ to its bag of tricks, which promises to trigger AI agent work on Copilot to be more proactive and independent.

The key benefit with the upgrade: The ability of ‘Copilot Cowork’ to work with many Microsoft apps simultaneously – rather than being tied to just one app at a time.

*Oops: Grammarly Deep-Sixes ‘Expert Review’ After Fierce Backlash: Turns out, more than a few authors and writers were livid after discovering that Grammarly was poaching their thinking and writing styles to offer ‘expert reviews’ of writing put together by Grammarly users.

Observes Analytics Insight: “The feature provided users with writing advice as if it were coming from well-known experts, quickly raising concerns about misrepresentation and identity misuse.

“Grammarly said it is reviewing the feature’s design and considering changes.”

*AI Big Picture: Get AI to Do Your Taxes? Maybe Not: While AI may indeed cure cancer one day, for now, better not unleash it on your taxes.

A recent test of the top AI chatbots on the planet by The New York Times found that the AIs were simply no good at doing taxes.

Gemini, ChatGPT, Claude, and Grok all turned in equally disappointing results.

Share a Link:  Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com help everyone interested in AI-generated writing.

Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.

Scientists discover AI can make humans more creative

Artificial intelligence is often portrayed as a tool that replaces human work, but new research from Swansea University suggests a far more exciting role: creative collaborator. In a large study with more than 800 participants designing virtual cars, researchers found that AI-generated design galleries sparked deeper engagement, longer exploration, and better results.

New chip lets robots see in 4D by tracking distance and speed simultaneously

Current vision systems for robots and drones rely on 3D sensors that, although powerful, do not always keep up with the fast-paced, unpredictable movement of the real world. These systems often struggle to measure speed instantly or are too bulky and expensive for everyday use. Now, in a paper published in the journal Nature, scientists report how they have developed a 4D imaging sensor on a chip that creates 3D maps of an environment while simultaneously tracking the speed of moving objects.

Canine companion insights help robots locate objects with an 89% success rate

Whether in the kitchen or on a workshop floor, robot assistants that can fetch items for people could be extremely useful. Now, a team of Brown University researchers has developed a way of making robots better at figuring out exactly which items a user might want them to retrieve.

Robot Talk Episode 148 – Ethical robot behaviour, with Alan Winfield

Claire chatted to Alan Winfield from the University of the West of England about developing new standards for ethics and transparency in robotics.

Alan Winfield is Professor of Robot Ethics at the University of the West of England (UWE), Visiting Professor at the University of York, and Associate Fellow of the Cambridge Centre for the Future of Intelligence. Alan co-founded the Bristol Robotics Laboratory, where his research is focussed on the science, engineering and ethics of cognitive robotics. Alan is an advocate for robot ethics; he chairs the advisory board of the Responsible Technology Institute at the University of Oxford and has co-drafted new standards on ethical risk assessment and transparency.

Mosrac – Efficient Motion Control Products Direct Drive PMSM & Encoders

Mosrac is an ISO 9001 company that provides a full range of motion-control products from a single source. We design and manufacture both customer-specific (OEM) and our own-branded products based on our product and service offerings. With nearly 15 years of experience, our products are built to last and deliver superior precision, accuracy, consistency, and efficiency. Whether you need a standard or custom component or motion solution, we have exactly what you're looking for.

Identifying Interactions at Scale for LLMs

Understanding the behavior of complex machine learning systems, particularly Large Language Models (LLMs), is a critical challenge in modern artificial intelligence. Interpretability research aims to make the decision-making process more transparent to model builders and impacted humans, a step toward safer and more trustworthy AI. To gain a comprehensive understanding, we can analyze these systems through different lenses: feature attribution, which isolates the specific input features driving a prediction (Lundberg & Lee, 2017; Ribeiro et al., 2022); data attribution, which links model behaviors to influential training examples (Koh & Liang, 2017; Ilyas et al., 2022); and mechanistic interpretability, which dissects the functions of internal components (Conmy et al., 2023; Sharkey et al., 2025).
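To make the first of those lenses concrete, here is a minimal occlusion-style feature-attribution sketch. It runs on a hypothetical linear toy model standing in for an LLM's scalar score; all names and weights are illustrative assumptions, not drawn from any of the cited papers, and real attribution methods such as SHAP or LIME are far more sophisticated.

```python
def toy_model(features):
    # Hypothetical stand-in "model": a weighted sum producing a scalar score.
    # The weights are invented for illustration only.
    weights = {"tokens": 2.0, "length": 0.5, "sentiment": 3.0}
    return sum(weights[name] * value for name, value in features.items())

def occlusion_attribution(model, features, baseline=0.0):
    """Attribute to each feature the score drop caused by occluding it.

    Each feature is replaced by a baseline value in turn; the difference
    between the full score and the ablated score is that feature's credit.
    """
    full_score = model(features)
    attributions = {}
    for name in features:
        ablated = dict(features)
        ablated[name] = baseline  # occlude one feature
        attributions[name] = full_score - model(ablated)
    return attributions

scores = occlusion_attribution(
    toy_model, {"tokens": 1.0, "length": 4.0, "sentiment": 0.5}
)
# For this linear toy model, each attribution equals weight * value.
```

Because the toy model is linear, the occlusion attributions exactly recover each feature's weighted contribution; for real nonlinear models, occlusion gives only an approximation, which is why the attribution literature develops richer schemes.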

Scientists built the hardest AI test ever and the results are surprising

As AI systems began acing traditional tests, researchers realized those benchmarks were no longer tough enough. In response, nearly 1,000 experts created Humanity’s Last Exam, a massive 2,500-question challenge covering highly specialized topics across many fields. The exam was engineered so that any question solvable by current AI models was removed. Early results show even the most advanced systems still struggle — revealing a surprisingly large gap between AI performance and true expert-level knowledge.