Category robots in business

Page 1 of 460
1 2 3 460

Awkward. Humans are still better than AI at reading the room

Humans are better than current AI models at interpreting social interactions and understanding social dynamics in moving scenes. Researchers believe this is because AI neural networks were inspired by the infrastructure of the part of the brain that processes static images, which is different from the area of the brain that processes dynamic social scenes.

Scientists are changing number of experiments run by employing coordinated team of AI-powered robots

To build the experimental stations of the future, scientists at the National Synchrotron Light Source II (NSLS-II), a U.S. Department of Energy (DOE) Office of Science user facility at DOE's Brookhaven National Laboratory, are learning from some of the challenges that face them today. As light source technologies and capabilities continue to advance, researchers must navigate increasingly complex workflows and swiftly evolving experimental demands.

Flying robots unlock new horizons in construction

An international team has explored how in future aerial robots could process construction materials precisely in the air -- an approach with great potential for difficult-to-access locations or work at great heights. The flying robots are not intended to replace existing systems on the ground, but rather to complement them in a targeted manner for repairs or in disaster areas, for instance.

Engineering a robot that can jump 10 feet high — without legs

Inspired by the movements of a tiny parasitic worm, engineers have created a 5-inch soft robot that can jump as high as a basketball hoop. Their device, a silicone rod with a carbon-fiber spine, can leap 10 feet high even though it doesn't have legs. The researchers made it after watching high-speed video of nematodes pinching themselves into odd shapes to fling themselves forward and backward.

Why LLM hallucinations are key to your agentic AI readiness

TL;DR 

LLM hallucinations aren’t just AI glitches—they’re early warnings that your governance, security, or observability isn’t ready for agentic AI. Instead of trying to eliminate them, use hallucinations as diagnostic signals to uncover risks, reduce costs, and strengthen your AI workflows before complexity scales.


LLM hallucinations are like a smoke detector going off.

You can wave away the smoke, but if you don’t find the source, the fire keeps smoldering beneath the surface.

These false AI outputs aren’t just glitches. They’re early warnings that show where control is weak and where failure is most likely to occur.

But too many teams are missing those signals. Nearly half of AI leaders say observability and security are still unmet needs. And as systems grow more autonomous, the cost of that blind spot only gets higher.

To move forward with confidence, you need to understand what these warning signs are revealing—and how to act on them before complexity scales the risk.

Seeing things: What are AI hallucinations?


Hallucinations happen when AI generates answers that sound right—but aren’t. They might be subtly off or entirely fabricated, but either way, they introduce risk.

These errors stem from how large language models work: they generate responses by predicting patterns based on training data and context. Even a simple prompt can produce results that seem credible, yet carry hidden risk. 

While they may seem like technical bugs, hallucinations aren’t random. They point to deeper issues in how systems retrieve, process, and generate information.

And for AI leaders and teams, that makes hallucinations useful. Each hallucination is a chance to uncover what’s misfiring behind the scenes—before the consequences escalate.

Common sources of LLM hallucination issues and how to solve for them


When LLMs generate off-base responses, the issue isn’t always with the interaction itself. It’s a flag that something upstream needs attention.

Here are four common failure points that can trigger hallucinations, and what they reveal about your AI environment:

Vector database misalignment

What’s happening: Your AI pulls outdated, irrelevant, or incorrect information from the vector database.

What it signals: Your retrieval pipeline isn’t surfacing the right context when your AI needs it. This often shows up in RAG workflows, where the LLM pulls from outdated or irrelevant documents due to poor indexing, weak embedding quality, or ineffective retrieval logic.

Mismanaged or external VDBs — especially those fetching public data — can introduce inconsistencies and misinformation that erode trust and increase risk.

What to do: Implement real-time monitoring of your vector databases to flag outdated, irrelevant, or unused documents. Establish a policy for regularly updating embeddings, removing low-value content and adding documents where prompt coverage is weak.

Concept drift

What’s happening: The system’s “understanding” shifts subtly over time or becomes stale relative to user expectations, especially in dynamic environments.

What it signals: Your monitoring and recalibration loops aren’t tight enough to catch evolving behaviors.

What to do: Continuously refresh your model context with updated data—either through fine-tuning or retrieval-based approaches—and integrate feedback loops to catch and correct shifts early. Make drift detection and response a standard part of your AI operations, not an afterthought.

Intervention failures

What’s happening: AI bypasses or ignores safeguards like business rules, policy boundaries, or moderation controls. This can happen unintentionally or through adversarial prompts designed to break the rules.

What it signals: Your intervention logic isn’t strong or adaptive enough to prevent risky or noncompliant behavior.

What to do: Run red-teaming exercises to proactively simulate attacks like prompt injection. Use the results to strengthen your guardrails, apply layered, dynamic controls, and regularly update guards as new ones become available.

Traceability gaps

What’s happening: You can’t clearly explain how or why an AI-driven decision was made.

What it signals: Your system lacks end-to-end lineage tracking—making it hard to troubleshoot errors or prove compliance.

What to do: Build traceability into every step of the pipeline. Capture input sources, tool activations, prompt-response chains, and decision logic so issues can be quickly diagnosed—and confidently explained.


These aren’t just causes of hallucinations. They’re structural weak points that can compromise agentic AI systems if left unaddressed.

What hallucinations reveal about agentic AI readiness


Unlike standalone generative AI applications, agentic AI orchestrates actions across multiple systems, passing information, triggering processes, and making decisions autonomously. 

That complexity raises the stakes.

A single gap in observability, governance, or security can spread like wildfire through your operations.

Hallucinations don’t just point to bad outputs. They expose brittle systems. If you can’t trace and resolve them in relatively simpler environments, you won’t be ready to manage the intricacies of AI agents: LLMs, tools, data, and workflows working in concert.

The path forward requires visibility and control at every stage of your AI pipeline. Ask yourself:

  • Do we have full lineage tracking? Can we trace where every decision or error originated and how it evolved?

  • Are we monitoring in real time? Not just for hallucinations and concept drift, but for outdated vector databases, low-quality documents, and unvetted data sources.

  • Have we built strong intervention safeguards? Can we stop risky behavior before it scales across systems?

These questions aren’t just technical checkboxes. They’re the foundation for deploying agentic AI safely, securely, and cost-effectively at scale. 

The cost of CIOs mismanaging AI hallucinations


Agentic AI raises the stakes for cost, control, and compliance. If AI leaders and their teams can’t trace or manage hallucinations today, the risks only multiply as agentic AI workflows grow more complex.

Unchecked, hallucinations can lead to:

  • Runaway compute costs. Excessive API calls and inefficient operations that quietly drain your budget.

  • Security exposure. Misaligned access, prompt injection, or data leakage that puts sensitive systems at risk.

  • Compliance failures.  Without decision traceability, demonstrating responsible AI becomes impossible, opening the door to legal and reputational fallout.

  • Scaling setbacks. Lack of control today compounds challenges tomorrow, making agentic workflows harder to safely expand. 


Proactively managing hallucinations isn’t about patching over bad outputs. It’s about tracing them back to the root cause—whether it’s data quality, retrieval logic, or broken safeguards—and reinforcing your systems before those small issues become enterprise-wide failures. 

That’s how you protect your AI investments and prepare for the next phase of agentic AI.

LLM hallucinations are your early warning system


Instead of fighting hallucinations, treat them as diagnostics. They reveal exactly where your governance, observability, and policies need reinforcement—and how prepared you really are to advance toward agentic AI.

Before you move forward, ask yourself:

  • Do we have real-time monitoring and guards in place for concept drift, prompt injections, and vector database alignment?

  • Can our teams swiftly trace hallucinations back to their source with full context?

  • Can we confidently swap or upgrade LLMs, vector databases, or tools without disrupting our safeguards?

  • Do we have clear visibility into and control over compute costs and usage?

  • Are our safeguards resilient enough to stop risky behaviors before they escalate?

If the answer isn’t a clear “yes,” pay attention to what your hallucinations are telling you. They’re pointing out exactly where to focus, so your next step toward agentic AI is confident, controlled, and secure.

ake a deeper look at managing AI complexity with DataRobot’s agentic AI platform.

The post Why LLM hallucinations are key to your agentic AI readiness appeared first on DataRobot.

Current AI risks more alarming than apocalyptic future scenarios

Most people generally are more concerned about the immediate risks of artificial intelligence than they are about a theoretical future in which AI threatens humanity. A new study reveals that respondents draw clear distinctions between abstract scenarios and specific tangible problems and particularly take the latter very seriously.

Material? Robot? It’s a metabot

The invention is a metamaterial, which is a material engineered to feature new and unusual properties that depend on the material's physical structure rather than its chemical composition. In this case, the researchers built their metamaterial using a combination of simple plastics and custom-made magnetic composites. Using a magnetic field, the researchers changed the metamaterial's structure, causing it to expand, move and deform in different directions, all remotely without touching the metamaterial.
Page 1 of 460
1 2 3 460