News


Plumbing the AI Revolution: Lenovo’s Strategic Pivot to Modernize the Enterprise Backbone

While the headlines of the ongoing AI revolution are often dominated by large language models and generative software, the silent war is being fought in the data center. The hardware required to feed, train, and infer upon these models is […]

The post Plumbing the AI Revolution: Lenovo’s Strategic Pivot to Modernize the Enterprise Backbone appeared first on TechSpective.

Ground robots teaming with soldiers on the battlefield

Modern militaries are steadily integrating ground robots—often called robotic combat vehicles (RCVs) or autonomous ground systems (AGS)—as force multipliers that enhance the reach, endurance, and situational awareness of human units. These platforms handle hazardous or burdensome tasks, allowing squads and platoons to operate more safely and focus on mission execution. Ukraine’s extensive use of unmanned […]

The brewing GenAI data science revolution

If you lead an enterprise data science team or a quantitative research unit today, you likely feel like you are living in two parallel universes.

In one universe, you have the “GenAI” explosion. Chatbots now write code and create art, and boardrooms are obsessed with how large language models (LLMs) will change the world. In the other universe, you have your day job: the “serious” work of predicting churn, forecasting demand, and detecting fraud using structured, tabular data. 

For years, these two universes have felt completely separate. You might even feel that the GenAI hype rocketship has left your core business data standing on the platform.

But that separation is an illusion, and it is disappearing fast.

From chatbots to forecasts: GenAI arrives at tabular and time-series modeling

Whether you are a skeptic or a true believer, you have most certainly interacted with a transformer model to draft an email or a diffusion model to generate an image. But while the world was focused on text and pixels, the same underlying architectures have been quietly learning a different language: the language of numbers, time, and tabular patterns. 

Take for instance SAP-RPT-1 and LaTable. The first uses a transformer architecture, and the second is a diffusion model; both are used for tabular data prediction.

We are witnessing the emergence of data science foundation models.

These are not just incremental improvements to the predictive models you know. They represent a paradigm shift. Just as LLMs can “zero-shot” a translation task they weren’t explicitly trained for, these new models can look at a sequence of data, for example, sales figures or server logs, and generate forecasts without the traditional, labor-intensive training pipeline.

The pace of innovation here is staggering. By our count, since the beginning of 2025 alone, we have seen at least 14 major releases of foundation models specifically designed for tabular and time-series data. This includes impressive work from the teams behind Chronos-2, TiRex, Moirai-2, TabPFN-2.5, and TempoPFN (using SDEs for data generation), to name just a few frontier models.
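
To make the zero-shot idea concrete, here is a minimal sketch using the open-source chronos-forecasting package (one of the model families above). The checkpoint and figures are illustrative assumptions, and other foundation models expose different interfaces:

```python
# Illustrative sketch: zero-shot forecasting with a pretrained time-series
# foundation model. Assumes the open-source `chronos-forecasting` package;
# the numbers are made up and other models have different APIs.
import torch
from chronos import ChronosPipeline

# Load a pretrained checkpoint -- no training on our own data takes place.
pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",
    device_map="cpu",
    torch_dtype=torch.float32,
)

# Twelve months of (illustrative) sales figures as the conditioning context.
context = torch.tensor(
    [112.0, 118.0, 132.0, 129.0, 121.0, 135.0,
     148.0, 148.0, 136.0, 119.0, 104.0, 118.0]
)

# Sample probabilistic forecasts for the next 6 steps.
# Output shape: [num_series, num_samples, prediction_length]
forecast = pipeline.predict(context, prediction_length=6)
low, median, high = torch.quantile(
    forecast[0], torch.tensor([0.1, 0.5, 0.9]), dim=0
)
print(median)
```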

Models have become model-producing factories

Traditionally, machine learning models were treated as static artifacts: trained once on historical data and then deployed to produce predictions.

Figure 1: Classical machine learning: Train on your data to build a predictive model

That framing no longer holds. Increasingly, modern models behave less like predictors and more like model-generating systems, capable of producing new, situation-specific representations on demand. 

Figure 2: The foundation model instantly interprets the given data based on its experience

We are moving toward a future where you won’t just ask a model for a single point prediction; you will ask a foundation model to generate a bespoke statistical representation—effectively a mini-model—tailored to the specific situation at hand. 
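
As a rough illustration of that "mini-model on demand" idea, the sketch below uses the open-source tabpfn package. The fit() call is not a training loop; it hands your rows to the pretrained transformer, which interprets them in context at prediction time. The dataset and API details here are illustrative assumptions:

```python
# Illustrative sketch: in-context prediction with a tabular foundation model.
# Assumes the open-source `tabpfn` package and scikit-learn for the toy data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No gradient-descent training happens here: the pretrained model conditions
# on (X_train, y_train) as "context" when it predicts on X_test.
clf = TabPFNClassifier()
clf.fit(X_train, y_train)            # effectively stores the context
print(clf.predict_proba(X_test)[:3])
```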

The revolution isn’t coming; it’s already brewing in the research labs. The question now is: why isn’t it in your production pipeline yet?

The reality check: hallucinations and trend lines

If you’ve scrolled through the endless examples of grotesque LLM hallucinations online, including lawyers citing fake cases and chatbots inventing historical events, the thought of that chaotic energy infiltrating your pristine corporate forecasts is enough to keep you awake at night.

Your concerns are entirely justified.

Classical machine learning is the conservative choice for now

While the new wave of data science foundation models (our collective term for tabular and time-series foundation models) is promising, it is still very much in the early days. 

Yes, model providers can currently claim top positions on academic benchmarks: all top-performing models on the time-series forecasting leaderboard GIFT-Eval and the tabular data leaderboard TabArena are now foundation models or agentic wrappers of foundation models. But in practice? The reality is that some of these “top-notch” models currently struggle to identify even the most basic trend lines in raw data. 

They can handle complexity, but sometimes trip over the basics that a simple regression would nail. Check out the honest ablation studies in the TabPFN v2 paper, for instance.

Why we remain confident: the case for foundation models

While these models still face early limitations, there are compelling reasons to believe in their long-term potential. We have already discussed their ability to react instantly to user input, a core requirement for any system operating in the age of agentic AI. More fundamentally, they can draw on a practically limitless reservoir of prior information.

Think about it: who has a better chance at solving a complex prediction problem?

  • Option A: A classical model that knows your data, but only your data. It starts from zero every time, blind to the rest of the world.
  • Option B: A foundation model that has been trained on a mind-boggling number of relevant problems across industries, decades, and modalities—often augmented by vast amounts of synthetic data—and is then exposed to your specific situation.

Classical machine learning models (like XGBoost or ARIMA) do not suffer from the “hallucinations” of early-stage GenAI, but they also do not come with a “helping prior.” They cannot transfer wisdom from one domain to another. 

The bet we are making, and the bet the industry is moving toward, is that eventually, the model with the “world’s experience” (the prior) will outperform the model that is learning in isolation.

Data science foundation models have a shot at becoming the next massive shift in AI. But for that to happen, we need to move the goalposts. Right now, what researchers are building and what businesses actually need remain disconnected. 

Leading tech companies and academic labs are currently locked in an arms race for numerical precision, laser-focused on topping prediction leaderboards just in time for the next major AI conference. Meanwhile, they are paying relatively little attention to solving complex, real-world problems, which, ironically, pose the toughest scientific challenges.

The blind spot: interconnected complexity

Here is the crux of the problem: none of the current top-tier foundation models are designed to predict the joint probability distributions of several dependent targets.

That sounds technical, but the business implication is massive. In the real world, variables rarely move in isolation.

  • City Planning: You cannot predict traffic flow on Main Street without understanding how it impacts (and is impacted by) the flow on 5th Avenue.
  • Supply Chain: Demand for Product A often cannibalizes demand for Product B.
  • Finance: Take portfolio risk. To understand true market exposure, a portfolio manager doesn’t simply calculate the worst-case scenario for every instrument in isolation. Instead, they run joint simulations. You cannot just sum up individual risks; you need a model that understands how assets move together.

The world is a messy, tangled web of dependencies. Current foundation models tend to treat it like a series of isolated textbook problems. Until these models can grasp that complexity and output a model that captures how variables dance together, they won't replace existing solutions.
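
Here is a small, self-contained numerical illustration of the finance example (synthetic numbers, not the output of any foundation model): with negatively correlated assets, the 5% worst case of the combined portfolio is far milder than the sum of each asset's isolated worst case.

```python
# Toy Monte Carlo illustration: summing per-asset worst cases vs. simulating
# the assets jointly. Numbers are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
vol_a, vol_b, corr = 0.02, 0.03, -0.6              # negatively correlated assets
cov = [[vol_a**2, corr * vol_a * vol_b],
       [corr * vol_a * vol_b, vol_b**2]]

returns = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # joint simulation
portfolio = returns.sum(axis=1)

# "Isolated" view: add up each asset's own 5% worst case.
naive_var = -(np.quantile(returns[:, 0], 0.05) + np.quantile(returns[:, 1], 0.05))
# Joint view: 5% worst case of the combined portfolio.
joint_var = -np.quantile(portfolio, 0.05)

print(f"sum of isolated worst cases: {naive_var:.4f}")
print(f"joint worst case:            {joint_var:.4f}")   # smaller, thanks to diversification
```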

So, for the moment, your manual workflows are safe. But treating this temporary gap as a permanent safety net would be a grave mistake. 

Today’s deep learning limits are tomorrow’s solved engineering problems

The missing pieces, such as modeling complex joint distributions, are not impossible laws of physics; they are simply the next engineering hurdles on the roadmap. 

If the speed of 2025 has taught us anything, it is that “impossible” engineering hurdles have a habit of vanishing overnight. The moment these specific issues are addressed, the capability curve won’t just inch upward. It will spike.

Conclusion: the tipping point is closer than it appears

Despite the current gaps, the trajectory is clear and the clock is ticking. The wall between “predictive” and “generative” AI is actively crumbling.

We are rapidly moving toward a future where we don’t just train models on historical data; we consult foundation models that possess the “priors” of a thousand industries. We are heading toward a unified data science landscape where the output isn’t just a number, but a bespoke, sophisticated model generated on the fly.

The revolution is not waiting for perfection. It is iterating toward it at breakneck speed. The leaders who recognize this shift and begin treating GenAI as a serious tool for structured data before a perfect model reaches the market will be the ones who define the next decade of data science. The rest will be playing catch-up in a game that has already changed.

We are actively researching these frontiers at DataRobot to bridge the gap between generative capabilities and predictive precision. This is just the start of the conversation. Stay tuned—we look forward to sharing our insights and progress with you soon. 

In the meantime, you can learn more about DataRobot and explore the platform with a free trial.

The post The brewing GenAI data science revolution appeared first on DataRobot.

Robotic arm successfully learns 1,000 manipulation tasks in one day

Over the past decades, roboticists have introduced a wide range of systems that can effectively tackle some real-world problems. Most of these robots, however, often perform poorly on tasks they were not trained on, particularly those that involve manipulating previously unseen objects or handling familiar objects in new ways.

DataRobot Q4 update: driving success across the full agentic AI lifecycle

The shift from prototyping to running agents in production is the challenge for AI teams as we look toward 2026 and beyond. Building a cool prototype is easy: hook up an LLM, give it some tools, see if it looks like it's working. The production system, now that's hard. Brittle integrations. Governance nightmares. Infrastructure that was never built for the complexities and nuances of agents. 

For AI developers, the challenge has shifted from building an agent to orchestrating, governing, and scaling it in a production environment. DataRobot’s latest release introduces a robust suite of tools designed to streamline this lifecycle, offering granular control without sacrificing speed.

New capabilities accelerating AI agent production with DataRobot

New features in DataRobot 11.2 and 11.3 help you close the gap with dozens of updates spanning observability, developer experience, and infrastructure integrations.

Together, these updates focus on one goal: reducing the friction between building AI agents and running them reliably in production. 

The most impactful areas of these updates include:

  • Standardized connectivity through MCP on DataRobot
  • Secure agentic retrieval through Talk to My Docs (TTMDocs) 
  • Streamlined agent build and deploy through CLI tooling
  • Prompt version control through Prompt Management Studio
  • Enterprise governance and observability through resource monitoring
  • Multi-model access through the expanded LLM Gateway
  • Expanded ecosystem integrations for enterprise agents

The sections that follow focus on these capabilities in detail, starting with standardized connectivity, which underpins every production-grade agent system.

MCP on DataRobot: standardizing agent connectivity

Agents break when tools change. Custom integrations become technical debt. The Model Context Protocol (MCP) is emerging as the standard to solve this, and we’re making it production-ready. 

We’ve added an MCP server template to the DataRobot community GitHub.

  • What’s new: An MCP server template you can clone, test locally, and deploy directly to your DataRobot cluster. Your agents get reliable access to tools, prompts, and resources without reinventing the integration layer every time. Easily convert your predictive models into tools that agents can discover (a minimal sketch follows this list).
  • Why it matters: With our MCP template, we’re giving you the open standard with enterprise guardrails already built in. Test on your laptop in the morning, deploy to production by afternoon.
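
To give a feel for what such a server involves, here is a minimal sketch built on the open-source MCP Python SDK. It is not DataRobot's template; the tool below simply stubs out a hypothetical churn prediction so an agent can discover and call it.

```python
# Minimal MCP server sketch using the open-source MCP Python SDK (FastMCP).
# This is NOT DataRobot's template; the tool wraps a made-up prediction call
# so an agent can discover and invoke it.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("churn-tools")

@mcp.tool()
def predict_churn(tenure_months: int, monthly_spend: float) -> float:
    """Return a churn probability for a single customer (stubbed here)."""
    # In a real server this would call a deployed model's prediction API.
    return 0.8 if tenure_months < 6 and monthly_spend < 20 else 0.2

if __name__ == "__main__":
    mcp.run()   # serves the tool over stdio by default
```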

Talk to My Docs: Secure, agentic knowledge retrieval

Everyone is building RAG. Almost nobody is building RAG with RBAC, audit trails, and the ability to swap models without rewriting code. 

The “Talk to My Docs” application template brings natural language chat-style productivity across all your documents and is secured and governed for the enterprise.

  • What’s new: A secure, governed chat interface that connects to Google Drive, Box, SharePoint, and local files. Unlike basic RAG, it handles complex formats such as tables and spreadsheets, supports multi-document synthesis, and maintains enterprise-grade access control throughout (a conceptual sketch follows this list).
  • Why it matters: Your team needs ChatGPT-style productivity. Your security team needs proof that sensitive documents stay restricted. This does both, out of the box.
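
A conceptual sketch of the access-control pattern (hypothetical names, not the TTMDocs implementation): retrieved chunks carry permission metadata, and anything the requesting user is not entitled to see is dropped, and logged, before it ever reaches the model.

```python
# Conceptual sketch of permission-aware retrieval (hypothetical, not the
# TTMDocs code): filter retrieved chunks against the caller's groups
# before the LLM ever sees them.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str
    allowed_groups: frozenset  # groups permitted to read this document

def authorized_context(chunks: list[Chunk], user_groups: set[str]) -> list[Chunk]:
    """Keep only chunks the user may see; log the rest for the audit trail."""
    visible, denied = [], []
    for c in chunks:
        (visible if c.allowed_groups & user_groups else denied).append(c)
    for c in denied:
        print(f"AUDIT: withheld {c.source} from user")  # stand-in for a real audit log
    return visible

chunks = [
    Chunk("Q3 revenue grew 12%.", "board-deck.pptx", frozenset({"finance"})),
    Chunk("PTO policy is 25 days.", "handbook.pdf", frozenset({"all-staff"})),
]
context = authorized_context(chunks, user_groups={"all-staff"})
print([c.source for c in context])   # only handbook.pdf survives
```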

Agentic application starter template and CLI: Streamlined build and deployment

Getting an agent into production should not require days of scaffolding, wiring services together, or rebuilding containers for every small change. Setup friction slows experimentation and turns simple iterations into heavyweight engineering work.

To address this, DataRobot is introducing an agentic application starter template and CLI, designed to reduce setup overhead across both code-first and low-code workflows.

  • What’s new: An agentic application starter template and CLI that let developers configure agent components through a single interactive command. Out-of-the-box components include an MCP server, a FastAPI backend, and a React frontend. For teams that prefer a low-code approach, integration with NVIDIA’s NeMo Agent Toolkit enables agent logic and tools to be defined entirely through YAML. Runtime dependencies can now be added dynamically, eliminating the need to rebuild Docker images during iteration.
  • Why it matters: By minimizing setup and rebuild friction, teams can iterate faster and move agents into production more reliably. Developers can focus on agent logic rather than infrastructure, while platform teams maintain consistent, production-ready deployment patterns.

Prompt management studio: DevOps for prompts

As prompts move from experiments to production assets, ad hoc editing quickly becomes a liability. Without versioning and traceability, teams struggle to reproduce results or safely iterate.

To address this, DataRobot introduces the Prompt Management Studio, bringing software-style discipline to prompt engineering.

  • What’s new: A centralized registry that treats prompts as version-controlled assets. Teams can track changes, compare implementations, and revert to stable versions as prompts move through development and deployment.
  • Why it matters: By applying DevOps practices to prompts, teams gain reproducibility and control, making it easier to transition from prototyping to production without introducing hidden risk.

Multi-tenant governance and resource monitoring: Operational control at scale

As AI agents scale across teams and workloads, visibility and control become non-negotiable. Without clear insight into resource usage and enforceable limits, performance bottlenecks and cost overruns quickly follow.

  • What’s new: The enhanced Resource Monitoring tab provides detailed visibility into CPU and memory utilization, helping teams identify bottlenecks and manage trade-offs between performance and cost. In parallel, Multi-tenant AI Governance introduces token-based access with configurable rate limits to ensure fair resource consumption across users and agents.
  • Why it matters: Developers gain clear insight into how agent workloads behave in production, while platform teams can enforce guardrails that prevent noisy neighbors and uncontrolled resource usage as systems scale.

Expanded LLM Gateway: Multi-model access without credential sprawl

As teams experiment with agent behavior and reasoning, access to multiple foundation models becomes essential. Managing separate credentials, rate limits, and integrations across providers quickly introduces operational overhead.

  • What’s new: The expanded LLM Gateway adds support for Cerebras and Together AI alongside Anthropic, providing access to models such as Gemma, Mistral, Qwen, and others through a single, governed interface. All models are accessed using DataRobot-managed credentials, eliminating the need to manage individual API keys.
  • Why it matters: Teams can evaluate and deploy agents across multiple model providers without increasing security risk or operational complexity. Platform teams maintain centralized control, while developers gain flexibility to choose the right model for each workload.
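
The general pattern looks roughly like the sketch below. The endpoint and model identifiers are hypothetical placeholders, not DataRobot's documented interface; see the release notes for the actual gateway details.

```python
# Conceptual sketch of the gateway pattern (hypothetical endpoint and model
# names -- not DataRobot's documented interface): one set of managed
# credentials, with the model chosen per request rather than per provider.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.datarobot.com/llm-gateway/",  # hypothetical URL
    api_key="<datarobot-managed-token>",                    # placeholder credential
)

for model in ["mistral-small", "qwen-2.5-72b"]:             # illustrative names
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize last week's incidents."}],
    )
    print(model, "->", reply.choices[0].message.content[:80])
```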

New supporting ecosystem integrations

Jira and Confluence connectors: Feed content from Jira and Confluence straight into your vector databases, as part of a cohesive ecosystem for building enterprise-ready, knowledge-aware agents.

NVIDIA NIM Integration: Deploy Llama 4, Nemotron, GPT-OSS, and 50+ GPU-optimized models without the MLOps complexity. Pre-built containers, production-ready from day one.

Milvus Vector Database: Direct integration with the leading open-source VDB, plus the ability to select distance metrics that actually matter for your classification and clustering tasks.
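
As a rough sketch of what metric selection looks like with the open-source pymilvus client (illustrative values, not DataRobot-specific code), the metric is fixed when the collection is created:

```python
# Minimal pymilvus sketch (not DataRobot-specific): the distance metric is
# chosen at collection-creation time, so pick the one that suits your task.
from pymilvus import MilvusClient

client = MilvusClient("demo.db")          # embedded Milvus Lite for local testing
client.create_collection(
    collection_name="docs",
    dimension=4,                          # toy dimension for the example
    metric_type="COSINE",                 # e.g. COSINE for semantic search, L2 for clustering
)
client.insert(collection_name="docs",
              data=[{"id": 0, "vector": [0.1, 0.9, 0.2, 0.4]}])
hits = client.search(collection_name="docs", data=[[0.1, 0.8, 0.1, 0.3]], limit=1)
print(hits)
```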

Azure Repos & Git Integration: Seamless version control for Codespaces development with Azure Repos or self-hosted Git providers. No manual authentication required. Your code stays centralized where your team already works.

Get hands-on with DataRobot’s Agentic AI 

If you’re already a customer, you can spin up the GenAI Test Drive in seconds. No new account. No sales call. Just 14 days of full access inside your existing SaaS environment to test these features with your actual data.  

Not a customer yet? Start a 14-day free trial and explore the full platform.

For more information, please visit our Version 11.2 and Version 11.3 release notes in the DataRobot docs.

The post DataRobot Q4 update: driving success across the full agentic AI lifecycle appeared first on DataRobot.

How U.S. Manufacturing VPs Can Close the Execution Gap — The New Playbook for Operational Excellence

Operational excellence used to mean efficiency. Now, it means consistency. In a volatile manufacturing environment, the winners aren’t those with the best machines or biggest budgets — they’re the ones who can execute the same playbook flawlessly, every day, on every line.

AI-powered robotic hands learn dexterity by mimicking human movements and anatomy

Step inside the Soft Robotics Lab at ETH Zurich, and you find yourself in a space that is part children's nursery, part high-tech workshop and part cabinet of curiosities. The lab benches are strewn with foam blocks, stuffed animals—including a cuddly squid—and other colorful toys used to train robotic dexterity. Piled up on every surface are sensors, cables and measurement devices. Skeletal fingers, on show in display cases or attached to powerful robotic arms, seem to reach out to grab you from every corner.


KNF – Automation Technology Requires Reliable and Durable Pumps

KNF vacuum pumps for automation applications are designed for a long service life, with micro gas pumps used as cobot pumps achieving more than 20,000 hours. The latest generation of KNF brushless DC motors has an innovative bearing design that withstands high mechanical loads. This technical strength protects the vacuum pump's longevity, especially with fast switching cycles.

UPS buys hundreds of robots to unload trucks in automation push

United Parcel Service Inc. will invest $120 million in 400 robots used to unload trucks, according to people familiar with the matter, revealing new details on the logistics giant's $9 billion automation plan that aims to boost profits by decreasing labor costs.

‘Robot, make me a chair’: AI-driven system designs, builds multicomponent objects from user prompts

Computer-aided design (CAD) systems are tried-and-true tools used to design many of the physical objects we use each day. But CAD software requires extensive expertise to master, and many tools incorporate such a high level of detail they don't lend themselves to brainstorming or rapid prototyping.

The Right 3D Vision Scanner for Robotic Programming: Laser Profilers vs Structured Light Scanners in Industrial Automation

By combining flexible vision technology with automated processing, manufacturers and system integrators can shorten deployment cycles, reduce reliance on fixtures, and achieve the adaptability needed for high-mix, high-precision production.