Category: Robots in Business

Page 2 of 441

3D-printed robots: Soft-jointed swarms tackle tough terrains and tasks

Imagine a swarm of tiny robots, each about the size of the palm of your hand, spreading out over a wildfire-ravaged community, mapping areas contaminated by toxic materials, searching for survivors, identifying areas of rapid wildfire spread. Or picture the robots being used to clear battlefields of mines, conduct search and rescue missions after earthquakes, or deployed on farms to fend against pests and track soil conditions.

The Kiri-Spoon: Research turns robotic hardware into flatware for assisted eating

More than 2 million adults living in the United States rely on a caregiver's assistance to eat daily meals. In addition to human caregivers, technology has been developed to provide assistance. For example, tabletop and wheelchair-mounted robotic arms have been programmed to pick up foods and bring them to the human operator.

Apple engineers create expressive Pixar-like table lamp with AI capabilities

A team of engineers at Apple has developed an expressive table lamp that interacts with a user rather than simply carrying out instructions. The group has posted a paper on the arXiv preprint server describing the factors that went into the development of the lamp and its current features. They have also posted several videos showing the robot lamp in action.

Agentic AI: Real-world business impact, enterprise-ready solutions

Building and operating production-grade agentic AI applications requires more than just great foundation models (FMs). AI teams must manage complex workflows, infrastructure and the full AI lifecycle – from prototyping to production. 

Yet, fragmented tooling and rigid infrastructure force teams to spend more time managing complexity than delivering innovation. 

With the acquisition of Agnostiq and its open-source distributed computing platform, Covalent, DataRobot accelerates agentic AI development and deployment by unifying AI-driven decision-making, governance, lifecycle management, and compute orchestration, enabling AI developers to focus on application logic instead of infrastructure management.

In this blog, we’ll explore how these expanded capabilities help AI practitioners build and deploy agentic AI applications in production faster and more seamlessly.

How DataRobot empowers agentic AI

  • Business-process-specific AI-driven workflows. Mechanisms to translate business use cases into context-aware agentic AI workflows and to enable multi-agent frameworks to dynamically decide which functions, agents, or tools to call.
  • The broadest suite of AI tools and models. Build, compare, and deploy the best agentic AI workflows.
  • Best-in-class governance and monitoring. Governance (with an AI registry) and monitoring for AI models, applications, and autonomous agents.

How Agnostiq enhances the stack

  • Heterogeneous compute execution. Agents run where data and applications reside, ensuring compatibility across diverse environments instead of being confined to a single location.
  • Optimized compute flexibility. Customers can leverage all available compute options (on-prem, accelerated clouds, and hyperscalers) to optimize for availability, latency, and cost.
  • Orchestrator of orchestrators. Works seamlessly with schedulers and orchestrators such as Run:ai, Kubernetes, and SLURM to unify workload execution across infrastructures.

The hidden complexity of building and managing production-grade agentic AI applications 

Today, many AI teams can develop simple prototypes and demos, but getting agentic AI applications into production is a far greater challenge. Two hurdles stand in the way. 

1. Building the application 

Developing a production-grade agentic AI application requires more than just writing code. Teams must:

  • Translate business needs into workflows.

  • Experiment with different strategies using a combination of LLMs, embedding models, Retrieval-Augmented Generation (RAG), fine-tuning techniques, guardrails, and prompting methods.

  • Ensure solutions meet strict quality, latency, and cost objectives for specific business use cases. 

  • Navigate infrastructure constraints by custom-coding workflows to run across cloud, on-prem, and hybrid environments. 

This demands not only a broad set of generative AI tools and models that work together seamlessly with enterprise systems but also infrastructure flexibility to avoid vendor lock-in and bottlenecks. 

2. Deploying and operating at scale

Production AI applications require:

  • Provisioning and managing GPUs and other infrastructure.

  • Monitoring performance, ensuring reliability, and adjusting models dynamically. 

  • Enforcement of governance, access controls, and compliance reporting.

Even with existing solutions, it can take months to move an application from development to production. 

Existing AI solutions fall short

Most teams rely on one of two strategies, each with trade-offs:

  • Custom “build your own” (BYO) AI stacks: Offer more control but require significant manual effort to integrate tools, configure infrastructure, and manage systems, making them resource-intensive and unsustainable at scale.
  • Hyperscaler AI platforms: Offer an ensemble of tools for different parts of the AI lifecycle, but these tools aren’t inherently designed to work together. AI teams must integrate, configure, and manage multiple services manually, adding complexity and reducing flexibility. They also tend to lack governance, observability, and usability while locking teams into proprietary ecosystems with limited model and tool flexibility.

A faster, smarter way to build and deploy agentic AI applications

AI teams need a seamless way to build, deploy, and manage agentic AI applications without infrastructure complexity. With DataRobot’s expanded capabilities, they can streamline model experimentation and deployment, leveraging built-in tools to support real-world business needs.

Key benefits for AI teams

  • Turnkey, use-case-specific AI apps. Customizable AI apps enable fast deployment of agentic AI applications, allowing teams to tailor workflows to fit specific business needs.
  • Iterate rapidly with the broadest suite of AI tools. Experiment with custom and open-source generative AI models. Use fully managed RAG, NVIDIA NeMo Guardrails, and built-in evaluation tools to refine agentic AI workflows.
  • Optimize AI workflows with built-in evaluation. Select the best agentic AI approach for your use case with LLM-as-a-judge, human-in-the-loop evaluation, and operational monitoring (latency, token usage, performance metrics).
  • Deploy and scale with adaptive infrastructure. Set criteria like cost, latency, or availability, and let the system allocate workloads across on-prem and cloud environments. Scale on premises and expand to the cloud as demand grows, without manual reconfiguration.
  • Unified observability and compliance. Monitor all models, including third-party ones, from a single pane of glass; track AI assets in the AI registry; and automate compliance with audit-ready reporting.

With these capabilities, AI teams no longer have to choose between speed and flexibility. They can build, deploy, and scale agentic AI applications with less friction and greater control. 

Let’s walk through an example of how these capabilities come together to enable faster, more efficient agentic AI development. 

Orchestrating multi-agent AI workflows at scale

Sophisticated multi-agent workflows are pushing the boundaries of AI capability. While several open-source and proprietary frameworks exist for building multi-agent systems, one key challenge remains overlooked: orchestrating the heterogeneous compute, governance, and operational requirements of each agent.

Each member of a multi-agent workflow may require a different backing LLM: some fine-tuned on domain-specific data, others multimodal, and some vastly different in size. For example:

  • A report consolidation agent might only need Llama 3.1 8B, requiring a single Nvidia A100 GPU.

  • A primary analyst agent might need Llama 3.3 70B or Llama 3.1 405B, demanding multiple A100 or even H100 GPUs.

Provisioning, configuring environments, monitoring, and managing communication across multiple agents with varying compute requirements is already complex. In addition, operational and governance constraints can determine where certain jobs must run, for instance when data must reside in particular data centers or countries.
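As a rough illustration of this sizing problem, the model-to-hardware mapping can be sketched as a simple lookup. The thresholds, GPU names, and counts below are invented for the example and are not DataRobot or Covalent policy:

```python
# Hypothetical sketch: map an agent's model size to a GPU allocation.
# Assumes roughly 2 bytes per parameter for FP16 weights, plus headroom
# for activations and KV cache; all numbers are illustrative.

def gpu_requirements(model_params_b: float) -> dict:
    """Return an illustrative GPU allocation for a model of
    `model_params_b` billion parameters."""
    if model_params_b <= 13:
        return {"gpu": "A100-80GB", "count": 1}   # e.g., an 8B consolidation agent
    elif model_params_b <= 80:
        return {"gpu": "A100-80GB", "count": 4}   # e.g., a 70B primary analyst
    else:
        return {"gpu": "H100-80GB", "count": 8}   # e.g., a 405B frontier model

print(gpu_requirements(8))    # {'gpu': 'A100-80GB', 'count': 1}
print(gpu_requirements(70))   # {'gpu': 'A100-80GB', 'count': 4}
```

An orchestrator has to make a decision like this for every agent in the workflow, then provision, monitor, and tear down each allocation.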

Here’s how it works in action.


Use case: A multi-agent stock investment strategy analyzer

Financial analysts need real-time insights to make informed investment decisions, but manually analyzing vast amounts of financial data, news, and market signals is slow and inefficient. 

A multi-agent AI system can automate this process, providing faster, data-driven recommendations.

In this example, we build a Stock Investment Strategy Analyzer, a multi-agent workflow that:

  • Generates a structured investment report with data-driven insights and a buy rating.
  • Tracks market trends by gathering and analyzing real-time financial news.
  • Evaluates financial performance, competitive landscape, and risk factors using dynamic agents.

How dynamic agent creation works

Unlike static multi-agent workflows, this system creates agents on demand based on real-time market data. The primary financial analyst agent dynamically generates a cohort of specialized agents, each with a unique role.


Workflow breakdown

  1. The primary financial analyst agent gathers and processes initial news reports on a stock of interest.

  2. It then generates specialized agents, assigning them roles based on real-time data insights.

  3. Specialized agents analyze different factors, including:
    – Financial performance (balance sheets, earnings reports)
    – Competitive landscape (industry positioning, market threats)
    – External market signals (web searches, news sentiment analysis)

  4. A set of reporting agents compiles insights into a structured investment report with a buy/sell recommendation.

This dynamic agent creation allows the system to adapt in real time, scaling resources efficiently while ensuring specialized agents handle relevant tasks.
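The spawn step can be modeled in plain Python. This sketch is purely illustrative: the topic-to-role mapping is hypothetical, and a real primary agent would typically ask an LLM to choose roles from the news it has read rather than consult a static table.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    role: str
    focus: str

def spawn_specialists(news_topics: list[str]) -> list[Agent]:
    """The primary analyst maps topics found in real-time news to
    specialized agents with matching roles (hypothetical mapping)."""
    role_map = {
        "earnings": ("financial-performance", "balance sheets, earnings reports"),
        "competitor": ("competitive-landscape", "industry positioning, market threats"),
        "sentiment": ("market-signals", "web searches, news sentiment analysis"),
    }
    return [
        Agent(*role_map[topic]) for topic in news_topics if topic in role_map
    ]

cohort = spawn_specialists(["earnings", "sentiment"])
print([a.role for a in cohort])  # ['financial-performance', 'market-signals']
```

Because the cohort is derived from the input data, different stocks and news cycles produce different sets of agents, which is exactly why compute needs cannot be fixed in advance.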

Infrastructure orchestration with Covalent

The combined power of DataRobot and Agnostiq’s Covalent platform eliminates the need to manually build and deploy Docker images. Instead, AI practitioners can simply define their package dependencies, and Covalent handles the rest.

Step 1: Define the compute environment

  • No manual setup required. Simply list dependencies and Covalent provisions the necessary environment.
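The "dependencies as data" idea can be sketched in a few lines of plain Python. This is not Covalent's actual API; the environment name and package list are invented, and the point is only that the spec is declarative while the platform resolves and builds it:

```python
def define_environment(name: str, pip: list[str], python: str = "3.11") -> dict:
    """Normalize a declarative environment spec; the platform (not the
    practitioner) would turn this into a container image."""
    if not pip:
        raise ValueError("an environment needs at least one dependency")
    return {"name": name, "python": python, "pip": sorted(set(pip))}

env = define_environment(
    "analyst-agents",
    pip=["vllm", "transformers", "yfinance", "transformers"],  # duplicates are fine
)
print(env["pip"])  # ['transformers', 'vllm', 'yfinance']
```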

Step 2: Provision compute resources in a software-defined manner

Each agent requires different hardware, so we define compute resources for each accordingly.


Covalent automates compute provisioning, allowing AI developers to define compute needs in Python while handling resource allocation across multiple cloud and on-prem environments. 

Acting as an “orchestrator of orchestrators,” it bridges the gap between agentic logic and scalable infrastructure, dynamically assigning workloads to the best available compute resources. This removes the burden of manual infrastructure management, making multi-agent applications easier to scale and deploy.

Combined with DataRobot’s governance, monitoring, and observability, it gives teams the flexibility to manage agentic AI more efficiently. 

  • Flexibility: Agents using large models (e.g., Llama 3.3 70B) can be assigned to multi-GPU A100/H100 instances, while lightweight agents run on CPU-based infrastructure.

  • Automatic scaling: Covalent provisions resources across clouds and on-prem as needed, eliminating manual provisioning.
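The placement behavior described above can be sketched as a cost-aware selection over available backends. Backend names, capacities, and hourly prices are made up for the example; a real scheduler would also weigh latency and availability:

```python
# Illustrative backend catalog (invented names and prices).
BACKENDS = [
    {"name": "on-prem-cluster", "gpus": 2, "gpu_type": "A100", "cost_per_hr": 0.0},
    {"name": "gpu-cloud", "gpus": 8, "gpu_type": "A100", "cost_per_hr": 14.0},
    {"name": "hyperscaler", "gpus": 8, "gpu_type": "H100", "cost_per_hr": 40.0},
]

def place(job_gpus: int, gpu_type: str) -> str:
    """Pick the cheapest backend with enough GPUs of the right type."""
    candidates = [
        b for b in BACKENDS
        if b["gpus"] >= job_gpus and b["gpu_type"] == gpu_type
    ]
    if not candidates:
        raise RuntimeError("no backend satisfies the request")
    return min(candidates, key=lambda b: b["cost_per_hr"])["name"]

print(place(1, "A100"))  # on-prem-cluster: free capacity is used first
print(place(4, "A100"))  # gpu-cloud: the job spills over when on-prem is too small
```

The useful property is that the same job description works unchanged as backends are added or removed, which is the essence of software-defined provisioning.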

Once compute resources are provisioned, agents can seamlessly interact through a deployed inference endpoint for real-time decision-making. 

Step 3: Deploy an AI inference endpoint

For real-time agent interactions, Covalent makes deploying inference endpoints seamless. Here’s an inference service setup for our primary financial analyst agent using Llama 3.1 8B:

  • Persistent inference service enables multi-agent interactions in real time. 
  • Supports lightweight and large-scale models. Simply adjust the execution environment as needed. 

Want to run a 405B parameter model that requires 8x H100s? Just define another executor and deploy it in the same workflow.
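The load-once, query-many lifecycle of such an endpoint can be modeled with a small class. This is a hypothetical sketch, not Covalent's deployment API: a real service would load model weights onto the provisioned GPUs and serve requests over HTTP.

```python
class InferenceService:
    """Minimal model of a persistent inference endpoint."""

    def __init__(self, model_name: str):
        self.model_name = model_name
        self.loaded = False

    def start(self) -> "InferenceService":
        # In production, this step would load weights onto the GPUs once.
        self.loaded = True
        return self

    def generate(self, prompt: str) -> str:
        if not self.loaded:
            raise RuntimeError("service not started")
        return f"[{self.model_name}] analysis of: {prompt}"

svc = InferenceService("primary-analyst-8b").start()
print(svc.generate("AAPL quarterly earnings"))
# [primary-analyst-8b] analysis of: AAPL quarterly earnings
```

Swapping in a larger model is then just a matter of pointing the same service at a bigger execution environment.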

Step 4: Tearing down infrastructure

Once the workflow completes, shutting down resources is effortless.

  • No wasted compute. Resources deallocate instantly after teardown. 
  • Simplified management. No manual cleanup required.
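The teardown guarantee can be illustrated with a context manager: resources are released on exit whether the workflow succeeds or raises. The resource names are invented for the example.

```python
from contextlib import contextmanager

ACTIVE: set[str] = set()

@contextmanager
def provisioned(resource: str):
    ACTIVE.add(resource)          # provision on entry
    try:
        yield resource
    finally:
        ACTIVE.discard(resource)  # deallocate on exit, success or failure

with provisioned("a100-x4"):
    assert "a100-x4" in ACTIVE    # the resource is live inside the block
print(ACTIVE)  # set(): nothing left running after teardown
```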

Scaling AI without automation

Before jumping into the implementation, consider what it would take to build and deploy this application manually. Managing dynamic, semi-autonomous agents at scale requires constant oversight — teams must balance capabilities with guardrails, prevent unintended agent proliferation, and ensure a clear chain of responsibility.

Without automation, this is a massive infrastructure and operational burden. Covalent removes these challenges, enabling teams to orchestrate distributed applications across any environment — without vendor lock-in or specialized infra teams.

Give it a try.

Explore and customize the full working implementation in this detailed documentation. 

A look inside Covalent’s orchestration engine

Compute infrastructure abstraction

Covalent lets AI practitioners define compute requirements in Python — without manual containerization, provisioning, or scheduling. Instead of dealing with raw infrastructure, users specify abstracted compute concepts similar to serverless frameworks.

  • Run AI pipelines anywhere, from an on-prem GPU cluster to AWS P5 instances, with minimal code changes.

  • Developers can access cloud, on-prem, and hybrid compute resources through a single Python interface.

Cloud-agnostic orchestration: Scaling across distributed environments

Covalent operates as an “orchestrator of orchestrators,” a layer above traditional orchestrators like Kubernetes, Run:ai, and SLURM, enabling cross-cloud and multi-data-center orchestration.

  • Abstracts clusters, not just VMs. The first generation of orchestrators abstracted VMs into clusters. Covalent takes it further by abstracting clusters themselves.
  • Eliminates DevOps overhead. AI teams get cloud flexibility without vendor lock-in, while Covalent automates provisioning and scaling.

Workflow orchestration for agentic AI pipelines

Covalent includes native workflow orchestration built for high-throughput, parallel AI workloads.

  • Optimizes execution across hybrid compute environments. Ensures seamless coordination between different models, agents, and compute instances.

  • Orchestrates complex AI workflows. Ideal for multi-step, multi-model agentic AI applications.
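The fan-out/fan-in shape of such a pipeline can be sketched with standard-library concurrency; the agent functions here are trivial stand-ins for real LLM-backed agents.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(factor: str) -> str:
    # Stand-in for a specialized agent analyzing one factor.
    return f"{factor}: ok"

def compile_report(findings: list[str]) -> str:
    # Stand-in for the reporting agent joining all findings.
    return " | ".join(sorted(findings))

factors = ["financials", "competition", "sentiment"]
with ThreadPoolExecutor() as pool:            # fan out: agents run in parallel
    findings = list(pool.map(analyze, factors))
print(compile_report(findings))               # fan in: one consolidated report
# competition: ok | financials: ok | sentiment: ok
```

A workflow orchestrator generalizes this pattern across machines and clusters instead of threads, while preserving the same high-throughput, parallel structure.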

Designed for evolving AI workloads

Originally built for quantum and HPC applications, Covalent now unifies diverse computing paradigms with a modular architecture and plug-in ecosystem.

  • Extensible to new HPC technologies & hardware. Ensures applications remain future-proof as new AI hardware enters the market.
DataRobot Covalent AI stack architecture



By integrating Covalent’s pluggable compute orchestrator, DataRobot extends its capabilities as an infrastructure-agnostic AI platform, enabling the deployment of AI applications that require large-scale, distributed GPU workloads while remaining adaptable to emerging HPC technologies and hardware vendors.

Bringing agentic AI to production without the complexity

Agentic AI applications introduce new levels of complexity, from managing multi-agent workflows to orchestrating diverse compute environments. With Covalent now part of DataRobot, AI teams can focus on building, not infrastructure.

Whether deploying AI applications across cloud, on-prem, or hybrid environments, this integration provides the flexibility, scalability, and control needed to move from experimentation to production, seamlessly.

Big things are ahead for agentic AI. This is just the beginning of simplifying orchestration, governance, and scalability. Stay tuned for new capabilities coming soon and sign up for a free trial to explore more.

The post Agentic AI: Real-world business impact, enterprise-ready solutions appeared first on DataRobot.

Too Good to Be True?

Increasing Numbers of Organizations Ban DeepSeek

Wary of code implanted in DeepSeek that enables the AI chatbot to send user data to the Chinese government, increasing numbers of countries and organizations are simply banning it.

Italy, Taiwan and Australia have already given the cold shoulder to the app — China’s wunderkind answer to ChatGPT.

Other government entities joining the ban include Texas, NASA, the U.S. Navy and the Pentagon.

Observes writer Kyle Wiggers: “Corporations have banned DeepSeek, too — by the hundreds.”

In other news and analysis on AI writing:

*In-Depth Guide: One Writer’s Take: Grammarly Beats Apple’s AI Writing Tools: For writing veteran Adam Engst, there’s no competition in a shoot-out between Grammarly and Apple: Grammarly wins, hands down.

Engst’s biggest beef with Apple’s AI Writing Tools: “While Grammarly integrates seamlessly into your text and clearly shows what will happen if you accept a change in nearly all situations, Apple’s Writing Tools require constant activation and provide significantly less feedback about their changes.”

For an in-depth comparison of the two, this is the place to click.

*OpenAI’s New ‘Deep Research’: A Game-Changer for Writers?: Writer Azeem Azhar believes OpenAI’s Deep Research — an AI tool capable of extremely in-depth Web research that can also auto-generate an in-depth, written report of its analysis — represents yet another inflection point in the advancement of AI for writers and other researchers.

Observes Azhar: “DeepResearch is a milestone in how we access and manipulate knowledge.

“I have run several queries through DeepResearch. Each time I pass a request to DeepResearch it evaluates it and, like a good researcher, asks for clarifications.

“In one of these, I asked it to research the comparative environmental costs — from energy, water, waste, and emissions — of a range of mainstream activities.

“Once I have responded to the question, DeepResearch disappeared off to do the work. In this case, the bot worked for 73 minutes and consulted more than 29 sources.

“The output was a table covering 11 different activities with six different dimensions of environmental impact. The full text is 1,900 words, excluding the dozens of footnote hyperlinks.

“For 73 minutes’ work, this is excellent. I certainly could not have done this in an hour.”

Currently, DeepResearch is only available to users of ChatGPT Pro — a $200/month version of ChatGPT.

*Google Adds Enhanced Brainstorming to its ChatGPT Competitor, Gemini: Google is out with a souped-up version of its AI Chatbot — dubbed ‘Gemini 2.0 Flash Thinking Experimental.’

Observes writer Eric Hal Schwartz: “This combines the speed of the original 2.0 model with improved reasoning abilities.

“So, it can think fast but will think things through before it speaks. For anyone who has ever wished their AI assistant could process more complex ideas without slowing its response time, this update is a promising step forward.”

*OpenAI Launches Major Expansion Into Japan: ChatGPT’s maker is teaming up with investor SoftBank Group to expand aggressively into Japan.

Observes writer Kosaku Narioka: “The 50-50 joint venture will begin offering the services first in Japan and establish a model for global adoption.

“As the first case, the Japanese technology investment company will spend $3 billion annually to use OpenAI’s technology across its group businesses.”

*AI Proofreading/Editing Tools Enjoy Steady, Increasing Demand: Consumer appetite for automated proofreading tools looks healthy, with the market projected to grow at a 16% annual clip through 2031, according to Market Research Intellect.

Key players in the market, according to MRI, are:

~Grammarly
~ProWritingAid
~Ginger Software
~WhiteSmoke
~GlobalVision
~Intelligent Editing
~RussTek
~Litera
~Druide
~ClaimMaster
~LanguageTool
~WebSpellChecker
~Linguix
~Proofread Bot
~Plagiarismchecker

*OpenAI o3: A Deep Dive Into the ChatGPT-Maker’s Most Powerful AI Reasoning Engine: Writer Michael Kerner offers a comprehensive guide to what to expect from this advanced AI engine, of particular interest to writers and others working in the hard sciences.

Observes Kerner: “While GPT-4 excels at general language tasks, the o-series focuses specifically on reasoning capabilities.

“Unlike traditional AI models, o3 is specifically designed to excel at tasks requiring deep analytical thinking, problem-solving and complex reasoning.”

*Fearless: AI Chipmaker Nvidia Reportedly Sanguine About Sensation DeepSeek: Despite roiling markets earlier this month after apparently proving that major AI advances can be made for pennies on the dollar — and without the most advanced versions of Nvidia’s renowned AI chips — DeepSeek has not rattled Nvidia, according to writer Raffael Huang.

Observes Huang: “Some investors interpreted the advance as undercutting the market in the West for Nvidia’s top-of-the-line products.

“Yet Nvidia knew that risk came with what it was doing in China.”

*Run DeepSeek — China’s Answer to ChatGPT — on Your Laptop: YouTuber NetworkChuck has figured out a way to run AI chatbot sensation DeepSeek on an everyday laptop — and shows you how in this 12-minute video.

Even better: NetworkChuck insists using DeepSeek on a laptop can be safer — in terms of data privacy — than using DeepSeek on the Web.

You be the judge.

*AI Big Picture: U.S. Legislators Mobilize to Ban DeepSeek: Talk about a persona non grata: A number of U.S. legislators are coalescing behind a bipartisan bill that would ban DeepSeek from government-owned devices.

Wary that the chatbot — China’s answer to ChatGPT — will be used for spying and data-gathering by the Chinese government, many supporters see the ban as a no-brainer.

Observes writer Natalie Andrews: The chatbot app “has intentionally hidden code that could send user login information to China Mobile, a state-owned telecommunications company that has been banned from operating in the U.S.”

Share a Link:  Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com helps everyone interested in AI-generated writing.

Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.


The post Too Good to Be True? appeared first on Robot Writers AI.

Israeli Withdrawal from Netzarim Corridor: A Step Toward Stability

A Strategic Move for Lasting Peace The Israeli Defense Forces (IDF) have fully withdrawn from the Netzarim Corridor, a significant step in honoring the ongoing ceasefire agreement with Hamas. This decision reflects Israel’s commitment to reducing hostilities and fostering stability in the region. The withdrawal opens up movement for Palestinians while ensuring Israel maintains security...

The post Israeli Withdrawal from Netzarim Corridor: A Step Toward Stability appeared first on 1redDrop.

Civilization VII VR: A Game-Changer for Strategy Fans?

Civilization Enters the VR Era The Civilization franchise has long been the gold standard of turn-based strategy gaming, captivating players with its deep mechanics, historical immersion, and endless replayability. Now, with the upcoming launch of Sid Meier’s Civilization VII – VR on the Meta Quest 3 and 3S, Firaxis Games and 2K are set to...

The post Civilization VII VR: A Game-Changer for Strategy Fans? appeared first on 1redDrop.

PlayStation Network’s 24-Hour Outage: A Crisis in Communication?

The Unexpected Blackout For over 24 hours, PlayStation Network (PSN) users worldwide faced a frustrating and unexplained outage. Reports of server issues first emerged on Friday evening at approximately 6:30 p.m. EST, sending gamers into a frenzy as they were unable to access online services, including multiplayer games, the PlayStation Store, and even offline gaming...

The post PlayStation Network’s 24-Hour Outage: A Crisis in Communication? appeared first on 1redDrop.

Community College Student and Drone Team Qualify for Finals of Prestigious AI Autonomous Drone Race in Abu Dhabi

One of 12 international teams to qualify for the finals and compete for the $1 million prize, the Hornet Drone Team represents a groundbreaking collaboration between Fullerton College, students from UCI, and top-tier pilots from Cyclone Drone Racing.

Robot Talk Episode 108 – Giving robots the sense of touch, with Anuradha Ranasinghe

Claire chatted to Anuradha Ranasinghe from Liverpool Hope University about haptic (touch) sensors for wearable tech and robotics.

Anuradha Ranasinghe earned her PhD in robotics from King’s College London in 2015, focusing on haptic-based human control in low-visibility conditions. She is now a senior lecturer in robotics at Liverpool Hope University, researching haptics, miniaturized sensors, and perception. Her work has received national and international media attention, including features by EPSRC, CBS Radio, Liverpool Echo, and Techxplore. She has published in leading robotics conferences and journals, and she has presented her findings at various international conferences.

ASUS Zenfone 12 Ultra: A Premium AI Powerhouse or Just Another Flagship?

The smartphone industry is experiencing a rapid shift toward AI-powered features, and ASUS has jumped headfirst into this evolution with the launch of the Zenfone 12 Ultra. Announced on February 6, 2025, this premium device is designed to blend modern aesthetics with cutting-edge AI capabilities, redefining what a flagship phone should offer. But does it...

The post ASUS Zenfone 12 Ultra: A Premium AI Powerhouse or Just Another Flagship? appeared first on 1redDrop.


Maximum process reliability for the protein source of the future

For more than 30 years, the SCHUNK PGN-plus-P gripper has been the most versatile gripper on the market, constantly finding new applications. A current and unusual example is its use in the automated production of animal feed from insect larvae in Austria.