
Insect-sized jumping robot can traverse challenging terrains and carry heavy payloads

Insect-scale robots can squeeze into places their larger counterparts can't, like deep into a collapsed building to search for survivors after an earthquake. However, as they move through the rubble, tiny crawling robots might encounter tall obstacles they can't climb over or slanted surfaces they will slide down. While aerial robots could avoid these hazards, the amount of energy required for flight would severely limit how far the robot can travel into the wreckage before it needs to return to base and recharge.

A new robotic gripper made of measuring tape is sizing up fruit and veggie picking

It's a game a lot of us played as children—and maybe even later in life: unspooling measuring tape to see how far it would extend before bending. But to engineers at the University of California San Diego, this game was an inspiration, suggesting that measuring tape could become a great material for a robotic gripper.

The enterprise path to agentic AI

TL;DR:

CIOs face mounting pressure to adopt agentic AI — but skipping steps leads to cost overruns, compliance gaps, and complexity you can’t unwind. This post outlines a smarter, staged path to help you scale AI with control, clarity, and confidence.


AI leaders are under immense pressure to implement solutions that are both cost-effective and secure. The challenge lies not only in adopting AI but also in keeping pace with advancements that can feel overwhelming. 

This often leads to the temptation to dive headfirst into the latest innovations to stay competitive.

However, jumping straight into complex multi-agent systems without a solid foundation is akin to constructing the upper floors of a building before laying its base, resulting in a structure that’s unstable and potentially hazardous.

In this post, we walk through how to guide your organization through each stage of agentic AI maturity — securely, efficiently, and without costly missteps.

Understanding key AI concepts


Before delving into the stages of AI maturity, it’s essential to establish a clear understanding of key concepts:

Deterministic systems

Deterministic systems are the foundational building blocks of automation.

  • Follow a fixed set of predefined rules where the outcome is fully predictable. Given the same input, the system will always produce the same output. 
  • Do not incorporate randomness or ambiguity. 
  • While all deterministic systems are rule-based, not all rule-based systems are deterministic. 
  • Ideal for tasks requiring consistency, traceability, and control.
  • Examples: Basic automation scripts, legacy enterprise software, and scheduled data transfer processes.
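A deterministic system can be illustrated in a few lines. The invoice-routing rules below are invented for illustration; the point is that identical inputs always produce the identical output:

```python
def route_invoice(amount: float, vendor_type: str) -> str:
    """Deterministic routing: fixed rules, so the same input -> the same output."""
    if vendor_type == "preferred" and amount < 10_000:
        return "auto-approve"
    if amount < 1_000:
        return "auto-approve"
    return "manual-review"

# Identical inputs always yield the identical decision: no randomness involved.
assert route_invoice(500, "new") == route_invoice(500, "new") == "auto-approve"
```

Because every branch is explicit, every decision is traceable back to a rule, which is exactly what makes deterministic systems suited to tasks requiring consistency and control.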

Rule-based systems

A broader category that includes deterministic systems but can also introduce variability (e.g., stochastic behavior).

  • Operate based on a set of predefined conditions and actions — “if X, then Y.” 
  • May incorporate deterministic logic or stochastic elements, depending on design.
  • Powerful for enforcing structure. 
  • Lack autonomy or reasoning capabilities.
  • Examples: Email filters, Robotic Process Automation (RPA), and complex infrastructure protocols like internet routing. 
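The “if X, then Y” structure, with an optional stochastic element, can be sketched as follows. The filter rules and domain names are hypothetical:

```python
import random

def filter_email(subject: str, sender: str) -> str:
    # Deterministic rules: "if X, then Y".
    if sender.endswith("@trusted-partner.example"):
        return "inbox"
    if "free money" in subject.lower():
        return "spam"
    # A stochastic element: randomly sample a fraction of borderline
    # mail for human review, something a purely deterministic system
    # would never do.
    if "offer" in subject.lower() and random.random() < 0.1:
        return "review-queue"
    return "inbox"
```

The first two branches are deterministic; the third introduces variability, which is why a rule-based system as a whole need not be deterministic.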

Process AI

A step beyond rule-based systems. 

  • Powered by Large Language Models (LLMs) and Vision-Language Models (VLMs).
  • Trained on extensive datasets to generate diverse content (e.g., text, images, code) in response to input prompts.
  • Responses are grounded in pre-trained knowledge and can be enriched with external data via techniques like Retrieval-Augmented Generation (RAG).
  • Does not make autonomous decisions — operates only when prompted.
  • Examples: Generative AI chatbots, summarization tools, and content-generation applications powered by LLMs.
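As a rough sketch of the RAG pattern mentioned above, the toy retriever below scores documents by keyword overlap and splices the best match into a prompt. A production system would use embeddings and a vector database; the documents and function names here are invented:

```python
# Hypothetical internal documents standing in for a real document store.
DOCS = [
    "Shipment 481 from Vendor A is delayed two days at the port.",
    "Warehouse B inventory of SKU-9 fell below the reorder point.",
    "Q3 vendor scorecards are due at the end of the month.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many query words they share (a crude relevance score)."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the LLM's answer by injecting retrieved context into the prompt."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
```

Note the defining property of Process AI: nothing happens until a user supplies a query, and the output is a prompt for an LLM, not an action.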

Single-agent systems

Introduce autonomy, planning, and tool usage, elevating foundational AI into more complex territory.

  • AI-driven programs designed to perform specific tasks independently. 
  • Can integrate with external tools and systems (e.g., databases or APIs) to complete tasks.
  • Do not collaborate with other agents — operate alone within a task framework.
  • Not to be confused with RPA: RPA is ideal for highly standardized, rules-based tasks where logic doesn’t require reasoning or adaptation.
  • Examples: AI-driven assistants for forecasting, monitoring, or automated task execution that operate independently.
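A single-agent loop can be sketched as observe, decide, act. The stock-checking and ordering “tools” below are hypothetical stand-ins for real database and API integrations:

```python
def check_stock(sku: str, inventory: dict) -> int:
    """Tool 1: read system state (stand-in for a database query)."""
    return inventory.get(sku, 0)

def place_order(sku: str, qty: int, orders: list) -> None:
    """Tool 2: act on the world (stand-in for a procurement API call)."""
    orders.append({"sku": sku, "qty": qty})

def run_agent(inventory: dict, reorder_point: int, target: int) -> list:
    """The agent scans every SKU, decides autonomously, and calls tools as needed."""
    orders: list = []
    for sku in inventory:
        level = check_stock(sku, inventory)
        if level < reorder_point:
            place_order(sku, target - level, orders)
    return orders

orders = run_agent({"SKU-9": 3, "SKU-2": 40}, reorder_point=10, target=50)
```

Unlike the Process AI sketch, no prompt triggers this loop: the agent decides for itself which tools to call, within its defined task framework.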

Multi-agent systems

The most advanced stage, featuring distributed decision-making, autonomous coordination, and dynamic workflows.

  • Made up of multiple AI agents that interact and collaborate to achieve complex objectives.
  • Agents dynamically decide which tools to use, when, and in what sequence.
  • Capabilities include planning, reflection, memory utilization, and cross-agent collaboration.
  • Examples: Distributed AI systems coordinating across departments like supply chain, customer service, or fraud detection.

What makes an AI system truly agentic?


To be considered truly agentic, an AI system typically demonstrates core capabilities that enable it to operate with autonomy and adaptability:

  • Planning. The system can break down a task into steps and create a plan of execution.

  • Tool calling. The AI selects and uses tools (e.g., models, functions) and initiates API calls to interact with external systems to complete tasks.

  • Adaptability. The system can adjust its actions in response to changing inputs or environments, ensuring effective performance across varying contexts.

  • Memory. The system retains relevant information across steps or sessions.

These characteristics align with widely accepted definitions of agentic AI, including frameworks discussed by AI leaders such as Andrew Ng.
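Under stated assumptions (a toy semicolon-separated task format and stubbed tools), the four capabilities map onto a loop like this:

```python
def run(task: str, tools: dict, memory: dict) -> list[str]:
    plan = [s.strip() for s in task.split(";")]              # Planning: task -> steps
    results = []
    for step in plan:
        tool = tools.get(step.split()[0], tools["default"])  # Tool calling: pick a tool
        out = tool(step, memory)
        if out == "error":                                   # Adaptability: retry on failure
            out = tools["default"](step, memory)
        memory[step] = out                                   # Memory: retained across steps
        results.append(out)
    return results

# Hypothetical tools; real ones would wrap models, functions, or API calls.
tools = {
    "fetch":   lambda step, mem: f"fetched:{step}",
    "default": lambda step, mem: f"handled:{step}",
}
memory: dict = {}
results = run("fetch shipment data; summarize results", tools, memory)
```

The structure, not the stubbed tools, is the point: each of the four capabilities has a concrete place in the loop.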

With these definitions in mind, let’s explore the stages required to progress toward implementing multi-agent systems.

Understanding agentic AI maturity stages 


For simplicity, we’ve delineated the path to more complex agentic flows into three stages. Each stage presents unique challenges and opportunities concerning cost, security, and governance.

Stage 1: Process AI


What this stage looks like

In the Process AI stage, organizations typically pilot generative AI through isolated use cases like chatbots, document summarization, or internal Q&A. These efforts are often led by innovation teams or individual business units, with limited involvement from IT.

Deployments are built around a single LLM and operate outside core systems like ERP or CRM, making integration and oversight difficult.

Infrastructure is often pieced together, governance is informal, and security measures may be inconsistent. 


Supply chain example for process AI

In the Process AI stage, a supply chain team might use a generative AI-powered chatbot to summarize shipment data or answer basic vendor queries based on internal documents. This tool can pull in data through a RAG workflow to provide insights, but it does not take any action autonomously.

For example, the chatbot could summarize inventory levels, predict demand based on historical trends, and generate a report for the team to review. However, the team must then decide what action to take (e.g., place restock orders or adjust supply levels).

The system simply provides insights — it doesn’t make decisions or take actions.


Common obstacles

While early AI initiatives can show promise, they often create operational blind spots that stall progress, drive up costs, and increase risk if left unaddressed.

  • Data integration and quality. Most organizations struggle to unify data across disconnected systems, limiting the reliability and relevance of generative AI output.

  • Scalability challenges. Pilot projects often stall when teams lack the infrastructure, access, or strategy to move from proof of concept to production.

  • Inadequate testing and stakeholder alignment. Generative outputs are frequently released without rigorous QA or business user acceptance, leading to trust and adoption issues.

  • Change management friction. As generative AI reshapes roles and workflows, poor communication and planning can create organizational resistance.

  • Lack of visibility and traceability. Without model tracking or auditability, it’s difficult to understand how decisions are made or pinpoint where errors occur.

  • Bias and fairness risks. Generative models can reinforce or amplify bias in training data, creating reputational, ethical, or compliance risks.

  • Ethical and accountability gaps. AI-generated content can blur ethical lines or be misused, raising questions around responsibility and control.

  • Regulatory complexity. Evolving global and industry-specific regulations make it difficult to ensure ongoing compliance at scale.


Tool and infrastructure requirements

Before advancing to more autonomous systems, organizations must ensure their infrastructure is equipped to support secure, scalable, and cost-effective AI deployment.

  • Fast, flexible vector database updates to manage embeddings as new data becomes available.

  • Scalable data storage to support large datasets used for training, enrichment, and experimentation.

  • Sufficient compute resources (CPUs/GPUs) to power training, tuning, and running models at scale.

  • Security frameworks with enterprise-grade access controls, encryption, and monitoring to protect sensitive data.

  • Multi-model flexibility to test and evaluate different LLMs and determine the best fit for specific use cases.

  • Benchmarking tools to visualize and compare model performance across assessments and testing.

  • Realistic, domain-specific data to test responses, simulate edge cases, and validate outputs.

  • A QA prototyping environment that supports quick setup, user acceptance testing, and iterative feedback.

  • Embedded security, AI, and business logic for consistency, guardrails, and alignment with organizational standards.

  • Real-time intervention and moderation tools for IT and security teams to monitor and control AI outputs in real time.

  • Robust data integration capabilities to connect sources across the organization and ensure high-quality inputs.

  • Elastic infrastructure to scale with demand without compromising performance or availability.

  • Compliance and audit tooling that enables documentation, change tracking, and regulatory adherence.


Preparing for the next stage

To build on early generative AI efforts and prepare for more autonomous systems, organizations must lay a solid operational and organizational foundation.

  • Invest in AI-ready data. It doesn’t need to be perfect, but it must be accessible, structured, and secure to support future workflows.

  • Use vector database visualizations. This helps teams identify knowledge gaps and validate the relevance of generative responses.

  • Apply business-driven QA/UAT. Prioritize acceptance testing with the end users who will rely on generative output, not just technical teams.

  • Stand up a secure AI registry. Track model versions, prompts, outputs, and usage across the organization to enable traceability and auditing.

  • Implement baseline governance. Establish foundational frameworks like role-based access control (RBAC), approval flows, and data lineage tracking.

  • Create repeatable workflows. Standardize the AI development process to move beyond one-off experimentation and enable scalable output.

  • Build traceability into generative AI usage. Ensure transparency around data sources, prompt construction, output quality, and user activity.

  • Mitigate bias early. Use diverse, representative datasets and regularly audit model outputs to identify and address fairness risks.

  • Gather structured feedback. Establish feedback loops with end users to catch quality issues, guide improvements, and refine use cases.

  • Encourage cross-functional oversight. Involve legal, compliance, data science, and business stakeholders to guide strategy and ensure alignment.
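Baseline governance such as RBAC can start as simply as a permission map plus a logged check. The roles, actions, and log structure below are illustrative only:

```python
# Hypothetical role-to-permission map; real deployments would back this
# with an identity provider rather than an in-memory dict.
ROLE_PERMISSIONS = {
    "analyst": {"query_model"},
    "ml_eng":  {"query_model", "deploy_model"},
    "admin":   {"query_model", "deploy_model", "approve_release"},
}

AUDIT_LOG: list[tuple[str, str, bool]] = []

def authorize(user_role: str, action: str) -> bool:
    """Check permissions and record every decision for traceability."""
    allowed = action in ROLE_PERMISSIONS.get(user_role, set())
    AUDIT_LOG.append((user_role, action, allowed))
    return allowed
```

Logging denials as well as grants is deliberate: the audit trail is what later enables the traceability and compliance tooling the stages above depend on.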


Key takeaways

Process AI is where most organizations begin — but it’s also where many get stuck. Without strong data foundations, clear governance, and scalable workflows, early experiments can introduce more risk than value.

To move forward, CIOs need to shift from exploratory use cases to enterprise-ready systems — with the infrastructure, oversight, and cross-functional alignment required to support safe, secure, and cost-effective AI adoption at scale.

Stage 2: Single-agent systems


What this stage looks like

At this stage, organizations begin tapping into true agentic AI — deploying single-agent systems that can act independently to complete tasks. These agents are capable of planning, reasoning, and calling tools like APIs or databases to get work done without human involvement.

Unlike earlier generative systems that wait for prompts, single-agent systems can decide when and how to act within a defined scope.

This marks a clear step into autonomous operations—and a critical inflection point in an organization’s AI maturity.


Supply chain example for single-agent systems

Let’s revisit the supply chain example. With a single-agent system in place, the team can now autonomously manage inventory. The system monitors real-time stock levels across regional warehouses, forecasts demand using historical trends, and places restock orders automatically via an integrated procurement API—without human input.

Unlike the process AI stage, where a chatbot only summarizes data or answers queries based on prompts, the single-agent system acts autonomously. It makes decisions, adjusts inventory, and places orders within a predefined workflow.

However, because the agent is making independent decisions, any errors in configuration or missed edge cases (e.g., unexpected demand spikes) could result in issues like stockouts, overordering, or unnecessary costs.

This is a critical shift. It’s not just about providing information anymore; it’s about the system making decisions and executing actions, making governance, monitoring, and guardrails more crucial than ever.
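One way to sketch such guardrails, with invented thresholds: the agent’s proposed order is capped, and suspected demand spikes are escalated to a human rather than executed:

```python
# Illustrative hard cap; a real system would configure this per SKU.
MAX_ORDER_QTY = 500

def guarded_order(sku: str, proposed_qty: int, recent_daily_demand: float) -> dict:
    """Apply guardrails to an agent's proposed restock before it reaches the API."""
    # Escalate on a suspected demand spike instead of over-ordering.
    if proposed_qty > 10 * recent_daily_demand:
        return {"action": "escalate", "sku": sku, "reason": "demand spike"}
    # Clamp any single order to the hard cap.
    qty = min(proposed_qty, MAX_ORDER_QTY)
    return {"action": "order", "sku": sku, "qty": qty}
```

The design choice is that guardrails sit between the agent’s decision and the system of record, so a misconfigured agent can propose a bad order but cannot execute one.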


Common obstacles

As single-agent systems unlock more advanced automation, many organizations run into practical roadblocks that make scaling difficult.

  • Legacy integration challenges. Many single-agent systems struggle to connect with outdated architectures and data formats, making integration technically complex and resource-intensive.

  • Latency and performance issues. As agents perform more complex tasks, delays in processing or tool calls can degrade user experience and system reliability.

  • Evolving compliance requirements. Emerging regulations and ethical standards introduce uncertainty. Without robust governance frameworks, staying compliant becomes a moving target.

  • Compute and talent demands. Running agentic systems requires significant infrastructure and specialized skills, putting pressure on budgets and headcount planning.

  • Tool fragmentation and vendor lock-in. The nascent agentic AI landscape makes it hard to choose the right tooling. Committing to a single vendor too early can limit flexibility and drive up long-term costs.

  • Traceability and tool call visibility. Many organizations lack the necessary level of observability and granular intervention required for these systems. Without detailed traceability and the ability to intervene at a granular level, systems can easily run amok, leading to unpredictable outcomes and increased risk. 


Tool and infrastructure requirements

At this stage, your infrastructure needs to do more than just support experimentation—it needs to keep agents connected, running smoothly, and operating securely at scale.

  • Integration platform with tools that facilitate seamless connectivity between the AI agent and your core business systems, ensuring smooth data flow across environments.

  • Monitoring systems designed to track and analyze the agent’s performance and outcomes, flag issues, and surface insights for ongoing improvement.

  • Compliance management tools that help enforce AI policies and adapt quickly to evolving regulatory requirements.

  • Scalable, reliable storage to handle the growing volume of data generated and exchanged by AI agents.

  • Consistent compute access to keep agents performing efficiently under fluctuating workloads.

  • Layered security controls that protect data, manage access, and maintain trust as agents operate across systems.

  • Dynamic intervention and moderation that can detect when processes aren’t adhering to policies, intervene in real time, and send alerts for human intervention. 


Preparing for the next stage

Before layering on additional agents, organizations need to take stock of what’s working, where the gaps are, and how to strengthen coordination, visibility, and control at scale.

  • Evaluate current agents. Identify performance limitations, system dependencies, and opportunities to improve or expand automation.

  • Build coordination frameworks. Establish systems that will support seamless interaction and task-sharing between future agents.

  • Strengthen observability. Implement monitoring tools that provide real-time insights into agent behavior, outputs, and failures at the tool level and the agent level.

  • Engage cross-functional teams. Align AI goals and risk management strategies across IT, legal, compliance, and business units.

  • Embed automated policy enforcement. Build in mechanisms that uphold security standards and support regulatory compliance as agent systems expand.


Key takeaways

Single-agent systems offer significant capability by enabling autonomous actions that enhance operational efficiency. However, they often come with higher costs compared to non-agentic RAG workflows, like those in the process AI stage, as well as increased latency and variability in response times.

Since these agents make decisions and take actions on their own, they require tight integration, careful governance, and full traceability.

If foundational controls like observability, governance, security, and auditability aren’t firmly established in the process AI stage, these gaps will only widen, exposing the organization to greater risks around cost, compliance, and brand reputation.

Stage 3: Multi-agent systems


What this stage looks like 

In this stage, multiple AI agents work together — each with its own task, tools, and logic — to achieve shared goals with minimal human involvement. These agents operate autonomously, but they also coordinate, share information, and adjust their actions based on what others are doing.

Unlike single-agent systems, decisions aren’t made in isolation. Each agent acts based on its own observations and context, contributing to a system that behaves more like a team: planning, delegating, and adapting in real time.

This kind of distributed intelligence unlocks powerful use cases and massive scale. But as one can imagine, it also introduces significant operational complexity: overlapping decisions, system interdependencies, and the potential for cascading failures if agents fall out of sync. 

Getting this right demands strong architecture, real-time observability, and tight controls.


Supply chain example for multi-agent systems

In earlier stages, a chatbot was used to summarize shipments and a single-agent system was deployed to automate inventory restocking. 

In this example, a network of AI agents is deployed, each specializing in a different part of the operation, from forecasting and video analysis to scheduling and logistics.

When an unexpected shipment volume is forecasted, agents kick into action:

  • A forecasting agent projects capacity needs.
  • A computer vision agent analyzes live warehouse footage to find underutilized space. 
  • A delay prediction agent taps time series data to anticipate late arrivals. 

These agents communicate and coordinate in real time, adjusting workflows, updating the warehouse manager, and even triggering downstream changes like rescheduling vendor pickups.

This level of autonomy unlocks speed and scale that manual processes can’t match. But it also means one faulty agent — or a breakdown in communication — can ripple across the system.

At this stage, visibility, traceability, intervention, and guardrails become non-negotiable.
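The coordination described above can be sketched as agents posting findings to a shared message bus that a coordinator reconciles. The agent outputs and names are invented; real deployments would use an orchestration framework:

```python
# Each agent writes its finding to a shared bus (a dict standing in for
# a real message broker). Values are hard-coded for illustration.
def forecasting_agent(bus: dict) -> None:
    bus["capacity_needed"] = 120   # projected pallets of incoming volume

def vision_agent(bus: dict) -> None:
    bus["free_capacity"] = 90      # pallets of underutilized space found in footage

def delay_agent(bus: dict) -> None:
    bus["late_arrivals"] = 2       # shipments predicted to arrive late

def coordinator(bus: dict) -> str:
    """Merge the agents' findings into a single downstream decision."""
    shortfall = bus["capacity_needed"] - bus["free_capacity"]
    if shortfall > 0:
        return f"reschedule {bus['late_arrivals']} vendor pickups; need {shortfall} pallets"
    return "no action"

bus: dict = {}
for agent in (forecasting_agent, vision_agent, delay_agent):
    agent(bus)
decision = coordinator(bus)
```

Even this toy version shows the failure mode the post warns about: if one agent posts a wrong value to the bus, the coordinator propagates the error downstream, which is why observability over inter-agent messages matters.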


Common obstacles

The shift to multi-agent systems isn’t just a step up in capability — it’s a leap in complexity. Each new agent added to the system introduces new variables, new interdependencies, and new ways for things to break if your foundations aren’t solid.

  • Escalating infrastructure and operational costs. Running multi-agent systems is expensive—especially as each agent drives additional API calls, orchestration layers, and real-time compute demands. Costs compound quickly across multiple fronts:

    • Specialized tooling and licenses. Building and managing agentic workflows often requires niche tools or frameworks, increasing costs and limiting flexibility.

    • Resource-intensive compute. Multi-agent systems demand high-performance hardware, like GPUs, that are costly to scale and difficult to manage efficiently.

    • Scaling the team. Multi-agent systems require niche expertise across AI, MLOps, and infrastructure — often adding headcount and increasing payroll costs in an already competitive talent market.

  • Operational overhead. Even autonomous systems need hands-on support. Standing up and maintaining multi-agent workflows often requires significant manual effort from IT and infrastructure teams, especially during deployment, integration, and ongoing monitoring.

  • Deployment sprawl. Managing agents across cloud, edge, desktop, and mobile environments introduces significantly more complexity than predictive AI, which typically relies on a single endpoint. In comparison, multi-agent systems often require 5x the coordination, infrastructure, and support to deploy and maintain.

  • Misaligned agents. Without strong coordination, agents can take conflicting actions, duplicate work, or pursue goals out of sync with business priorities.

  • Security surface expansion. Each additional agent introduces a new potential vulnerability, making it harder to protect systems and data end-to-end.

  • Vendor and tooling lock-in. Emerging ecosystems can lead to heavy dependence on a single provider, making future changes costly and disruptive.

  • Cloud constraints. When multi-agent workloads are tied to a single provider, organizations risk running into compute throttling, burst limits, or regional capacity issues—especially as demand becomes less predictable and harder to control.

  • Autonomy without oversight. Agents may exploit loopholes or behave unpredictably if not tightly governed, creating risks that are hard to contain in real time.

  • Dynamic resource allocation. Multi-agent workflows often require infrastructure that can reallocate compute (e.g., GPUs, CPUs) in real time—adding new layers of complexity and cost to resource management.

  • Model orchestration complexity. Coordinating agents that rely on diverse models or reasoning strategies introduces integration overhead and increases the risk of failure across workflows.

  • Fragmented observability. Tracing decisions, debugging failures, or identifying bottlenecks becomes exponentially harder as agent count and autonomy grow.

  • No clear “done.” Without strong task verification and output validation, agents can drift off-course, fail silently, or burn unnecessary compute.


Tool and infrastructure requirements

Once agents start making decisions and coordinating with each other, your systems need to do more than just keep up — they need to stay in control. These are the core capabilities to have in place before scaling multi-agent workflows in production.

  • Elastic compute resources. Scalable access to GPUs, CPUs, and high-performance infrastructure that can be dynamically reallocated to support intensive agentic workloads in real time.

  • Multi-LLM access and routing. Flexibility to test, compare, and route tasks across different LLMs to control costs and optimize performance by use case.

  • Autonomous system safeguards. Built-in security frameworks that prevent misuse, protect data integrity, and enforce compliance across distributed agent actions.

  • Agent orchestration layer. Workflow orchestration tools that coordinate task delegation, tool usage, and communication between agents at scale.

  • Interoperable platform architecture. Open systems that support integration with diverse tools and technologies, helping you avoid lock-in and enabling long-term flexibility.

  • End-to-end dynamic observability and intervention. Monitoring, moderation, and traceability tools that not only surface agent behavior, detect anomalies, and support real-time intervention, but also adapt as agents evolve. These tools can identify when agents attempt to exploit loopholes or create new ones, triggering alerts or halting processes to re-engage human oversight.
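Multi-LLM routing can start as a simple cost-versus-capability table. The model names, tiers, and prices below are invented for illustration:

```python
# Hypothetical model catalog: higher tier means more capable and more expensive.
MODELS = [
    {"name": "small-llm",  "tier": 1, "cost_per_1k": 0.1},
    {"name": "medium-llm", "tier": 2, "cost_per_1k": 0.5},
    {"name": "large-llm",  "tier": 3, "cost_per_1k": 2.0},
]

# Minimum capability tier each task type is assumed to need.
TASK_TIERS = {"classify": 1, "summarize": 2, "plan": 3}

def route(task_type: str) -> str:
    """Send each task to the cheapest model whose tier covers it."""
    needed = TASK_TIERS.get(task_type, 3)  # unknown tasks default to the top tier
    candidates = [m for m in MODELS if m["tier"] >= needed]
    return min(candidates, key=lambda m: m["cost_per_1k"])["name"]
```

In practice the tier table would be replaced by benchmark results per use case, but the principle is the same: route by requirement, not by default to the largest model.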


Preparing for the next stage

There’s no playbook for what comes after multi-agent systems, but organizations that prepare now will be the ones shaping what comes next. Building a flexible, resilient foundation is the best way to stay ahead of fast-moving capabilities, shifting regulations, and evolving risks.

  • Enable dynamic resource allocation. Infrastructure should support real-time reallocation of GPUs, CPUs, and compute capacity as agent workflows evolve.

  • Implement granular observability. Use advanced monitoring and alerting tools to detect anomalies and trace agent behavior at the most detailed level.

  • Prioritize interoperability and flexibility. Choose tools and platforms that integrate easily with other systems and support hot-swapping components and streamlined CI/CD workflows so you’re not locked into one vendor or tech stack.

  • Build multi-cloud fluency. Ensure your teams can work across cloud platforms to distribute workloads efficiently, reduce bottlenecks, avoid provider-specific limitations, and support long-term flexibility.

  • Centralize AI asset management. Use a unified registry to govern access, deployment, and versioning of all AI tools and agents.

  • Evolve security with your agents. Implement adaptive, context-aware security protocols that respond to emerging threats in real time.

  • Prioritize traceability. Ensure all agent decisions are logged, explainable, and auditable to support investigation and continuous improvement.

  • Stay current with tools and strategies. Build systems and workflows that can continuously test and integrate new models, prompts, and data sources.


Key takeaways

Multi-agent systems promise scale, but without the right foundation, they’ll amplify your problems, not solve them. 

As agents multiply and decisions become more distributed, even small gaps in governance, integration, or security can cascade into costly failures.

AI leaders who succeed at this stage won’t be the ones chasing the flashiest demos—they’ll be the ones who planned for complexity before it arrived.

Advancing to agentic AI without losing control


AI maturity doesn’t happen all at once. Each stage — from early experiments to multi-agent systems— brings new value, but also new complexity. The key isn’t to rush forward. It’s to move with intention, building on strong foundations at every step.

For AI leaders, this means scaling AI in ways that are cost-effective, well-governed, and resilient to change. 

You don’t have to do everything right now, but the decisions you make now shape how far you’ll go.

Want to evolve through your AI maturity safely and efficiently? Request a demo to see how our Agentic AI Apps Platform ensures secure, cost-effective growth at each stage.

The post The enterprise path to agentic AI appeared first on DataRobot.

Magnetic microrobot swarm enables 3D imaging of vascular networks

Angiography is a widely used medical imaging technique that allows medical researchers and doctors to capture the vascular network (i.e., blood vessels) using contrast agents, substances that enhance the visibility of specific structures inside the body when exposed to X-rays or other imaging approaches. Conventional angiography techniques rely on contrast agents that are distributed through blood vessels, leveraging the natural flow of blood in the body.

FLUID: 3D-printed open-source robot offers accessible solution for materials synthesis

A team of researchers led by Professor Keisuke Takahashi at the Faculty of Science, Hokkaido University, has created FLUID (Flowing Liquid Utilizing Interactive Device), an open-source robotic system constructed using a 3D printer and off-the-shelf electronic components.

Tiny, soft robot flexes its potential as a lifesaver

A tiny, soft, flexible robot that can crawl through earthquake rubble to find trapped victims or travel inside the human body to deliver medicine may seem like science fiction, but an international team led by researchers at Penn State is pioneering such adaptable robots by integrating flexible electronics with magnetically controlled motion.

Repurposing Protein Folding Models for Generation with Latent Diffusion


PLAID is a multimodal generative model that simultaneously generates protein 1D sequence and 3D structure, by learning the latent space of protein folding models.

The awarding of the 2024 Nobel Prize to AlphaFold2 marks an important moment of recognition for the role of AI in biology. What comes next after protein folding?

In PLAID, we develop a method that learns to sample from the latent space of protein folding models to generate new proteins. It can accept compositional function and organism prompts, and can be trained on sequence databases, which are 2-4 orders of magnitude larger than structure databases. Unlike many previous protein structure generative models, PLAID addresses the multimodal co-generation problem setting: simultaneously generating both discrete sequence and continuous all-atom structural coordinates.

Read More

Top Ten Stories in AI Writing: Q1, 2025

This year’s first quarter served up a number of watershed moments in the breakneck development of AI writers/chatbots.

ChatGPT continued to turn heads with the announcement by its maker – OpenAI – that 400 million people now visit the ChatGPT Web site every week.

And ChatGPT also unveiled a number of new upgrades – including major advances in AI imaging, editing and overall writing performance for writers.

Meanwhile, a dark horse AI writer/chatbot from China — DeepSeek — stunned the world by releasing a chatbot alternative that was nearly as good as ChatGPT, but only cost pennies on the dollar to make.

Here’s a look at all the stories for Q1 that convinced many writers – as well as those across a wide spectrum of industries — that we are now living in an ‘AI First’ world:

*New Study Finds AI-Powered Writing a Big Hit Among Many White-Collar Pros: Stanford University researchers have found that AI writing is being heavily embraced by many white-collar workers.

Observes writer Matthias Bastian: “The impact is particularly noticeable in press releases, where up to 24% of content now comes from generative AI systems, or shows significant AI modification.

“The researchers suspect that actual AI adoption rates are higher than their analysis suggests.

“It likely missed heavily human-edited content and text from advanced AI models that closely mimic human writing.

“The study also didn’t examine other potential AI writing use cases, such as social media content creation.”

*Give ChatGPT a Standardized Personality – Including One that Edits: ChatGPT has come out with a new feature that enables you to create a standardized personality for the AI.

Essentially, you can now program ChatGPT to assume the personality and skills of a witty copy editor with deep knowledge of AI and a penchant for detail, for example — and rest assured that ChatGPT will assume that personality each time you log on.

Before the new feature, users already had the ability to create the same personality for ChatGPT – but the prompt for the personality needed to be loaded into ChatGPT’s message box before each use.

*’Tweaked’ AI Writing Can Now Be Copyrighted: In a far-reaching decision, the U.S. Copyright Office has ruled that AI-generated content — modified by humans — can now be copyrighted.

The move has incredibly positive ramifications for writers who polish output from ChatGPT and similar AI to create blog posts, articles, books, poetry and more.

Observes writer Jacqueline So: “The U.S. Copyright Office processes approximately 500,000 copyright applications each year, with an increasing number being requests to copyright AI-generated works.”

“Most copyright decisions are made on a case-to-case basis.”

*ChatGPT’s Online Editor Gets an Upgrade: Released just a few months ago, ChatGPT’s online editor ‘Canvas’ just got a performance boost.

The tool — great for polishing up text created with ChatGPT — now runs on OpenAI’s o1, an AI engine that has been hailed for its advanced reasoning capabilities.

Observes writer Eric Hal Schwartz: “You can enable the o1 model in Canvas by selecting it from the model picker or typing the command: /canvas.”

For a comprehensive tour of ChatGPT’s editor, check out: “Ultimate Guide: New ChatGPT Editor, Canvas.”

*ChatGPT Sets New Record: 400 Million Weekly Users: Despite impressive challenges from competitors, ChatGPT still dominates the AI landscape — currently serving 400 million users each week.

Even better: ChatGPT use in business has also doubled in less than six months and is currently used at more than two million enterprises, according to writer Michael Nunez.

Observes Nunez: “The surge in enterprise adoption represents a crucial validation of OpenAI’s strategy to position ChatGPT as not just a chatbot for casual queries, but as a serious productivity tool for businesses.”

*The Number One Users of ChatGPT: Students: OpenAI CEO Sam Altman just disclosed an eye-opening revelation in the Wall Street Journal: most of the people using ChatGPT are students.

Given that 400 million people now visit the ChatGPT Web site every week, that means most of those users – roughly 300 to 350 million people – are students.

The takeaway: While ChatGPT can reduce writing time for simple tasks like email by 90% or more, it’s students – not business pros – who have picked up and run with that realization.

That’s a problem for the lion’s share of business people who ‘get’ that AI writing is not simply coming – it’s here – but have yet to add AI to their toolbox.

Essentially: Colleges in the U.S. alone release 4 million new graduates into the workforce each year.

And you can bet that since 2023 — when ChatGPT became a force to be reckoned with across the globe — most U.S. college graduates walked into their first jobs already knowing how to automate their business writing with AI.

Something tells me their older brothers and sisters have gotten the memo, too.

*AI as Writing Instructor: K-12 Teachers Continue the Experiment: Despite fears that AI will undermine the learning of critical thinking, increasing numbers of teachers are embedding the tech in their day-to-day courses.

Observes writer Kayla Jimenez: “English teachers told USA TODAY they use AI tools to create homework assignments and quizzes. Others said the technology can take the place of a private tutor for their students — which reduces their workloads.”

Overall, 40% of U.S. English teachers have used AI in the classroom, according to a survey of 12,000 teachers and principals conducted by RAND American Educator Panels.

*Apple Kills Its AI News Summary Service: Smarting from glaring mistakes made by its AI news summary service, Apple has pulled the plug on the AI — at least for now.

One of the highest profile news media outlets disenchanted with Apple’s service is the BBC.

Earlier this month, Apple’s AI news summary service mistakenly reported that alleged CEO killer Luigi Mangione had shot himself — wrongly citing the BBC as the source of its summary.

Observes writer Tripp Mickle: “In a note to developers, Apple said it was working to improve summaries of notifications for news and entertainment apps.

“It plans to make the feature available again in a future software update.”

*ChatGPT’s New AI Image-Maker: ‘Astounding:’ ChatGPT’s new AI-image generator – perfect for writers looking to add supplemental images to their copy — has become a viral sensation across the Web.

Quickly embraced by millions of users as AI imaging’s ‘Next Big Thing,’ the new tool has been described as an ‘astounding’ leap forward by Al Samson, a graphic artist with 15+ years’ experience.

Essentially, the new tool features stunning imaging, extreme detail and much more control over the final image users are looking to create, according to Samson.

A few of the near-infinite number of use cases available with the AI imager include:
~precise image rendering in a photo-realistic or illustration style
~the ability to tweak an image of yourself to look ‘more handsome,’ ‘more beautiful’ – or exude more or less of any number of other qualities
~the ability to drop a reliable image of your product into any scene you can imagine
~instant rendering of any image in your brand colors
~instantly recognizable caricatures of celebrities and the famous
~instant creation of a comic strip in your desired style

While not perfect, Samson says the new imaging tool – which replaces ChatGPT imaging that previously ran on the DALL-E AI imaging engine – has grabbed the crown as “the best image-generation tool on the market.”

(Fans of DALL-E can still find that imaging tool in ChatGPT’s “GPTs” section.)

For an informed and nuanced overview of everything ChatGPT’s new imaging tool has to offer, check out Samson’s in-depth, 29-minute video on the upgrade.

*How DeepSeek Outsmarted the Market and Built a Highly Competitive AI Writer/Chatbot: New York Times writer Cade Metz offers an insightful look at how newcomer DeepSeek built its AI for pennies on the dollar.

The chatbot stunned AI researchers — and roiled the stock market in February — after showing the world it could develop advanced AI for six million dollars.

DeepSeek’s secret: moxie. Facing severely restricted access to the bleeding-edge chips needed to develop advanced AI, DeepSeek made up for that deficiency by writing code that was much smarter and more efficient than that of many competitors.

The bonus for consumers: “Because the Chinese start-up has shared its methods with other AI researchers, its technological tricks are poised to significantly reduce the cost of building AI.”

Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.


How can science benefit from AI? Risks?

Researchers from chemistry, biology, and medicine are increasingly turning to AI models to develop new hypotheses. However, it is often unclear on what basis the algorithms reach their conclusions and to what extent those conclusions can be generalized. A publication now warns of misunderstandings in handling artificial intelligence. At the same time, it highlights the conditions under which researchers can most likely have confidence in the models.

Artificial intelligence has potential to aid physician decisions during virtual urgent care

Do physicians or artificial intelligence (AI) offer better treatment recommendations for patients examined through a virtual urgent care setting? A new study shows physicians and AI models have distinct strengths. The study compared initial AI treatment recommendations to final recommendations of physicians who had access to the AI recommendations but may or may not have reviewed them.