
Friction-based landing gear enables drones to safely land on fast-moving vehicles

Drones have become a more common sight in our skies and are used for everything from consumer hobbies like aerial photography to industrial applications such as farming, surveillance and logistics. However, they are not without their shortcomings, and one of those is landing. Almost half of all drone accidents occur when these uncrewed aerial vehicles attempt to touch down, especially in challenging environments or on fast-moving objects. But that could be a thing of the past, as researchers have developed a landing system that lets drones touch down smoothly on vehicles traveling at speed.

The agentic AI shift: From static products to dynamic systems

Agents are here. And they are challenging many of the assumptions software teams have relied on for decades, including the very idea of what a “product” is.

There is a scene in Interstellar where the characters are on a remote, water-covered planet. In the distance, what looks like a mountain range turns out to be enormous waves steadily building and towering over them. With AI, it has felt much the same. A massive wave has been building on the horizon for years.


Generative AI and vibe coding have already shifted how design and development happen. Now, another seismic shift is underway: agentic AI.

The question isn’t if this wave will hit — it already has. The question is how it will reshape the landscape enterprises thought they knew. From the vantage point of the production design team at DataRobot, these changes are reshaping not just how design is done, but also long-held assumptions about what products are and how they are built.

What makes agentic AI different from generative AI

Unlike predictive or generative AI, agents are autonomous. They make decisions, take action, and adapt to new information without constant human prompts. That autonomy is powerful, but it also clashes with the deterministic infrastructure most enterprises rely on.

Deterministic systems expect the same input to deliver the same output every time. Agents are probabilistic: the same input might trigger different paths, decisions, or outcomes. That mismatch creates new challenges around governance, monitoring, and trust.
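
To make that contrast concrete, here is a minimal, illustrative Python sketch (not DataRobot code): the deterministic function maps the same input to the same output every time, while the agent-style step may pick a different tool for the same request on each run, which is why every decision needs to be logged and monitored.

```python
import random

def deterministic_tax(amount: float) -> float:
    """Same input always yields the same output."""
    return round(amount * 0.07, 2)

# Hypothetical tools an agent might choose between for the same request.
TOOLS = {
    "sql_lookup": lambda q: f"ran SQL for: {q}",
    "web_search": lambda q: f"searched the web for: {q}",
    "ask_human": lambda q: f"escalated to a person: {q}",
}

def agent_step(query: str) -> dict:
    """Probabilistic stand-in for an LLM-driven tool choice.

    The same query can take different paths on different runs,
    so every decision is recorded for governance and monitoring.
    """
    tool = random.choice(list(TOOLS))  # an LLM would decide here
    result = TOOLS[tool](query)
    return {"query": query, "tool": tool, "result": result}

if __name__ == "__main__":
    print(deterministic_tax(100.0))                 # always 7.0
    for _ in range(3):
        print(agent_step("Q3 revenue by region"))   # path may differ each run
```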

These aren’t just theoretical concerns; they’re already playing out in enterprise environments.

To help enterprises run agentic systems securely and at scale, DataRobot co-engineered the Agent Workforce Platform with NVIDIA, building on their AI Factory design. In parallel, we co-developed business agents embedded directly into SAP environments.

Together, these efforts enable organizations to operationalize agents securely, at scale, and within the systems they already rely on.

Moving from pilots to production

Enterprises continue to struggle with the gap between experimentation and impact. MIT research recently found that 95% of generative AI pilots fail to deliver measurable results — often stalling when teams try to scale beyond proofs of concept.

Moving from experimentation to production involves significant technical complexity. Rather than expecting customers to build everything from the ground up, DataRobot shifted its approach. 

To use a food analogy: instead of handing customers a pantry of raw ingredients like components and frameworks, the company now delivers meal kits: agent and application templates with prepped components and proven recipes that work out of the box. 

These templates codify best practices across common enterprise use cases. Practitioners can clone them, then swap or extend components using the platform or their preferred tools via API.

The impact: production-ready dashboards and applications in days, not months.

Agent Workforce Platform: Use case–specific templates, AI infrastructure, and front-end integrations.

Changing how practitioners use the platform

This approach is also reshaping how AI practitioners interact with the platform. One of the biggest hurdles is creating front-end interfaces that consume the agents and models: apps for forecasting demand, generating content, retrieving knowledge, or exploring data.

Larger enterprises with dedicated development teams can handle this. But smaller organizations often rely on IT teams or AI experts, and app development is not their core skill. 

To bridge that gap, DataRobot provides customizable reference apps as starting points. These work well when the use case is a close match, but they can be difficult to adapt for more complex or unique requirements.

Practitioners sometimes turn to open-source frameworks like Streamlit, but those often fall short of enterprise requirements for scale, security, and user experience.

To address this, DataRobot is exploring agent-driven approaches, such as supply chain dashboards that use agents to generate dynamic applications. These dashboards include rich visualizations and advanced interface components tailored to specific customer needs, powered by the Agent Workforce Platform on the back end. 

The result is not just faster builds, but interfaces that practitioners without deep app-dev skills can create – while still meeting enterprise standards for scale, security, and user experience.

Agent-driven dashboards bring enterprise-grade design within reach for every team

Balancing control and automation

Agentic AI raises a paradox familiar from the AutoML era. When automation handles the “fun” parts of the work, practitioners can feel sidelined. When it tackles the tedious parts, it unlocks massive value.

DataRobot has seen this tension before. In the AutoML era, automating algorithm selection and feature engineering helped democratize access, but it also left experienced practitioners feeling control was taken away. 

The lesson: automation succeeds when it accelerates expertise by removing tedious tasks, while preserving practitioner control over business logic and workflow design.

This experience shaped how we approach agentic AI: automation should accelerate expertise, not replace it.

Control in practice

This shift towards autonomous systems raises a fundamental question: how much control should be handed to agents, and how much should users retain? At the product level, this plays out in two layers: 

  1. The infrastructure practitioners use to create and govern workflows
  2. The front-end applications people use to consume them. 

Increasingly, customers are building both layers simultaneously, configuring the platform scaffolding while generative agents assemble the React-based applications on top.

Different user expectations

This tension plays out differently for each group:

  • App developers are comfortable with abstraction layers, but still expect to debug and extend when needed.
  • Data scientists want transparency and intervention. 
  • Enterprise IT teams want security, scalability, and systems that integrate with existing infrastructure.
  • Business users just want results. 

Now a new user type has emerged: the agents themselves. 

They act as collaborators in APIs and workflows, forcing a rethink of feedback, error handling, and communication. Designing for all five user types (developers, data scientists, IT teams, business users, and now agents) means governance and UX standards must serve both humans and machines.


Reality and risks

These are not prototypes; they are production applications already serving enterprise customers. Practitioners who may not be expert app developers can now create customer-facing software that handles complex workflows, visualizations, and business logic. 

Agents manage React components, layout, and responsive design, while practitioners focus on domain logic and user workflows.

The same trend is showing up across organizations. Field teams and other non-designers are building demos and prototypes with tools like V0, while designers are starting to contribute production code. This democratization expands who can build, but it also raises new challenges.

Now that anyone can ship production software, enterprises need new mechanisms to safeguard quality, scalability, user experience, brand, and accessibility. Traditional checkpoint-based reviews won’t keep up; quality systems themselves must scale to match the new pace of development.

Example of a field-built app (“Talent forecast”) using the agent-aware design system documentation at DataRobot.

Designing systems, not just products

Agentic AI doesn’t just change how products are built; it changes what a “product” is. Instead of static tools designed for broad use cases, enterprises can now create adaptive systems that generate specific solutions for specific contexts on demand.

This shifts the role of product and design teams. Instead of delivering single products, they architect the systems, constraints, and design standards that agents use to generate experiences. 

To maintain quality at scale, enterprises must prevent design debt from compounding as more teams and agents generate applications.

At DataRobot, the design system has been translated into machine-readable artifacts, including Figma guidelines, component specifications, and interaction principles expressed in markdown. 

By encoding design standards upstream, agents can generate interfaces that remain consistent, accessible, and on-brand with fewer manual reviews that slow innovation.  

Turning design files into agent-aware artifacts ensures every generated application meets enterprise standards for quality and brand consistency.
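
As a rough illustration of the idea (the component names, fields, and thresholds below are hypothetical, not DataRobot’s actual artifacts), design standards encoded as data can be checked programmatically against whatever an agent generates:

```python
# Illustrative only: design standards parsed from machine-readable specs
# could be used to validate agent-generated interface specs before they ship.
DESIGN_STANDARDS = {
    "Button": {
        "allowed_variants": {"primary", "secondary", "ghost"},
        "min_contrast_ratio": 4.5,          # WCAG AA for normal text
        "required_props": {"label", "onClick"},
    },
}

def check_component(spec: dict) -> list[str]:
    """Return a list of violations for one generated component spec."""
    rules = DESIGN_STANDARDS.get(spec["type"])
    if rules is None:
        return [f"unknown component type: {spec['type']}"]
    problems = []
    if spec.get("variant") not in rules["allowed_variants"]:
        problems.append(f"variant '{spec.get('variant')}' is off-brand")
    missing = rules["required_props"] - set(spec.get("props", {}))
    if missing:
        problems.append(f"missing required props: {sorted(missing)}")
    if spec.get("contrast_ratio", 0) < rules["min_contrast_ratio"]:
        problems.append("contrast below accessibility threshold")
    return problems

generated = {"type": "Button", "variant": "primary",
             "props": {"label": "Run forecast", "onClick": "submit"},
             "contrast_ratio": 5.2}
print(check_component(generated) or "meets design standards")
```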

Designing for agents as users

Another shift: agents themselves are now users. They interact with platforms, APIs, and workflows, sometimes more directly than humans. This changes how feedback, error handling, and collaboration are designed. Future-ready platforms will not only optimize for human-computer interaction, but also for human–agent collaboration.
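
One small, concrete example of what that can mean in practice: error responses that carry both a human-readable message and machine-readable fields an agent can branch on, retry against, or escalate. The schema below is a sketch with assumed field names, not a published API.

```python
import json

def build_error(code: str, message: str, *, retryable: bool,
                retry_after_s: int | None = None,
                suggested_action: str | None = None) -> str:
    """Build an error payload readable by both humans and agents."""
    payload = {
        "error": {
            "code": code,                    # stable identifier for agents
            "message": message,              # prose for humans
            "retryable": retryable,
            "retry_after_s": retry_after_s,
            "suggested_action": suggested_action,
        }
    }
    return json.dumps(payload, indent=2)

print(build_error(
    "RATE_LIMITED",
    "Too many forecast requests; try again shortly.",
    retryable=True,
    retry_after_s=30,
    suggested_action="backoff_then_retry",
))
```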

Lessons for design leaders

As boundaries blur, one truth remains: the hard problems are still hard. Agentic AI does not erase these challenges — it makes them more urgent. And it raises the stakes for design quality. When anyone can spin up an app, user experience, quality, governance, and brand alignment become the real differentiators.

The enduring hard problems

  • Understand context: What unmet needs are really being solved?
  • Design for constraints: Will it work with existing architectures?
  • Tie tech to value: Does this address problems that matter to the business?


Principles for navigating the shift

  • Build systems, not just products: Focus on the foundations, constraints, and contexts that allow good experiences to emerge.

  • Exercise judgment: Use AI for speed and execution, but rely on human expertise and craft to decide what’s right.

The blurring boundaries of the product triad.

Riding the wave

As in Interstellar, what once looked like distant mountains turns out to be massive waves. Agentic AI is not on the horizon anymore—it is here. The enterprises that learn to harness it will not just ride the wave. They will shape what comes next.

Learn more about the Agent Workforce Platform and how DataRobot helps enterprises move from AI pilots to production-ready agentic systems.

The post The agentic AI shift: From static products to dynamic systems appeared first on DataRobot.

Scientists accidentally create a tiny “rainbow chip” that could supercharge the internet

Researchers at Columbia have created a chip that turns a single laser into a “frequency comb,” producing dozens of powerful light channels at once. Using a special locking mechanism to clean messy laser light, the team achieved lab-grade precision on a small silicon device. This could drastically improve data center efficiency and fuel innovations in sensing, quantum tech, and LiDAR.

Developing an autonomous crack segmentation and exploration system for civil infrastructure

Identifying cracks is critical for the monitoring of civil infrastructure. To enhance inspection efficiency, a proposed autonomous crack segmentation and exploration system enables the agent to navigate on its own without human operation; the agent successfully captures more than 85% of cracks in the training dataset and achieves 82% crack coverage in the testing dataset.

‘FlyingToolbox’ drone system achieves accurate mid-air tool exchange despite airflow interference

Flying manipulator robots have shown themselves to be useful in many applications, such as industrial maintenance or construction. Their utility in hard-to-reach or hazardous locations makes them particularly promising in applications that put humans at risk. While these machines have been continuously improving over the years, they are still lacking in certain areas.

Women in robotics you need to know about 2025

October 1 was International Women in Robotics Day, and we’re delighted to introduce this year’s edition of “Women in Robotics You Need to Know About”! Robotics is no longer confined to factories or research labs. Today, it’s helping us explore space, care for people, grow food, and connect across the globe. Behind these breakthroughs are women who lead research groups, launch startups, set safety standards, and inspire the next generation. Too often, their contributions remain under-recognized, and this list is one way we can make that work visible.

This year’s list highlights 20 women in robotics you need to know about in 2025. They are professors, engineers, founders, communicators, and project leaders. Some are early in their careers, others have already shaped the field for decades. Their work ranges from tactile sensing that gives robots a human-like sense of touch, to swarm robotics for medicine and the environment, to embodied AI that may one day live in our homes. They come from across the world, including Australia, Brazil, Canada, China, Germany, Spain, Switzerland, the United Kingdom, and the United States. Together, they show us just how wide the world of robotics really is.

We publish this list not only to celebrate their achievements, but also to counter the persistent invisibility of women in robotics. Representation matters. When women’s contributions are overlooked, we risk reinforcing the false perception that robotics is not their domain. The honorees you’ll meet here prove the opposite. They are making discoveries, leading teams, starting companies, writing the standards, and pushing the boundaries of what robots can do.

The 2025 honorees

  • Heba Khamis
    Heba Khamis is a lecturer at UNSW and co-founder of Contactile, a start-up making tactile sensors that give robots a human sense of touch, enabling them to perform difficult material-handling tasks.
  • Kelen Teixeira Vivaldini
    Kelen Teixeira Vivaldini is a professor at UFSCar, Brazil, researching autonomous robots, intelligent systems, and mission planning, with applications in environmental monitoring and inspection.
  • Natalie Panek
    Natalie Panek is a senior engineer in systems design and works in the robotics and automation division of the space technology company MDA. She was named to the Forbes 2015 30 Under 30 list and was one of WXN’s 2014 Top 100 award winners.
  • Joelle Pineau
    Joelle Pineau is a Canadian AI researcher and robotics leader and a professor at McGill. She also served as Meta AI’s vice‑president until 2025. She co‑directs the Reasoning & Learning Lab, co‑founded SmartWheeler and Nursebot and champions reproducible research.
  • Hallie Siegel
    Hallie Siegel is a science communicator and former Robohub editor, building global networks that connect researchers and the public.
  • Xiaorui Zhu
    Xiaorui Zhu co-founded DJI and RoboSense and directs Galaxy AI & Robotics, with award-winning research in UAVs, autonomous driving and mobile robotics.
  • Lijin Aryananda
    Lijin Aryananda is a robotics researcher with 15+ years’ experience in humanoids, automation & medical devices. At ZHAW she develops AI methods for tomography, bridging academia & industry with inclusive leadership.
  • Georgia Chalvatzaki
    Georgia Chalvatzaki, professor at TU Darmstadt and head of the PEARL Lab, advances human-centric robot learning. Her work blends AI and robotics to give mobile manipulators the ability to collaborate safely and intelligently with people.
  • Mar Masulli
    Mar Masulli is CEO & co-founder of BitMetrics, using AI to give robots & machines vision and reasoning. She also serves on the Spanish Robotics Association board.
  • Alona Kharchenko
    Alona Kharchenko is co-founder & CTO of Devanthro, building embodied AI for homes, and was recognized by Forbes as 30 Under 30.
  • Nicole Robinson
    Nicole Robinson is the co-founder of Lyro Robotics, deploying AI pick-and-pack robots for industry.
  • Dimitra Gkatzia
    Dimitra Gkatzia is an associate professor at Edinburgh Napier, advancing natural language generation for human-robot interaction.
  • Sabine Hauert
    Sabine Hauert is a professor at the University of Bristol, co-founder of Robohub, and a pioneer in swarm robotics for nanomedicine and the environment.
  • Monica Anderson
    Monica Anderson is a professor at the University of Alabama, researching distributed autonomy and inclusive human-robot teaming.
  • Shilpa Gulati
    Shilpa Gulati is an experienced engineering leader with over 15 years of experience in building and scaling teams to solve complex problems in Robotics using state-of-the-art technologies.
  • Shuran Song
    Shuran Song is a Stanford professor and robotics researcher, building low-cost systems for robot perception and releasing influential open datasets.
  • Kathryn Zealand
    Kathryn Zealand is co-founder of Skip, building powered clothing, “e-bikes for walking”. She spun the project out of X and has a background in theoretical physics.
  • Ann Virts
    Ann Virts is a NIST project leader developing test methods for mobile and wearable robots, recognized with a U.S. DOC Bronze Medal.
  • Carole Franklin
    Carole Franklin directs standards development for robotics at the Association for Advancing Automation (A3), leading ANSI & ISO robot safety work. With a background at Booz Allen & Ford, she champions safe, effective deployment of robots.
  • Meghan Daley
    Meghan Daley is a NASA project manager who leads teams to develop and integrate simulations for robotic operations to prepare astronauts on the ISS and beyond.

We’ll be spotlighting five honorees each week throughout October, so stay tuned for deeper profiles and stories of their work.

Why it matters

Robotics is not just about technology; it’s about people. By showcasing these individuals, we hope to inspire the next generation, connect the community, and advance the values of diversity and inclusion in STEM.

📢 Join the conversation on social media with #WomenInRobotics and help us celebrate the people making robotics better for everyone.


The article was cross-posted from Women in Robotics. Read the original here.

New Claude Sonnet 4.5: 61% Reliability In Agent Mode

Anthropic is out with an upgrade to its flagship AI that offers 61% reliability when used as an agent for everyday computing tasks.

Essentially, that means when you use Sonnet 4.5 as an agent to complete an assignment featuring multi-step tasks like opening apps, editing files, navigating Web pages and filling out forms, it will complete those assignments for you 61% of the time.

One caveat: That reliability metric – known as the OSWorld-Verified Benchmark – is based on Sonnet 4.5’s performance in a sandbox environment, where researchers pit the AI against a set of pre-programmed, digital encounters that never change.

Out on the Web – where things can get unpredictable very quickly – performance could be worse.

Bottom line: If an AI agent that finishes three-out-of-every-five tasks turns your crank, this could be the AI you’ve been looking for.

In other news and analysis on AI writing:

*LinkedIn’s CEO: ‘I Write Virtually All My Emails With AI Now’: Crediting AI for making him sound ‘super smart’ when it comes to emails, LinkedIn CEO Ryan Roslansky says he writes nearly all of his emails using AI now.

Observes writer Sherin Shibu: “Roslansky, who has led LinkedIn for the past five years, said that using AI is like tapping into ‘a second brain’ personalized just for him.”

*Another ‘AI Writing Humanizer’ Tool Launches: JustDone has just rolled out an ‘AI humanizer’ tool that transforms the sometimes robotic writing of chatbots like ChatGPT into more human-sounding text.

Sounds good in theory.

But truth be told, you can do your own ‘humanizing’ with ChatGPT simply by including writing-style directions in your prompt.

For example: Simply add phrases like, “write in a warm, witty, conversational style” or “write at the level of a college freshman, but be sure to inject plenty of deadpan humor in your writing.”

Essentially: Simply experiment with describing the precise kind of writing you’d like from ChatGPT, and you won’t need to pay for a ‘humanizer.’

That said, for best results, write — and humanize your writing — using ChatGPT-4.0.

The reason: ChatGPT-5 and other chatbots often resist or water down prompting that attempts to alter writing style.
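
For readers who prefer to script this, here is a minimal sketch using the official OpenAI Python SDK, assuming openai>=1.0 and an API key in the environment; the style directions in the system message do the ‘humanizing,’ and the model name is only an example.

```python
# Minimal sketch: style directions in the system message shape the output.
# Assumes the official OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # example model name only
    messages=[
        {"role": "system",
         "content": ("Write in a warm, witty, conversational style at the "
                     "level of a college freshman, with a little deadpan "
                     "humor. Avoid corporate buzzwords.")},
        {"role": "user",
         "content": "Draft a short email declining a meeting invitation."},
    ],
)
print(response.choices[0].message.content)
```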

*New Microsoft 365 ‘Premium’ Tier Promising Advanced AI: Microsoft has rolled out a ‘luxury’ version of its productivity suite, billed at $20/month, that offers:

–Higher usage limits with AI

–GPT-4 image generation from OpenAI

–Deep research, vision and actions

–Standard apps that have been with 365 for years, such as Word, Excel, PowerPoint and Outlook

*OpenAI Launches New Social Media Video App: Video fans just got another text-to-video tool from ChatGPT’s maker – which is designed to compete with the likes of TikTok, Instagram Reels and YouTube Shorts.

The feature setting users’ imaginations ablaze: The ability to drop an image of yourself – or anyone else – into any video the app creates.

Even better: The social media app uses Sora 2, OpenAI’s new video creator, which offers enhanced precision in the creation of complex movement, sound, dialogue and effects for short videos.

*AI Chat, Talking Avatar Style: If chatting with an AI-powered animated character is on your bucket list, Microsoft has the solution.

It’s just rolled out 40 experimental characters you can chat with under its $20/month, Copilot Pro subscription.

Observes writer Lance Whitney: “You can choose from among 40 portraits, all with different genders, races, and nationalities.”

*’Instant Checkout’ Opens for Business in ChatGPT: Now you can buy goods and services while remaining in the ChatGPT app, thanks to a new checkout service from the AI.

Just underway – currently, you can only shop at Etsy in ChatGPT – but the AI’s maker promises to soon onboard Shopify, which would bring more than a million merchants to the new feature.

Observes writer Chance Townsend: “OpenAI also revealed that the underlying technology will be open source to help bring agentic commerce to more merchants and developers.”

*Now AI Reports on Police Bodycam Footage, Too: While scores of police agencies have been using AI to write up standard reports, some have also begun using the tech to report on bodycam footage.

Observes DigWatch: “The tool, Draft One, analyzes Axon body-worn camera footage to generate draft reports for specific calls, including theft, trespassing and DUI incidents.”

*No Good at AI? Hasta La Vista, Baby: Early AI adopter Accenture, a consulting firm, has issued a stern warning to staff – get with the AI program, or get another job.

Observes writer Joe Wilkins: “If Accenture workers fail to appease their overlords, the CEO says they’ll be dumped like yesterday’s trash.

“In their place, the IT firm will hire people who already have the AI ‘skills’ necessary to appease stockholders.”

*AI BIG PICTURE: Trump To Taiwan: Produce 50% of Chips in U.S., or You’re on Your Own: In a move bringing new definition to the phrase ‘heavy-handed,’ U.S. President Donald Trump has told Taiwan it needs to move half of its chip production to the U.S. if it wants U.S. help against a Chinese invasion.

Observes writer Ashley Belanger: “To close the deal with Taiwan, (U.S. Commerce Secretary Howard) Lutnick suggested that the U.S. would offer some kind of security guarantee so that they can expect that moving their supply chain into the U.S. won’t eliminate Taiwan’s so-called silicon shield where countries like the U.S. are willing to protect Taiwan because we need their silicon, their chips, so badly.”

Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com help everyone interested in AI-generated writing.

Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.

Never Miss An Issue
Join our newsletter to be instantly updated when the latest issue of Robot Writers AI publishes
We respect your privacy. Unsubscribe at any time -- we abhor spam as much as you do.

The post New Claude Sonnet 4.5: appeared first on Robot Writers AI.

The Ambient Brain: Why Amazon’s Alexa+ Is the AI We’ve Been Waiting For

For the better part of a decade, digital assistants have been stuck in a state of arrested development. Devices like Alexa, Siri, and Google Assistant have become glorified voice-activated egg timers and music players, adept at simple, one-shot commands but […]

The post The Ambient Brain: Why Amazon’s Alexa+ Is the AI We’ve Been Waiting For appeared first on TechSpective.

Robot Talk Episode 127 – Robots exploring other planets, with Frances Zhu

Claire chatted to Frances Zhu from the Colorado School of Mines about intelligent robotic systems for space exploration.

Frances Zhu has a degree in Mechanical and Aerospace Engineering and a Ph.D. in Aerospace Engineering from Cornell University. She was previously a NASA Space Technology Research Fellow and an Assistant Research Professor in the Hawaii Institute of Geophysics and Planetology at the University of Hawaii, specialising in machine learning, dynamics, systems, and controls engineering. Since 2025, she has been an Assistant Professor at the Colorado School of Mines in the Department of Mechanical Engineering, affiliated with the Robotics program and Space Resources Program.
