Interactive robots should not just be passive companions, but active partners, like therapy horses, say researchers
Congratulations to the #AAMAS2025 best paper, best demo, and distinguished dissertation award winners

The AAMAS 2025 best paper and demo awards were presented at the 24th International Conference on Autonomous Agents and Multiagent Systems, which took place from 19-23 May 2025 in Detroit. The Distinguished Dissertation Award was also recently announced. The winners in the various categories are as follows:
Best Paper Award
Winner
- Soft Condorcet Optimization for Ranking of General Agents, Marc Lanctot, Kate Larson, Michael Kaisers, Quentin Berthet, Ian Gemp, Manfred Diaz, Roberto-Rafael Maura-Rivero, Yoram Bachrach, Anna Koop, Doina Precup
Finalists
- Azorus: Commitments over Protocols for BDI Agents, Amit K. Chopra, Matteo Baldoni, Samuel H. Christie V, Munindar P. Singh
- Curiosity-Driven Partner Selection Accelerates Convention Emergence in Language Games, Chin-Wing Leung, Paolo Turrini, Ann Nowe
- Reinforcement Learning-based Approach for Vehicle-to-Building Charging with Heterogeneous Agents and Long Term Rewards, Fangqi Liu, Rishav Sen, Jose Paolo Talusan, Ava Pettet, Aaron Kandel, Yoshinori Suzue, Ayan Mukhopadhyay, Abhishek Dubey
- Ready, Bid, Go! On-Demand Delivery Using Fleets of Drones with Unknown, Heterogeneous Energy Storage Constraints, Mohamed S. Talamali, Genki Miyauchi, Thomas Watteyne, Micael Santos Couceiro, Roderich Gross
Pragnesh Jay Modi Best Student Paper Award
Winners
- Decentralized Planning Using Probabilistic Hyperproperties, Francesco Pontiggia, Filip Macák, Roman Andriushchenko, Michele Chiari, Milan Ceska
- Large Language Models for Virtual Human Gesture Selection, Parisa Ghanad Torshizi, Laura B. Hensel, Ari Shapiro, Stacy Marsella
Runner-up
- ReSCOM: Reward-Shaped Curriculum for Efficient Multi-Agent Communication Learning, Xinghai Wei, Tingting Yuan, Jie Yuan, Dongxiao Liu, Xiaoming Fu
Finalists
- Explaining Facial Expression Recognition, Sanjeev Nahulanthran, Leimin Tian, Dana Kulic, Mor Vered
- Agent-Based Analysis of Green Disclosure Policies and Their Market-Wide Impact on Firm Behavior, Lingxiao Zhao, Maria Polukarov, Carmine Ventre
Blue Sky Ideas Track Best Paper Award
Winner
- Grounding Agent Reasoning in Image Schemas: A Neurosymbolic Approach to Embodied Cognition, François Olivier, Zied Bouraoui
Finalist
- Towards Foundation-model-based multiagent system to Accelerate AI for social impact, Yunfan Zhao, Niclas Boehmer, Aparna Taneja, Milind Tambe
Best Demo Award
Winner
- Serious Games for Ethical Preference Elicitation, Jayati Deshmukh, Zijie Liang, Vahid Yazdanpanah, Sebastian Stein, Sarvapali Ramchurn
Victor Lesser Distinguished Dissertation Award
The Victor Lesser Distinguished Dissertation Award is given for dissertations in the field of autonomous agents and multiagent systems that show originality, depth, and impact, as well as quality of writing, supported by high-quality publications.
Winner
- Jannik Peters. Thesis title: Facets of Proportionality: Selecting Committees, Budgets, and Clusters
Runner-up
- Lily Xu. Thesis title: High-stakes decisions from low-quality data: AI decision-making for planetary health
Robot morphs midair to switch from flying to rolling on terrain
Mid-air transformation helps flying, rolling robot to transition smoothly
VR could help train employees working with robots
Algorithm improves acoustic sensor accuracy for cheaper underwater robotics
Soft robots can walk themselves out of a 3D printer
Designing Pareto-optimal GenAI workflows with syftr
You’re not short on tools. Or models. Or frameworks.
What you’re short on is a principled way to use them — at scale.
Building effective generative AI workflows, especially agentic ones, means navigating a combinatorial explosion of choices.
Every new retriever, prompt strategy, text splitter, embedding model, or synthesizing LLM multiplies the space of possible workflows, resulting in a search space with over 10²³ possible configurations.
Trial-and-error doesn’t scale. And model-level benchmarks don’t reflect how components behave when stitched into full systems.
That’s why we built syftr — an open source framework for automatically identifying Pareto-optimal workflows across accuracy, cost, and latency constraints.
The complexity behind generative AI workflows
To illustrate how quickly complexity compounds, consider even a relatively simple RAG pipeline like the one shown in Figure 1.
Each component—retriever, prompt strategy, embedding model, text splitter, synthesizing LLM—requires careful selection and tuning. And beyond those decisions, there’s an expanding landscape of end-to-end workflow strategies, from single-agent workflows like ReAct and LATS to multi-agent workflows like CaptainAgent and Magentic-One.
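To make this concrete, here is a minimal sketch of such a pipeline assembled with LlamaIndex (one of the libraries syftr builds on, listed later in this post). The imports assume the llama-index 0.10+ package layout plus the separately installed HuggingFace embedding and OpenAI LLM integrations, and the specific models and parameters are illustrative placeholders rather than syftr defaults; every argument below is one of the knobs that multiplies the search space.

```python
# Minimal RAG pipeline sketch: each constructor argument is a tunable workflow
# choice (embedding model, text splitter, retriever depth, synthesizing LLM).
# Model names and parameters are illustrative, not syftr defaults.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.openai import OpenAI

# Embedding model and text splitter: two of the multiplying choices.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=64)

# Synthesizing LLM: another axis of the search space.
Settings.llm = OpenAI(model="gpt-4o-mini", temperature=0.0)

# Build the index and a simple (non-agentic) query engine.
documents = SimpleDirectoryReader("./corpus").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=5)  # retriever depth

print(query_engine.query("Who won the 2024 championship?"))
```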

What’s missing is a scalable, principled way to explore this configuration space.
That’s where syftr comes in.
Its open source framework uses multi-objective Bayesian Optimization to efficiently search for Pareto-optimal RAG workflows, balancing cost, accuracy, and latency across configurations that would be impossible to test manually.
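Stripped of syftr's own abstractions, the underlying pattern resembles a multi-objective Optuna study: sample a workflow configuration, evaluate it end to end, and report accuracy, cost, and latency as separate objectives. The sketch below is purely illustrative; evaluate_workflow is a made-up stand-in that fabricates numbers rather than syftr's real evaluation harness, and the parameter names are our own.

```python
import optuna


def evaluate_workflow(cfg: dict) -> tuple[float, float, float]:
    """Hypothetical stand-in for a real evaluation: build the workflow described
    by cfg, run it on a QA benchmark, and measure (accuracy, cost, latency).
    Here we fabricate smooth dummy numbers so the sketch runs end to end."""
    big_llm = cfg["llm"] != "gpt-4o-mini"
    accuracy = 0.55 + 0.02 * cfg["top_k"] / 20 + (0.15 if big_llm else 0.0)
    cost = 0.05 * cfg["top_k"] * (4.0 if big_llm else 1.0)        # cents per query
    latency = 0.5 + 0.1 * cfg["top_k"] + (2.0 if cfg["agentic"] else 0.0)
    return accuracy, cost, latency


def objective(trial: optuna.Trial) -> tuple[float, float, float]:
    # Each suggest_* call is one axis of the workflow search space.
    cfg = {
        "llm": trial.suggest_categorical("llm", ["gpt-4o-mini", "gpt-4o", "o3-mini"]),
        "embedding": trial.suggest_categorical("embedding", ["bge-small", "bge-large"]),
        "top_k": trial.suggest_int("top_k", 2, 20),
        "chunk_size": trial.suggest_categorical("chunk_size", [256, 512, 1024]),
        "agentic": trial.suggest_categorical("agentic", [False, True]),
    }
    return evaluate_workflow(cfg)


# Maximize accuracy while minimizing cost and latency; the Bayesian sampler
# models which regions of the space are worth evaluating next.
study = optuna.create_study(
    directions=["maximize", "minimize", "minimize"],
    sampler=optuna.samplers.TPESampler(multivariate=True, seed=0),
)
study.optimize(objective, n_trials=500)   # roughly the per-run budget described below
pareto_workflows = study.best_trials      # the non-dominated (Pareto-optimal) trials
```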
Benchmarking Pareto-optimal workflows with syftr
Once syftr is applied to a workflow configuration space, it surfaces candidate pipelines that achieve strong tradeoffs across key performance metrics.
The example below shows syftr’s output on the CRAG (Comprehensive RAG) Sports benchmark, highlighting workflows that maintain high accuracy while significantly reducing cost.

While Figure 2 shows what syftr can deliver, it’s equally important to understand how those results are achieved.
At the core of syftr is a multi-objective search process designed to efficiently navigate vast workflow configuration spaces. The framework prioritizes both performance and computational efficiency – essential requirements for real-world experimentation at scale.

Since evaluating every workflow in this space isn’t feasible, we typically evaluate around 500 workflows per run.
To make this process even more efficient, syftr includes a novel early stopping mechanism — Pareto Pruner — which halts evaluation of workflows that are unlikely to improve the Pareto frontier. This significantly reduces computational cost and search time while preserving result quality.
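The intuition is ordinary Pareto dominance: if a workflow's optimistic best-case metrics are already dominated by a point on the current frontier, completing its evaluation cannot change the frontier. The snippet below illustrates that dominance test generically; it is not syftr's actual Pareto Pruner implementation.

```python
# Illustrative Pareto-dominance check (not syftr's actual Pareto Pruner).
# Convention: every objective is expressed as "lower is better"
# (e.g. use error = 1 - accuracy alongside cost and latency).

def dominates(a: tuple[float, ...], b: tuple[float, ...]) -> bool:
    """True if point a is at least as good as b on every objective
    and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def should_prune(partial_best_case: tuple[float, ...],
                 frontier: list[tuple[float, ...]]) -> bool:
    """Stop evaluating a workflow early if even its most optimistic projected
    metrics are dominated by an existing frontier point."""
    return any(dominates(p, partial_best_case) for p in frontier)


# Frontier points as (error, cost_cents, latency_s).
frontier = [(0.10, 8.0, 2.0), (0.18, 1.5, 1.0)]
candidate_best_case = (0.15, 9.0, 2.5)   # optimistic projection from a partial evaluation
print(should_prune(candidate_best_case, frontier))  # True: (0.10, 8.0, 2.0) dominates it
```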
Why current benchmarks aren’t enough
While model benchmarks, like MMLU, LiveBench, Chatbot Arena, and the Berkeley Function-Calling Leaderboard, have advanced our understanding of isolated model capabilities, foundation models rarely operate alone in real-world production environments.
Instead, they’re typically one component — albeit an essential one — within larger, sophisticated AI systems.
Measuring intrinsic model performance is essential, but it leaves open critical system-level questions:
- How do you construct a workflow that meets task-specific goals for accuracy, latency, and cost?
- Which models should you use—and in which parts of the pipeline?
syftr addresses this gap by enabling automated, multi-objective evaluation across entire workflows.
It captures nuanced tradeoffs that emerge only when components interact within a broader pipeline, and systematically explores configuration spaces that are otherwise impractical to evaluate manually.
Inspiration and related work
syftr is the first open-source framework specifically designed to automatically identify Pareto-optimal generative AI workflows that balance multiple competing objectives simultaneously — not just accuracy, but latency and cost as well.
It draws inspiration from existing research, including:
- AutoRAG, which focuses solely on optimizing for accuracy
- Kapoor et al.’s work, AI Agents That Matter, which emphasizes cost-controlled evaluation to prevent incentivizing overly costly, leaderboard-focused agents. This principle serves as one of our core research inspirations.
Importantly, syftr is also orthogonal to LLM-as-optimizer frameworks like Trace and TextGrad, and generic flow optimizers like DSPy. Such frameworks can be combined with syftr to further optimize prompts in workflows.
In early experiments, syftr first identified Pareto-optimal workflows on the CRAG Sports benchmark.
We then applied Trace to optimize prompts across all of those configurations — taking a two-stage approach: multi-objective workflow search followed by fine-grained prompt tuning.
The result: notable accuracy improvements, especially in low-cost workflows that initially exhibited lower accuracy (those clustered in the lower-left of the Pareto frontier). These gains suggest that post-hoc prompt optimization can meaningfully boost performance, even in highly cost-constrained settings.
This two-stage approach — first multi-objective configuration search, then prompt refinement — highlights the benefits of combining syftr with specialized downstream tools, enabling modular and flexible workflow optimization strategies.

Building and extending syftr’s search space
syftr cleanly separates the workflow search space from the underlying optimization algorithm. This modular design enables users to easily extend or customize the space, adding or removing flows, models, and components by editing configuration files.
The default implementation uses Multi-Objective Tree-of-Parzen-Estimators (MOTPE), but syftr supports swapping in other optimization strategies.
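In Optuna terms, swapping strategies amounts to constructing the study with a different sampler. A brief sketch follows; the sampler choices here are examples of what Optuna itself provides, not an exhaustive list of what syftr supports.

```python
import optuna

# TPE performs multi-objective (MOTPE-style) sampling when the study
# declares several objectives.
bayesian = optuna.samplers.TPESampler(multivariate=True, seed=0)

# One possible alternative: an evolutionary multi-objective sampler.
evolutionary = optuna.samplers.NSGAIISampler(population_size=50, seed=0)

study = optuna.create_study(
    directions=["maximize", "minimize", "minimize"],   # accuracy, cost, latency
    sampler=evolutionary,                              # swap samplers here
)
```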
Contributions of new flows, modules, or algorithms are welcomed via pull request at github.com/datarobot/syftr.

Built on the shoulders of open source
syftr builds on a number of powerful open source libraries and frameworks:
- Ray for distributing and scaling search over large clusters of CPUs and GPUs
- Ray Serve for autoscaling model hosting
- Optuna for its flexible define-by-run interface (similar to PyTorch’s eager execution) and support for state-of-the-art multi-objective optimization algorithms
- LlamaIndex for building sophisticated agentic and non-agentic RAG workflows
- HuggingFace Datasets for a fast, collaborative, and uniform dataset interface
- Trace for optimizing textual components within workflows, such as prompts
syftr is framework-agnostic: workflows can be constructed using any orchestration library or modeling stack. This flexibility allows users to extend or adapt syftr to fit a wide variety of tooling preferences.
Case study: syftr on CRAG Sports
Benchmark setup
The CRAG benchmark dataset was introduced by Meta for the KDD Cup 2024 and includes three tasks:
- Task 1: Retrieval summarization
- Task 2: Knowledge graph and web retrieval
- Task 3: End-to-end RAG
syftr was evaluated on Task 3 (CRAG3), which includes 4,400 QA pairs spanning a wide range of topics. The official benchmark performs RAG over 50 webpages retrieved for each question.
To increase difficulty, we combined all webpages across all questions into a single corpus, creating a more realistic, challenging retrieval setting.

Note: Amazon Q uses per-user, per-month pricing, which differs from the per-query, token-based cost estimates used for syftr workflows.
Key observations and insights
Across datasets, syftr consistently surfaces meaningful optimization patterns:
- Non-agentic workflows dominate the Pareto frontier. They’re faster and cheaper, leading the optimizer to favor these configurations more frequently than agentic ones.
- GPT-4o-mini frequently appears in Pareto-optimal flows, suggesting it offers a strong balance of quality and cost as a synthesizing LLM.
- Reasoning models like o3-mini perform well on quantitative tasks (e.g., FinanceBench, InfiniteBench), likely due to their multi-hop reasoning capabilities.
- Pareto frontiers eventually flatten after an initial rise, with diminishing returns in accuracy relative to steep cost increases, underscoring the need for tools like syftr that help pinpoint efficient operating points.
We routinely find that the workflow at the knee point of the Pareto frontier loses just a few percentage points in accuracy compared to the most accurate setup — while being 10x cheaper.
syftr makes it easy to find that sweet spot.
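One simple way to locate such a knee, given a frontier of (cost, accuracy) points, is to normalize both axes and take the point farthest from the straight line joining the cheapest and the most accurate workflows. The sketch below is a generic heuristic of that kind, not a syftr API.

```python
# Generic knee-point heuristic for an accuracy-vs-cost Pareto frontier
# (not a syftr API): normalize both axes, then pick the interior frontier
# point farthest from the chord between the cheapest and most accurate points.

def knee_point(frontier: list[tuple[float, float]]) -> tuple[float, float]:
    """frontier: (cost, accuracy) points assumed to be Pareto-optimal."""
    pts = sorted(frontier)                      # sort by increasing cost
    (c0, a0), (c1, a1) = pts[0], pts[-1]
    span_c, span_a = (c1 - c0) or 1.0, (a1 - a0) or 1.0

    def dist(p: tuple[float, float]) -> float:
        # Perpendicular distance to the chord, which becomes y = x
        # after normalizing both axes to [0, 1].
        x, y = (p[0] - c0) / span_c, (p[1] - a0) / span_a
        return abs(x - y) / 2 ** 0.5

    return max(pts[1:-1] or pts, key=dist)


# Toy frontier: (cost in cents per query, accuracy).
frontier = [(0.5, 0.55), (1.0, 0.82), (3.0, 0.84), (10.0, 0.85)]
print(knee_point(frontier))  # (1.0, 0.82): ~10x cheaper than the most accurate
                             # point, for 3 accuracy points less
```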
Cost of running syftr
In our experiments, we allocated a budget of ~500 workflow evaluations per task. Although exact costs vary based on the dataset and search space complexity, we consistently identified strong Pareto frontiers with a one-time search cost of approximately $500 per use case.
We expect this cost to decrease as more efficient search algorithms and space definitions are developed.
Importantly, this initial investment is minimal relative to the long-term gains from deploying optimized workflows, whether through reduced compute usage, improved accuracy, or better user experience in high-traffic systems.
For detailed results across six benchmark tasks, including datasets beyond CRAG, refer to the full syftr paper.
Getting started and contributing
To get started with syftr, clone or fork the repository on GitHub. Benchmark datasets are available on HuggingFace, and syftr also supports user-defined datasets for custom experimentation.
The current search space includes:
- 9 proprietary LLMs
- 11 embedding models
- 4 general prompt strategies
- 3 retrievers
- 4 text splitters (with parameter configurations)
- 4 agentic RAG flows and 1 non-agentic RAG flow, each with associated hierarchical hyperparameters
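For a rough sense of scale, multiplying just these top-level choices already yields about 24,000 combinations; the hierarchical and continuous hyperparameters attached to each component and flow are what push the full space toward the 10²³ configurations mentioned earlier. A back-of-envelope sketch:

```python
import math

# Top-level component counts taken from the list above.
top_level = {
    "synthesizing_llm": 9,
    "embedding_model": 11,
    "prompt_strategy": 4,
    "retriever": 3,
    "text_splitter": 4,
    "rag_flow": 5,  # 4 agentic + 1 non-agentic
}
print(math.prod(top_level.values()))  # 23760 top-level combinations

# The rest of the gap to ~10^23 comes from the hierarchical hyperparameters
# attached to each component and flow (chunk sizes, top-k values, temperatures,
# and so on) once discretized, which is why exhaustive or trial-and-error
# search is hopeless and a guided, budgeted search is needed.
```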
New components, such as models, flows, or search modules, can be added or modified via configuration files. Detailed walkthroughs are available to support customization.
syftr is developed fully in the open. We welcome contributions via pull requests, feature proposals, and benchmark reports. We’re particularly interested in ideas that advance the research direction or improve the framework’s extensibility.
What’s ahead for syftr
syftr is still evolving, with several active areas of research designed to extend its capabilities and practical impact:
- Meta-learning
Currently, each search is performed from scratch. We’re exploring meta-learning techniques that leverage prior runs across similar tasks to accelerate and guide future searches.
- Multi-agent workflow evaluation
While multi-agent systems are gaining traction, they introduce additional complexity and cost. We’re investigating how these workflows compare to single-agent and non-agentic pipelines, and when their tradeoffs are justified.
- Composability with prompt optimization frameworks
syftr is complementary to tools like DSPy, Trace, and TextGrad, which optimize textual components within workflows. We’re exploring ways to more deeply integrate these systems to jointly optimize structure and language.
- More agentic tasks
We started with question-answer tasks, a critical production use case for agents. Next, we plan to rapidly expand syftr’s task repertoire to code generation, data analysis, and interpretation. We also invite the community to suggest additional tasks for syftr to prioritize.
As these efforts progress, we aim to expand syftr’s value as a research tool, a benchmarking framework, and a practical assistant for system-level generative AI design.
If you’re working in this space, we welcome your feedback, ideas, and contributions.
Try the code, read the research
To explore syftr further, check out the GitHub repository or read the full paper on ArXiv for details on methodology and results.
syftr has been accepted to appear at the International Conference on Automated Machine Learning (AutoML) in September 2025 in New York City.
We look forward to seeing what you build and discovering what’s next, together.
The post Designing Pareto-optimal GenAI workflows with syftr appeared first on DataRobot.
How AI, Robotics, and Automation Power Next Generation Pack Assembly
Legal Considerations for Robots-as-a-Service (RaaS) in Warehousing
Congratulations to the #ICRA2025 best paper award winners

The 2025 IEEE International Conference on Robotics and Automation (ICRA) best paper winners and finalists in the various categories have been announced. The recipients were revealed during an award ceremony at the conference, which took place from 19-23 May in Atlanta, USA.
IEEE ICRA Best Paper Award on Robot Learning
Winner
- *Robo-DM: Data Management for Large Robot Datasets, Kaiyuan Chen, Letian Fu, David Huang, Yanxiang Zhang, Yunliang Lawrence Chen, Huang Huang, Kush Hari, Ashwin Balakrishna, Ted Xiao, Pannag Sanketi, John Kubiatowicz, Ken Goldberg
Finalists
- Achieving Human Level Competitive Robot Table Tennis, David D’Ambrosio, Saminda Wishwajith Abeyruwan, Laura Graesser, Atil Iscen, Heni Ben Amor, Alex Bewley, Barney J. Reed, Krista Reymann, Leila Takayama, Yuval Tassa, Krzysztof Choromanski, Erwin Coumans, Deepali Jain, Navdeep Jaitly, Natasha Jaques, Satoshi Kataoka, Yuheng Kuang, Nevena Lazic, Reza Mahjourian, Sherry Moore, Kenneth Oslund, Anish Shankar, Vikas Sindhwani, Vincent Vanhoucke, Grace Vesom, Peng Xu, Pannag Sanketi
- *No Plan but Everything under Control: Robustly Solving Sequential Tasks with Dynamically Composed Gradient Descent, Vito Mengers, Oliver Brock
IEEE ICRA Best Paper Award in Field and Service Robotics
Winner
- *PolyTouch: A Robust Multi-Modal Tactile Sensor for Contact-Rich Manipulation Using Tactile-Diffusion Policies, Jialiang Zhao, Naveen Kuppuswamy, Siyuan Feng, Benjamin Burchfiel, Edward Adelson
Finalists
- A New Stereo Fisheye Event Camera for Fast Drone Detection and Tracking, Daniel Rodrigues Da Costa, Maxime Robic, Pascal Vasseur, Fabio Morbidi
- *Learning-Based Adaptive Navigation for Scalar Field Mapping and Feature Tracking, Jose Fuentes, Paulo Padrao, Abdullah Al Redwan Newaz, Leonardo Bobadilla
IEEE ICRA Best Paper Award on Human-Robot Interaction
Winner
- *Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition, Shengcheng Luo, Quanquan Peng, Jun Lv, Kaiwen Hong, Katherine Driggs-Campbell, Cewu Lu, Yong-Lu Li
Finalists
- *To Ask or Not to Ask: Human-In-The-Loop Contextual Bandits with Applications in Robot-Assisted Feeding, Rohan Banerjee, Rajat Kumar Jenamani, Sidharth Vasudev, Amal Nanavati, Katherine Dimitropoulou, Sarah Dean, Tapomayukh Bhattacharjee
- *Point and Go: Intuitive Reference Frame Reallocation in Mode Switching for Assistive Robotics, Allie Wang, Chen Jiang, Michael Przystupa, Justin Valentine, Martin Jagersand
IEEE ICRA Best Paper Award on Mechanisms and Design
Winner
- Individual and Collective Behaviors in Soft Robot Worms Inspired by Living Worm Blobs, Carina Kaeser, Junghan Kwon, Elio Challita, Harry Tuazon, Robert Wood, Saad Bhamla, Justin Werfel
Finalists
- *Informed Repurposing of Quadruped Legs for New Tasks, Fuchen Chen, Daniel Aukes
- *Intelligent Self-Healing Artificial Muscle: Mechanisms for Damage Detection and Autonomous, Ethan Krings, Patrick Mcmanigal, Eric Markvicka
IEEE ICRA Best Paper Award on Planning and Control
Winner
- *No Plan but Everything under Control: Robustly Solving Sequential Tasks with Dynamically Composed Gradient Descent, Vito Mengers, Oliver Brock
Finalists
- *SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models, Yi Wu, Zikang Xiong, Yiran Hu, Shreyash Sridhar Iyengar, Nan Jiang, Aniket Bera, Lin Tan, Suresh Jagannathan
- *Marginalizing and Conditioning Gaussians Onto Linear Approximations of Smooth Manifolds with Applications in Robotics, Zi Cong Guo, James Richard Forbes, Timothy Barfoot
IEEE ICRA Best Paper Award in Robot Perception
Winner
- *MAC-VO: Metrics-Aware Covariance for Learning-Based Stereo Visual Odometry, Yuheng Qiu, Yutian Chen, Zihao Zhang, Wenshan Wang, Sebastian Scherer
Finalists
- *Ground-Optimized 4D Radar-Inertial Odometry Via Continuous Velocity Integration Using Gaussian Process, Wooseong Yang, Hyesu Jang, Ayoung Kim
- *UAD: Unsupervised Affordance Distillation for Generalization in Robotic Manipulation, Yihe Tang, Wenlong Huang, Yingke Wang, Chengshu Li, Roy Yuan, Ruohan Zhang, Jiajun Wu, Li Fei-Fei
IEEE ICRA Best Paper Award in Robot Manipulation and Locomotion
Winner
- *D(R, O) Grasp: A Unified Representation of Robot and Object Interaction for Cross-Embodiment Dexterous Grasping, Zhenyu Wei, Zhixuan Xu, Jingxiang Guo, Yiwen Hou, Chongkai Gao, Zhehao Cai, Jiayu Luo, Lin Shao
Finalists
- *Full-Order Sampling-Based MPC for Torque-Level Locomotion Control Via Diffusion-Style Annealing, Haoru Xue, Chaoyi Pan, Zeji Yi, Guannan Qu, Guanya Shi
- *TrofyBot: A Transformable Rolling and Flying Robot with High Energy Efficiency, Mingwei Lai, Yuqian Ye, Hanyu Wu, Chice Xuan, Ruibin Zhang, Qiuyu Ren, Chao Xu, Fei Gao, Yanjun Cao
IEEE ICRA Best Paper Award in Automation
Winner
- *Physics-Aware Robotic Palletization with Online Masking Inference, Tianqi Zhang, Zheng Wu, Yuxin Chen, Yixiao Wang, Boyuan Liang, Scott Moura, Masayoshi Tomizuka, Mingyu Ding, Wei Zhan
Finalists
- *In-Plane Manipulation of Soft Micro-Fiber with Ultrasonic Transducer Array and Microscope, Jieyun Zou, Siyuan An, Mingyue Wang, Jiaqi Li, Yalin Shi, You-Fu Li, Song Liu
- *A Complete and Bounded-Suboptimal Algorithm for a Moving Target Traveling Salesman Problem with Obstacles in 3D, Anoop Bhat, Geordan Gutow, Bhaskhar Vundurthy, Zhongqiang, Sivakumar Rathinam, Howie Choset
IEEE ICRA Best Paper Award in Medical Robotics
Winner
- *In-Vivo Tendon-Driven Rodent Ankle Exoskeleton System for Sensorimotor Rehabilitation, Juwan Han, Seunghyeon Park, Keehon Kim
Finalists
- *Image-Based Compliance Control for Robotic Steering of a Ferromagnetic Guidewire, An Hu, Chen Sun, Adam Dmytriw, Nan Xiao, Yu Sun
- *AutoPeel: Adhesion-Aware Safe Peeling Trajectory Optimization for Robotic Wound Care, Xiao Liang, Youcheng Zhang, Fei Liu, Florian Richter, Michael C. Yip
IEEE ICRA Best Paper Award on Multi-Robot Systems
Winner
- *Deploying Ten Thousand Robots: Scalable Imitation Learning for Lifelong Multi-Agent Path Finding, He Jiang, Yutong Wang, Rishi Veerapaneni, Tanishq Harish Duhan, Guillaume Adrien Sartoretti, Jiaoyang Li
Finalists
- Distributed Multi-Robot Source Seeking in Unknown Environments with Unknown Number of Sources, Lingpeng Chen, Siva Kailas, Srujan Deolasee, Wenhao Luo, Katia Sycara, Woojun Kim
- *Multi-Nonholonomic Robot Object Transportation with Obstacle Crossing Using a Deformable Sheet, Weijian Zhang, Charlie Street, Masoumeh Mansouri
IEEE ICRA Best Conference Paper Award
Winners
- *Marginalizing and Conditioning Gaussians Onto Linear Approximations of Smooth Manifolds with Applications in Robotics, Zi Cong Guo, James Richard Forbes, Timothy Barfoot
- *MAC-VO: Metrics-Aware Covariance for Learning-Based Stereo Visual Odometry, Yuheng Qiu, Yutian Chen, Zihao Zhang, Wenshan Wang, Sebastian Scherer
In addition to the papers listed above, these papers were also finalists for the IEEE ICRA Best Conference Paper Award.
Finalists
- *MiniVLN: Efficient Vision-And-Language Navigation by Progressive Knowledge Distillation, Junyou Zhu, Yanyuan Qiao, Siqi Zhang, Xingjian He, Qi Wu, Jing Liu
- *RoboCrowd: Scaling Robot Data Collection through Crowdsourcing, Suvir Mirchandani, David D. Yuan, Kaylee Burns, Md Sazzad Islam, Zihao Zhao, Chelsea Finn, Dorsa Sadigh
- How Sound-Based Robot Communication Impacts Perceptions of Robotic Failure, Jai’La Lee Crider, Rhian Preston, Naomi T. Fitter
- *Obstacle-Avoidant Leader Following with a Quadruped Robot, Carmen Scheidemann, Lennart Werner, Victor Reijgwart, Andrei Cramariuc, Joris Chomarat, Jia-Ruei Chiu, Roland Siegwart, Marco Hutter
- *Dynamic Tube MPC: Learning Error Dynamics with Massively Parallel Simulation for Robust Safety in Practice, William Compton, Noel Csomay-Shanklin, Cole Johnson, Aaron Ames
- *Bat-VUFN: Bat-Inspired Visual-And-Ultrasound Fusion Network for Robust Perception in Adverse Conditions, Gyeongrok Lim, Jeong-ui Hong, Min Hyeon Bae
- *TinySense: A Lighter Weight and More Power-Efficient Avionics System for Flying Insect-Scale Robots, Zhitao Yu, Josh Tran, Claire Li, Aaron Weber, Yash P. Talwekar, Sawyer Fuller
- *TSCLIP: Robust CLIP Fine-Tuning for Worldwide Cross-Regional Traffic Sign Recognition, Guoyang Zhao, Fulong Ma, Weiqing Qi, Chenguang Zhang, Yuxuan Liu, Ming Liu, Jun Ma
- *Geometric Design and Gait Co-Optimization for Soft Continuum Robots Swimming at Low and High Reynolds Numbers, Yanhao Yang, Ross Hatton
- *ShadowTac: Dense Measurement of Shear and Normal Deformation of a Tactile Membrane from Colored Shadows, Giuseppe Vitrani, Basile Pasquale, Michael Wiertlewski
- *Occlusion-aware 6D Pose Estimation with Depth-guided Graph Encoding and Cross-semantic Fusion for Robotic Grasping, Jingyang Liu, Zhenyu Lu, Lu Chen, Jing Yang, Chenguang Yang
- *Stable Tracking of Eye Gaze Direction During Ophthalmic Surgery, Tinghe Hong, Shenlin Cai, Boyang Li, Kai Huang
- *Configuration-Adaptive Visual Relative Localization for Spherical Modular Self-Reconfigurable Robots, Yuming Liu, Qiu Zheng, Yuxiao Tu, Yuan Gao, Guanqi Liang, Tin Lun Lam
- *Realm: Real-Time Line-Of-Sight Maintenance in Multi-Robot Navigation with Unknown Obstacles, Ruofei Bai, Shenghai Yuan, Kun Li, Hongliang Guo, Wei-Yun Yau, Lihua Xie
IEEE ICRA Best Student Paper Award
Winners
- *Deploying Ten Thousand Robots: Scalable Imitation Learning for Lifelong Multi-Agent Path Finding, He Jiang, Yutong Wang, Rishi Veerapaneni, Tanishq Harish Duhan, Guillaume Adrien Sartoretti, Jiaoyang Li
- *ShadowTac: Dense Measurement of Shear and Normal Deformation of a Tactile Membrane from Colored Shadows, Giuseppe Vitrani, Basile Pasquale, Michael Wiertlewski
- *Point and Go: Intuitive Reference Frame Reallocation in Mode Switching for Assistive Robotics, Allie Wang, Chen Jiang, Michael Przystupa, Justin Valentine, Martin Jagersand
- *TinySense: A Lighter Weight and More Power-Efficient Avionics System for Flying Insect-Scale Robots, Zhitao Yu, Josh Tran, Claire Li, Aaron Weber, Yash P. Talwekar, Sawyer Fuller
Note: papers with an * were eligible for the IEEE ICRA Best Student Paper Award.
ChatGPT Competitor Abandons Chatbot Updates
The newest update to Claude reveals that its maker is no longer interested in chasing ChatGPT with continual updates in the AI chatbot market.
Instead, the ChatGPT competitor, now in version 4, has been reinvented to focus more on computer coding, deep research, and other complex tasks, according to writer Hayden Field.
Observes Field: “Anthropic said Claude Opus 4 is the ‘best coding model in the world’ and could autonomously work for nearly a full corporate workday — seven hours.”
The move will most likely come as a great disappointment to a number of writers who currently prefer working with Claude in chatbot mode.
In other news and analysis on AI writing:
*New Claude and Sonnet ‘Great for Research Tasks:’ The latest updates to Anthropic’s AI engines are clocking great advances when it comes to deep research, according to the editors of “The Neuron,” an AI newsletter.
Observe the editors: “These models can also now ‘think’ while using tools like Web search, work on tasks for hours without losing focus – and even keep notes about what they’re doing.”
*The AI Research Gains for Writers Keep Coming: Google just announced a new, experimental research mode for its Google Gemini 2.5 Pro chatbot that goes beyond the AI’s current research capabilities.
Dubbed ‘Deep Think,’ the new mode is designed to enable Gemini to consider multiple hypotheses before responding to a question or research task.
*Oops: Chicago Sun-Times Publishes AI-Generated Gibberish: In yet another egg-on-my-face AI moment, a Chicago newspaper has published an AI guide to summer fun that features made-up books and experts.
According to writer Mia Sato, the AI-generated, hallucinatory guide was created by Hearst Media and then published by the Chicago Sun-Times without so much as a quick glance to verify accuracy.
‘Facts-take-a-holiday’ moments in the guide include the non-existent book “Nightshade Market,” the non-existent food expert Dr. Catherine Frost, and the non-existent professor of leisure studies Dr. Jennifer Campos.
*Google Promising Enhanced Auto-Replies for Gmail in Q3: Google’s ‘Smart Replies’ – an AI feature for Gmail that auto-generates email replies for users – will get a power-boost by this fall, according to writer Ayushmann Chawla.
Observes Chawla: “The update promises more personalized, context-rich suggestions by pulling data not just from your Gmail thread, but also from your Google Drive, calendar and other linked Workspace tools.”
Even better: Google is also promising that the writing tone of those Gmail auto-replies will be modulated based on your relationship with the recipient, according to Chawla.
*Google Search Releases New ‘AI Mode:’ Google is promising to roll out a new way to search the Web with an ‘AI Mode’ that combines the power of the Google search engine with the AI powers of the Google chatbot, Gemini.
Observes Elizabeth Reid, VP, head of search, Google: “Under the hood, AI Mode uses our query fan-out technique, breaking down your question into subtopics and issuing a multitude of queries simultaneously on your behalf.
“This enables Search to dive deeper into the web than a traditional search on Google, helping you discover even more of what the web has to offer and find incredible, hyper-relevant content that matches your question.”
U.S. Google Chrome users should already be able to find the ‘AI Mode’ button just beneath the search box on Google.
*New AI Voice Researcher Interviews Thousands Simultaneously: In one of those collective ‘gulp!’ moments among journalists worldwide, new AI has emerged that’s designed to:
–interview thousands by AI voice simultaneously
–auto-analyze all responses gleaned from those interviews in real-time to extract trends and actionable insights
–archive everything it finds, hears and opines about for easy reference
While the product, dubbed ‘Hey Marvin,’ is designed to solicit customer feedback, it can also be used to conduct multiple interviews for news stories.
Observes Prayag Narula, CEO, Hey Marvin: “What makes it so powerful is that it enables free-flowing, qualitative, engaging conversations — but on demand and at scale.
“We’re talking hundreds, even thousands of people — something that was previously only seen at large scale using a small army of volunteers in moments like presidential elections.”
*ChatGPT Competitor MS Copilot Gets Image Generation Upgrade: Microsoft Copilot – which runs on AI engines like ChatGPT and similar – has added advanced ChatGPT-4o AI imaging to its feature set.
Observes writer Kevin Okemwa: “OpenAI’s GPT-4o model brings a plethora of new image generation capabilities to Microsoft Copilot.”
Adds Okemwa: Those include “the capability to edit your creations, transform an existing image’s style, generate photorealistic images, render accurate and readable text, and follow complex directions.”
ChatGPT’s maker released the advanced image maker back in March.
*Microsoft Promises Access to 11,000+ More Open Source ChatGPT Competitors: Writers and others looking to try out alternate – and often cheaper – ChatGPT competitors should be cheered by Microsoft’s decision to feature many of those in its Azure AI Foundry.
The tech titan just cut a deal with Hugging Face – a repository of nearly two million open source AI engines – to feature 11,000+ of those AI engines on Microsoft Azure, according to writer Ankush Das.
Says Asha Sharma, a VP at Microsoft: “We’re giving developers the freedom to pick the best model for the job — and helping organizations innovate safely and at scale.”
*AI Big Picture: Microsoft Releases 50+ AI Tools to Help Build the ‘Agentic Web:’ Writers and others looking to build AI agents to perform deep research and similar tasks on the Web will want to take a look at new tools Microsoft has designed for those tasks.
Observes writer Michael Nunez: “Microsoft launched a comprehensive strategy to position itself at the center of what it calls the ‘open agentic Web’ at its annual Build conference this morning, introducing dozens of AI tools and platforms designed to help developers create autonomous systems that can make decisions and complete tasks with limited human intervention.”

–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
The post ChatGPT Competitor Abandons Chatbot Updates appeared first on Robot Writers AI.
#ICRA2025 social media round-up

The 2025 IEEE International Conference on Robotics & Automation (ICRA) took place from 19–23 May, in Atlanta, USA. The event featured plenary and keynote sessions, tutorials and workshops, forums, and a community day. Find out what the participants got up to during the conference.
Check out what’s happening at the #ICRA2025 Welcome Reception! pic.twitter.com/w66IQDFsku
— IEEE ICRA (@ieee_ras_icra) May 19, 2025
The excitement is real — #ICRA2025 is already buzzing! pic.twitter.com/DtVgLwiaTB
— IEEE ICRA (@ieee_ras_icra) May 19, 2025
#ICRA #ICRA2025 #RoboticsInAfrica
— Black in Robotics (@blackinrobotics.bsky.social) 18 May 2025 at 23:22
At #ICRA2025? Check out my student Yi Wu’s talk (TuCT1.4) at 3:30PM Tuesday in Room 302 at the Award Finalists 3 Session about how SELP Generates Safe and Efficient Plans for #Robot #Agents with #LLMs! #ConstrainedDecoding #LLMPlanner
@purduecs.bsky.social
@cerias.bsky.social — Lin Tan (@lin-tan.bsky.social) 19 May 2025 at 13:25
My MS student, Robel Mamo, is presenting his poster at #ICRA2025 #Field_Robotics workshop. His work is on "Crop-Aligned Cutout," a novel data augmentation method for under-canopy navigation
@ieee_ras_icra @kennesawstate @KSUresearch pic.twitter.com/CBewagkpaQ
— Taeyeong Choi (최태영) (@ssuty) May 19, 2025
#ICRA2025 pic.twitter.com/FRfqgmSqNd
— Masato Kobayashi @るっと
(@MeRTcooking) May 21, 2025
Malte Mosbach will present today 16:45 at #ICRA2025 in room 404 our paper:
"Prompt-responsive Object Retrieval with Memory-augmented Student-Teacher Learning"
www.ais.uni-bonn.de/videos/ICRA_… — Sven Behnke (@sven-behnke.bsky.social) 20 May 2025 at 15:57
#ICRA2025 pic.twitter.com/ANKoq3ry5K
— Masato Kobayashi @るっと
(@MeRTcooking) May 21, 2025
Tomorrow morning at #ICRA2025, I will be presenting our findings on whether robots can learn dual-arm tasks from just a single demonstration. (Spoiler: they can!)
Come along!
This was led by my excellent PhD student Yilong Wang.
Paper & videos here: https://t.co/F2PtNJEJZT. pic.twitter.com/ycSAzuAgPn
— Edward Johns @ ICRA 2025 (@Ed__Johns) May 20, 2025
I will present our work on air-ground collaboration with SPOMP in 407A in a few minutes! We deployed 1 UAV and 3 UGVs in a fully autonomous mapping mission in large-scale environments. Come check it out! #ICRA2025 @grasplab.bsky.social
— Fernando Cladera (@fcladera.bsky.social) 21 May 2025 at 20:13
Snapshots from #ICRA2025 @ieee_ras_icra : fans keep balloon walkers in constrained area @DennisHongRobot ; Artly coffee robot https://t.co/NCAMEo4fr5 ; mural ; and robo-friends #robots #artly #innovators pic.twitter.com/YAZdUWr9CD
— Heather Knight (@heatherknight) May 21, 2025
Cool things happening at #ICRA2025
RoboRacers gearing up for their qualifiers — Ameya Salvi (@ameyasalvi.bsky.social) 21 May 2025 at 13:56
Wednesday #ICRA2025 highlights included:
Plenary talk by Tessa Lau, CEO & Co-Founder, Dusty Robotics
Keynote & Technical Sessions
Community Day
ICRA Expo
Competitions
And more!
Check out tomorrow's events here: https://t.co/kS4WmAlZwJ pic.twitter.com/IboRH05KEI
— IEEE ICRA (@ieee_ras_icra) May 22, 2025
Our Community Building Day has been a success. LatinX in Robotics, Queer in Robotics, and Black in Robotics are just some of the incredible groups building community here at #ICRA2025! pic.twitter.com/EWNi0xY2NI
— IEEE ICRA (@ieee_ras_icra) May 21, 2025
New work at #ICRA2025!
Robust Robot Walker
We enable quadruped robots to pass tiny traps (bars, pits, poles) using only proprioception – no cameras, no depth! Catch us at Thursday 16:55pm in Room 305!
https://t.co/571p4xTJ5c pic.twitter.com/03F2Gqf40D
— shaoting zhu (@ShaotingZ38103) May 22, 2025
Robot parade at @ieee_ras_icra #ICRA2025
pic.twitter.com/q2wHcQQN3R
— Sriram
(@SriRam2528) May 22, 2025
Prof. Concha Monje presenting our BSc degree in Robotics Engineering at #ICRA2025 Forum on Undergraduate Robotics in Atlanta @uc3m @EPS_UC3M @ofic_eps_uc3m @ieeeras @ieee_ras_icra @mecanohumano https://t.co/R8fjbvJ6V3 pic.twitter.com/kDiWqSgx8M
— uc3mRoboticsLab (@uc3mRoboticsLab) May 22, 2025
We received the #ICRA2025 #HRI #Award for Arts & Robotics on our co-painting robot
This project shows how arts and robotics can be used as a testbed to create better robots and to discover new knowledge about humans.@ieee_ras_icra @UMRobotics @DARPA pic.twitter.com/T4VKxH2dKP
— patrícia alves-oliveira (@p_alvesoliveira) May 23, 2025
Fun times at the #ICRA2025 farewell reception! Celebrating Atlanta style with a block party. pic.twitter.com/rg71RPWWKc
— IEEE ICRA (@ieee_ras_icra) May 22, 2025