
Gearing up for RoboCupJunior: Interview with Ana Patrícia Magalhães

Action from RoboCupJunior Rescue at RoboCup 2024. Photo: RoboCup/Bart van Overbeeke.

The annual RoboCup event, where teams gather from across the globe to take part in competitions across a number of leagues, will this year take place in Brazil, from 15-21 July. An important part of the week is RoboCupJunior, which is designed to introduce RoboCup to school children, and sees hundreds of kids taking part in a variety of challenges across different leagues. This year, the lead organizer for RoboCupJunior is Ana Patrícia Magalhães. We caught up with her to find out how the preparations are going, what to expect at this year’s competition, and how RoboCup inspires communities.

Could you tell us about RoboCupJunior and the plans you have for the competition this year?

RoboCup will take place from 15-21 July, in Salvador, Brazil. We expect to receive people from more than 40 countries, across the Junior and Major Leagues. We are preparing everything to accommodate all the students taking part in RoboCupJunior, who will participate in the Junior Leagues of Soccer, Rescue and OnStage. They are children and teenagers, so we have organized shuttles to take them from the hotels to the convention center. We’ve also prepared a handbook with recommendations about security, places they can visit, places to eat. The idea is to provide all the necessary support for them, because they are so young. We’re also organizing a welcome party for the Juniors so that they can experience a little bit of our culture. It will hopefully be a good experience for them.

The Juniors will be located on the first level of the mezzanine at the convention center. They will be separate from the Major Leagues, who will be on the ground floor. Of course, they’ll be able to visit the Major Leagues, and talk to the students and other competitors there, but it will be nice for them to have their own space. There will also be some parents and teachers with them, so we decided to use this special, dedicated space.

RoboCupJunior On Stage at RoboCup 2024. Photo: RoboCup/Bart van Overbeeke.

Do you have any idea of roughly how many teams will be taking part?

Yes, so we’ll have about 48 teams in the Soccer Leagues, 86 teams in the Rescue Leagues, and 27 in OnStage. That’s a lot of teams. Each team has about three or four students, and many of the parents, teachers and professors travel with them too. In total, we expect about 600 people to be associated with RoboCupJunior.

RoboCupJunior Soccer at RoboCup 2024. Photo: RoboCup/Bart van Overbeeke.

Have you got more RoboCupJunior participants from Brazil this year due to the location?

Yes, we have many teams from Brazil competing. I don't know the exact number, but there are definitely more Brazilian teams this year, because it's a lot cheaper and easier for them to travel here. When the competition is in another country, it's expensive for them. For example, a team here in Salvador qualified for the super-regional event in the US, but they couldn't go because they didn't have the money to pay for the tickets. Now, all the qualified Brazilian teams will be able to participate because it's cheaper for them to come here. So it's a big opportunity for development and for living the RoboCup experience. It's very important for children and teenagers to share their research, meet people from other countries, and see what they are doing and what research paths they are following. They are very grateful for the opportunity to have their work tested against others. A competition makes it possible to compare your research with others'. That's different from a conference, where you present a paper and show your work but can't compare and evaluate the results against similar work. In a competition you have this opportunity. It's a good way to get insights and improve your research.

RoboCupJunior Rescue at RoboCup 2024. Photo: RoboCup/Bart van Overbeeke.

Your role at this RoboCup will be organizing RoboCupJunior. Are you also involved in the Major Leagues?

Yes, so my main role is organizing RoboCupJunior, and I am also one of the chairs of the RoboCup Symposium. In addition, some teams from my laboratory are competing in the Major Leagues. My team participates in the @Home league, but I haven't had much time to help them recently, with all the preparations for RoboCup2025. Our laboratory also has teams in the 3D Simulation Soccer League and the Flying Robots Demo. This will be the first time we'll see a flying robot demo league at a RoboCup.

We’ll also have two junior teams from the Rescue Simulation League. They are very excited about taking part.

RoboCupJunior Rescue at RoboCup 2024. Photo: RoboCup/Bart van Overbeeke.

RoboCup was last held in Brazil in 2014, and I understand that there were quite a lot of new people that were inspired to join a team after that. Do you think the 2025 RoboCup will have the same effect and will inspire more people in Brazil to take part?

Yes, I hope so. The last one inspired many, many students. We could see the difference in school projects before and after that RoboCup. In 2014, RoboCup was held in João Pessoa, a city in the northeast, in a state that is not as developed or populated as many others in Brazil. It really boosted research there, and interest in robotics especially. Every year since the 2014 RoboCup, many projects from that state have been submitted to the Brazilian RoboCup competition. We believe that it was because of RoboCup being held there.

We hope that RoboCup2025 next month will have the same effect. We think it might have an even bigger impact, because there is more social media now and the news can spread a lot further. We are expecting many visitors. We will have a form where schools that want to visit can enroll on a guided visit of RoboCup. This will go live on the website next week, but we are already receiving many messages from schools asking how they can participate with their group. They are interested in the events, so we have high expectations.

We have been working on organizing RoboCup2025 for over a year, and there is still much to do. We are excited to receive everybody here, both for the competition and to see the city. We have a beautiful city on the coast, and some beautiful places to visit, so I recommend that people come and stay for some days after the competition to get to know our city.

About Ana Patrícia

Ana Patrícia F. Magalhães Mascarenhas received her PhD in Computer Science (2016) and her Master's in Mechatronics (2007), both from the Federal University of Bahia. She is currently an adjunct professor on the Information Systems course at the State University of Bahia (UNEB). She is a researcher and vice coordinator of the Center for Research in Computer Architecture, Intelligent Systems and Robotics (ACSO). Her current research focuses on service robotics and software engineering, especially the use of Artificial Intelligence (AI) in the software development process and in Model-Driven Development (MDD).

Preparing for kick-off at RoboCup2025: an interview with General Chair Marco Simões

The Salvador Convention Center, where RoboCup 2025 will take place.

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots, AI and automation. The annual RoboCup event, where teams gather from across the globe to take part in competitions across a number of leagues, will this year take place in Brazil, from 15-21 July. We spoke to Marco Simões, one of the General Chairs of RoboCup 2025 and President of RoboCup Brazil, to find out what plans they have for the event, some new initiatives, and how RoboCup has grown in Brazil over the past ten years.

Marco Simões

Could you give us a quick introduction to RoboCup 2025?

RoboCup will be held in Salvador, Brazil. When RoboCup was last held in Brazil 11 years ago, in 2014, we had a total of 100,000 visitors, so that was a great success. This year, we expect even more, around 150,000, across all the events. Nowadays, AI and robotics are attracting more attention, and Salvador is a bigger city than the previous location (João Pessoa). For these reasons, we estimate the attendance will be about 150,000 people.

Regarding the number of teams, registration has not closed yet, so we’re unsure about the final numbers. However, we expect to have about 300-400 teams and around 3000 competitors. We have been helping with visas, so we hope to see higher participation from teams who couldn’t attend in the previous two years due to visa issues. We are doing our best to ensure people can come and have fun at RoboCup!

This is also a great year for the RoboCup community: We have just agreed on new global league partners, including the Chinese companies Unitree, Fourier, and Booster Robotics. They will bring their humanoids and four-legged robots to RoboCup. These will not only be exhibited to the public but also used by some teams. They are amazing robots with very good skills. So, I think this will be an amazing edition of RoboCup this year.

Did the 2014 event in Brazil inspire more teams to participate in RoboCup?

Yes, we have seen a significant increase in our RoboCup community. In the last two years, Brazil has had the fourth-largest number of teams and participants at RoboCup in Bordeaux (2023) and Eindhoven (2024). This was a very big increase because ten years ago, we were not even in the top eight or nine.

We’ve made a significant effort with RoboCupJunior in the last ten years. Most people who’ve taken part in RoboCupJunior have carried on and joined the RoboCup Major League. So, the number of teams in Brazil has been increasing year by year over the last ten years. This year, we have a great number of participants because of the lower travel costs. We are expecting to be in the top three this year in terms of the highest number of participants.

Photo of participants at RoboCup 2024, which took place in Eindhoven. Photo credit: RoboCup/Bart van Overbeeke

It’s impressive that so many RoboCupJunior participants go on to join a Major League team.

Yes, we have an initiative here in Brazil called the Brazilian Robotics Olympiad. In this event, we chose two Junior Leagues – OnStage and the Rescue Line League – and we organized a competition based on these two Leagues. We run it in regional competitions all over Brazil – so 27 states. We organize at least one competition in each state during the year, and the best teams from each state come to the national competition together with the Major Leagues. We organize the Brazilian Olympiad to get RoboCupJunior to more students. This is how we’ve managed to increase participation in RoboCupJunior. Then, when students go to university, many of them continue to participate, but in the Major Leagues. So that’s a very successful strategy we’ve used in Brazil in the last 10 years.

Could you tell us about some more of the highlights from the Brazilian RoboCup community in recent years?

Two or three years ago, one of the Brazilian teams was the champion of RoboCup @Home. We have seen a big increase in the number of teams in the @Home League. In the national competition in Brazil, we have more than 12 teams participating. Back in 2014, we only had one team participating. So we’ve had a great increase—this League is one of the highlights in Brazil.

More teams are also participating in the Small Size League (part of the Soccer League). Two years ago, one of the Brazilian teams was the champion of division B of the Small Size League. So, over the last five years, we've seen some Brazilian teams in the top three positions in the Major Leagues at the RoboCup world competition. This is a result of both the increase in the number of teams and the quality of what those teams are developing. We now have more publications and more teams participating in the competition with good results, which is very important.

Another excellent contribution for this year is a league we created five years ago – a flying robot league, where autonomous drones perform some missions and tasks. We’ve proposed this League as a demo for RoboCup2025, and we will have a Flying Robot Demo at the competition this year. This will be the first time we’ll have autonomous drones at the RoboCup competition, and the Brazilian community proposed it.

RoboCup @Home with Toyota HSR robots in the Domestic Standard Platform League, RoboCup 2024. Photo: RoboCup/Bart van Overbeeke.

Will you be taking part in the competition this year, or will you be focusing entirely on your role as General Chair?

This year, my laboratory (ACSO/UNEB) has qualified for the 3D Simulation League (soccer), the Flying Robot Demo, and RoboCup @Home, so we are participating in three Leagues. We also supervise RoboCupJunior teams in the Rescue Simulation League. This year, my students have had only a little supervision from me because I've been very engaged with the organization.

In our 3D simulation team, we have lots of developments with deep reinforcement learning and some novel strategies that allow our robots to gain new skills, and we are combining these new skills with our existing multi-agent coordination strategy. For this reason, I think we will have a robust team in the competition, because we are not only working on skills, we are also working on multi-agent strategies. When both aspects are combined, you can have a really good soccer team that plays very well. We have a good team and expect to achieve a better position this year. In recent years, we were in the top four or five, but we hope to get into the top three this year.

In 3D, you not only work on multi-agent aspects but also need to work on skills such as walking, kicking, and running. Teams are now trying to develop new skills. For example, in recent years, our team has developed the sprint running movement, which was a result of deep reinforcement learning. It is not a natural running motion but a running movement that works according to the League’s rules. It makes the robots go very fast from one point to another, making the team very competitive.

Most teams are learning skills but don’t know how to exploit them strategically in the game. Our focus is not only on creating new skills but also on using them strategically. We are currently working on a very innovative approach.

This year, the simulation league will run a challenge using a new simulator based on MuJoCo. If the challenge goes well, we may move to this new simulator in the following years, which can more realistically simulate real humanoid robots.

Action from the semi-finals of RoboCup Soccer Humanoid League at RoboCup 2024. Photo: RoboCup/Bart van Overbeeke.

Finally, is there anything else you’d like to highlight about RoboCup2025?

We are working on partnerships with local companies. For example, we have sponsorship from Petrobras, one of the biggest oil companies in the world. They will discuss how they are using robotics and AI in their industry. They were also one of the first sponsors of the Flying Robots League. It’s important to have these links between industry and the RoboCup community.

We also have excellent support from local companies and the government. They will be showing the community their latest developments. In the Rescue League, for example, we’ll have a demonstration from the local force showing what they do to support people in disaster situations.

This event is also an excellent opportunity for RoboCuppers, especially those who have never been to Brazil, to spend some days after the event in Salvador, visiting some tourist spots. Salvador was the first Brazilian capital, so we have a rich history. There are a lot of historical sites to see and some great entertainment options, such as beaches or parties. People can have fun and enjoy the country!

About Marco

Marco Simões is an Associate Professor at Bahia State University, Salvador, Brazil. He is the General Chair of RoboCup2025, and President of RoboCup Brazil.

Interview with Amar Halilovic: Explainable AI for robotics

In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In this latest interview, we hear from Amar Halilovic, a PhD student at Ulm University.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I’m currently a PhD student at Ulm University in Germany, where I focus on explainable AI for robotics. My research investigates how robots can generate explanations of their actions in a way that aligns with human preferences and expectations, particularly in navigation tasks.

Could you give us an overview of the research you’ve carried out so far during your PhD?

So far, I’ve developed a framework for environmental explanations of robot actions and decisions, especially when things go wrong. I have explored black-box and generative approaches for the generation of textual and visual explanations. Furthermore, I have been working on planning different explanation attributes, such as timing, representation, duration, etc. Lately, I’ve been working on methods for dynamically selecting the best explanation strategy depending on the context and user preferences.

Is there an aspect of your research that has been particularly interesting?

Yes, I find it fascinating how people interpret robot behavior differently depending on the urgency or failure context. It’s been especially rewarding to study how explanation expectations shift in different situations and how we can tailor explanation timing and content accordingly.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

Next, I’ll be extending the framework to incorporate real-time adaptation, enabling robots to learn from user feedback and adjust their explanations on the fly. I’m also planning more user studies to validate the effectiveness of these explanations in real-world human-robot interaction settings.

Amar with his poster at the AAAI/SIGAI Doctoral Consortium at AAAI 2025.

What made you want to study AI, and, in particular, explainable robot navigation?

I’ve always been interested in the intersection of humans and machines. During my studies, I realized that making AI systems understandable isn’t just a technical challenge—it’s key to trust and usability. Robot navigation struck me as a particularly compelling area because decisions are spatial and visual, making explanations both challenging and impactful.

What advice would you give to someone thinking of doing a PhD in the field?

Pick a topic that genuinely excites you—you’ll be living with it for several years! Also, build a support network of mentors and peers. It’s easy to get lost in the technical work, but collaboration and feedback are vital.

Could you tell us an interesting (non-AI related) fact about you?

I have lived and studied in four different countries.

About Amar

Amar is a PhD student at the Institute of Artificial Intelligence of Ulm University in Germany. His research focuses on Explainable Artificial Intelligence (XAI) in Human-Robot Interaction (HRI), particularly how robots can generate context-sensitive explanations for navigation decisions. He combines symbolic planning and machine learning to build explainable robot systems that adapt to human preferences and different contexts. Before starting his PhD, he studied Electrical Engineering at the University of Sarajevo in Sarajevo, Bosnia and Herzegovina, and Computer Science at Mälardalen University in Västerås, Sweden. Outside academia, Amar enjoys travelling, photography, and exploring connections between technology and society.

Congratulations to the #AAMAS2025 best paper, best demo, and distinguished dissertation award winners

Winners' medal

The AAMAS 2025 best paper and demo awards were presented at the 24th International Conference on Autonomous Agents and Multiagent Systems, which took place from 19-23 May 2025 in Detroit. The Distinguished Dissertation Award was also recently announced. The winners in the various categories are as follows:


Best Paper Award

Winner

  • Soft Condorcet Optimization for Ranking of General Agents, Marc Lanctot, Kate Larson, Michael Kaisers, Quentin Berthet, Ian Gemp, Manfred Diaz, Roberto-Rafael Maura-Rivero, Yoram Bachrach, Anna Koop, Doina Precup

Finalists

  • Azorus: Commitments over Protocols for BDI Agents, Amit K. Chopra, Matteo Baldoni, Samuel H. Christie V, Munindar P. Singh
  • Curiosity-Driven Partner Selection Accelerates Convention Emergence in Language Games, Chin-Wing Leung, Paolo Turrini, Ann Nowe
  • Reinforcement Learning-based Approach for Vehicle-to-Building Charging with Heterogeneous Agents and Long Term Rewards, Fangqi Liu, Rishav Sen, Jose Paolo Talusan, Ava Pettet, Aaron Kandel, Yoshinori Suzue, Ayan Mukhopadhyay, Abhishek Dubey
  • Ready, Bid, Go! On-Demand Delivery Using Fleets of Drones with Unknown, Heterogeneous Energy Storage Constraints, Mohamed S. Talamali, Genki Miyauchi, Thomas Watteyne, Micael Santos Couceiro, Roderich Gross

Pragnesh Jay Modi Best Student Paper Award

Winners

  • Decentralized Planning Using Probabilistic Hyperproperties, Francesco Pontiggia, Filip Macák, Roman Andriushchenko, Michele Chiari, Milan Ceska
  • Large Language Models for Virtual Human Gesture Selection, Parisa Ghanad Torshizi, Laura B. Hensel, Ari Shapiro, Stacy Marsella

Runner-up

  • ReSCOM: Reward-Shaped Curriculum for Efficient Multi-Agent Communication Learning, Xinghai Wei, Tingting Yuan, Jie Yuan, Dongxiao Liu, Xiaoming Fu

Finalists

  • Explaining Facial Expression Recognition, Sanjeev Nahulanthran, Leimin Tian, Dana Kulic, Mor Vered
  • Agent-Based Analysis of Green Disclosure Policies and Their Market-Wide Impact on Firm Behavior, Lingxiao Zhao, Maria Polukarov, Carmine Ventre

Blue Sky Ideas Track Best Paper Award

Winner

  • Grounding Agent Reasoning in Image Schemas: A Neurosymbolic Approach to Embodied Cognition, François Olivier, Zied Bouraoui

Finalist

  • Towards Foundation-Model-Based Multiagent System to Accelerate AI for Social Impact, Yunfan Zhao, Niclas Boehmer, Aparna Taneja, Milind Tambe

Best Demo Award

Winner

  • Serious Games for Ethical Preference Elicitation, Jayati Deshmukh, Zijie Liang, Vahid Yazdanpanah, Sebastian Stein, Sarvapali Ramchurn

Victor Lesser Distinguished Dissertation Award

The Victor Lesser Distinguished Dissertation Award is given for dissertations in the field of autonomous agents and multiagent systems that show originality, depth, impact, as well as quality of writing, supported by high-quality publications.

Winner

  • Jannik Peters. Thesis title: Facets of Proportionality: Selecting Committees, Budgets, and Clusters

Runner-up

  • Lily Xu. Thesis title: High-stakes decisions from low-quality data: AI decision-making for planetary health

Congratulations to the #ICRA2025 best paper award winners

The 2025 IEEE International Conference on Robotics and Automation (ICRA) best paper winners and finalists in the various categories have been announced. The recipients were revealed during an award ceremony at the conference, which took place from 19-23 May in Atlanta, USA.


IEEE ICRA Best Paper Award on Robot Learning

Winner

  • *Robo-DM: Data Management for Large Robot Datasets, Kaiyuan Chen, Letian Fu, David Huang, Yanxiang Zhang, Yunliang Lawrence Chen, Huang Huang, Kush Hari, Ashwin Balakrishna, Ted Xiao, Pannag Sanketi, John Kubiatowicz, Ken Goldberg

Finalists

  • Achieving Human Level Competitive Robot Table Tennis, David D’Ambrosio, Saminda Wishwajith Abeyruwan, Laura Graesser, Atil Iscen, Heni Ben Amor, Alex Bewley, Barney J. Reed, Krista Reymann, Leila Takayama, Yuval Tassa, Krzysztof Choromanski, Erwin Coumans, Deepali Jain, Navdeep Jaitly, Natasha Jaques, Satoshi Kataoka, Yuheng Kuang, Nevena Lazic, Reza Mahjourian, Sherry Moore, Kenneth Oslund, Anish Shankar, Vikas Sindhwani, Vincent Vanhoucke, Grace Vesom, Peng Xu, Pannag Sanketi
  • *No Plan but Everything under Control: Robustly Solving Sequential Tasks with Dynamically Composed Gradient Descent, Vito Mengers, Oliver Brock

IEEE ICRA Best Paper Award in Field and Service Robotics

Winner

  • *PolyTouch: A Robust Multi-Modal Tactile Sensor for Contact-Rich Manipulation Using Tactile-Diffusion Policies, Jialiang Zhao, Naveen Kuppuswamy, Siyuan Feng, Benjamin Burchfiel, Edward Adelson

Finalists

  • A New Stereo Fisheye Event Camera for Fast Drone Detection and Tracking, Daniel Rodrigues Da Costa, Maxime Robic, Pascal Vasseur, Fabio Morbidi
  • *Learning-Based Adaptive Navigation for Scalar Field Mapping and Feature Tracking, Jose Fuentes, Paulo Padrao, Abdullah Al Redwan Newaz, Leonardo Bobadilla

IEEE ICRA Best Paper Award on Human-Robot Interaction

Winner

  • *Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition, Shengcheng Luo, Quanquan Peng, Jun Lv, Kaiwen Hong, Katherine Driggs-Campbell, Cewu Lu, Yong-Lu Li

Finalists

  • *To Ask or Not to Ask: Human-In-The-Loop Contextual Bandits with Applications in Robot-Assisted Feeding, Rohan Banerjee, Rajat Kumar Jenamani, Sidharth Vasudev, Amal Nanavati, Katherine Dimitropoulou, Sarah Dean, Tapomayukh Bhattacharjee
  • *Point and Go: Intuitive Reference Frame Reallocation in Mode Switching for Assistive Robotics, Allie Wang, Chen Jiang, Michael Przystupa, Justin Valentine, Martin Jagersand

IEEE ICRA Best Paper Award on Mechanisms and Design

Winner

  • Individual and Collective Behaviors in Soft Robot Worms Inspired by Living Worm Blobs, Carina Kaeser, Junghan Kwon, Elio Challita, Harry Tuazon, Robert Wood, Saad Bhamla, Justin Werfel

Finalists

  • *Informed Repurposing of Quadruped Legs for New Tasks, Fuchen Chen, Daniel Aukes
  • *Intelligent Self-Healing Artificial Muscle: Mechanisms for Damage Detection and Autonomous, Ethan Krings, Patrick McManigal, Eric Markvicka

IEEE ICRA Best Paper Award on Planning and Control

Winner

  • *No Plan but Everything under Control: Robustly Solving Sequential Tasks with Dynamically Composed Gradient Descent, Vito Mengers, Oliver Brock

Finalists

  • *SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models, Yi Wu, Zikang Xiong, Yiran Hu, Shreyash Sridhar Iyengar, Nan Jiang, Aniket Bera, Lin Tan, Suresh Jagannathan
  • *Marginalizing and Conditioning Gaussians Onto Linear Approximations of Smooth Manifolds with Applications in Robotics, Zi Cong Guo, James Richard Forbes, Timothy Barfoot

IEEE ICRA Best Paper Award in Robot Perception

Winner

  • *MAC-VO: Metrics-Aware Covariance for Learning-Based Stereo Visual Odometry, Yuheng Qiu, Yutian Chen, Zihao Zhang, Wenshan Wang, Sebastian Scherer

Finalists

  • *Ground-Optimized 4D Radar-Inertial Odometry Via Continuous Velocity Integration Using Gaussian Process, Wooseong Yang, Hyesu Jang, Ayoung Kim
  • *UAD: Unsupervised Affordance Distillation for Generalization in Robotic Manipulation, Yihe Tang, Wenlong Huang, Yingke Wang, Chengshu Li, Roy Yuan, Ruohan Zhang, Jiajun Wu, Li Fei-Fei

IEEE ICRA Best Paper Award in Robot Manipulation and Locomotion

Winner

  • *D(R, O) Grasp: A Unified Representation of Robot and Object Interaction for Cross-Embodiment Dexterous Grasping, Zhenyu Wei, Zhixuan Xu, Jingxiang Guo, Yiwen Hou, Chongkai Gao, Zhehao Cai, Jiayu Luo, Lin Shao

Finalists

  • *Full-Order Sampling-Based MPC for Torque-Level Locomotion Control Via Diffusion-Style Annealing, Haoru Xue, Chaoyi Pan, Zeji Yi, Guannan Qu, Guanya Shi
  • *TrofyBot: A Transformable Rolling and Flying Robot with High Energy Efficiency, Mingwei Lai, Yuqian Ye, Hanyu Wu, Chice Xuan, Ruibin Zhang, Qiuyu Ren, Chao Xu, Fei Gao, Yanjun Cao

IEEE ICRA Best Paper Award in Automation

Winner

  • *Physics-Aware Robotic Palletization with Online Masking Inference, Tianqi Zhang, Zheng Wu, Yuxin Chen, Yixiao Wang, Boyuan Liang, Scott Moura, Masayoshi Tomizuka, Mingyu Ding, Wei Zhan

Finalists

  • *In-Plane Manipulation of Soft Micro-Fiber with Ultrasonic Transducer Array and Microscope, Jieyun Zou, Siyuan An, Mingyue Wang, Jiaqi Li, Yalin Shi, You-Fu Li, Song Liu
  • *A Complete and Bounded-Suboptimal Algorithm for a Moving Target Traveling Salesman Problem with Obstacles in 3D, Anoop Bhat, Geordan Gutow, Bhaskar Vundurthy, Zhongqiang Ren, Sivakumar Rathinam, Howie Choset

IEEE ICRA Best Paper Award in Medical Robotics

Winner

  • *In-Vivo Tendon-Driven Rodent Ankle Exoskeleton System for Sensorimotor Rehabilitation, Juwan Han, Seunghyeon Park, Keehon Kim

Finalists

  • *Image-Based Compliance Control for Robotic Steering of a Ferromagnetic Guidewire, An Hu, Chen Sun, Adam Dmytriw, Nan Xiao, Yu Sun
  • *AutoPeel: Adhesion-Aware Safe Peeling Trajectory Optimization for Robotic Wound Care, Xiao Liang, Youcheng Zhang, Fei Liu, Florian Richter, Michael C. Yip

IEEE ICRA Best Paper Award on Multi-Robot Systems

Winner

  • *Deploying Ten Thousand Robots: Scalable Imitation Learning for Lifelong Multi-Agent Path Finding, He Jiang, Yutong Wang, Rishi Veerapaneni, Tanishq Harish Duhan, Guillaume Adrien Sartoretti, Jiaoyang Li

Finalists

  • Distributed Multi-Robot Source Seeking in Unknown Environments with Unknown Number of Sources, Lingpeng Chen, Siva Kailas, Srujan Deolasee, Wenhao Luo, Katia Sycara, Woojun Kim
  • *Multi-Nonholonomic Robot Object Transportation with Obstacle Crossing Using a Deformable Sheet, Weijian Zhang, Charlie Street, Masoumeh Mansouri

IEEE ICRA Best Conference Paper Award

Winners

  • *Marginalizing and Conditioning Gaussians Onto Linear Approximations of Smooth Manifolds with Applications in Robotics, Zi Cong Guo, James Richard Forbes, Timothy Barfoot
  • *MAC-VO: Metrics-Aware Covariance for Learning-Based Stereo Visual Odometry, Yuheng Qiu, Yutian Chen, Zihao Zhang, Wenshan Wang, Sebastian Scherer

In addition to the papers listed above, these papers were also finalists for the IEEE ICRA Best Conference Paper Award.

Finalists

  • *MiniVLN: Efficient Vision-And-Language Navigation by Progressive Knowledge Distillation, Junyou Zhu, Yanyuan Qiao, Siqi Zhang, Xingjian He, Qi Wu, Jing Liu
  • *RoboCrowd: Scaling Robot Data Collection through Crowdsourcing, Suvir Mirchandani, David D. Yuan, Kaylee Burns, Md Sazzad Islam, Zihao Zhao, Chelsea Finn, Dorsa Sadigh
  • How Sound-Based Robot Communication Impacts Perceptions of Robotic Failure, Jai’La Lee Crider, Rhian Preston, Naomi T. Fitter
  • *Obstacle-Avoidant Leader Following with a Quadruped Robot, Carmen Scheidemann, Lennart Werner, Victor Reijgwart, Andrei Cramariuc, Joris Chomarat, Jia-Ruei Chiu, Roland Siegwart, Marco Hutter
  • *Dynamic Tube MPC: Learning Error Dynamics with Massively Parallel Simulation for Robust Safety in Practice, William Compton, Noel Csomay-Shanklin, Cole Johnson, Aaron Ames
  • *Bat-VUFN: Bat-Inspired Visual-And-Ultrasound Fusion Network for Robust Perception in Adverse Conditions, Gyeongrok Lim, Jeong-ui Hong, Min Hyeon Bae
  • *TinySense: A Lighter Weight and More Power-Efficient Avionics System for Flying Insect-Scale Robots, Zhitao Yu, Josh Tran, Claire Li, Aaron Weber, Yash P. Talwekar, Sawyer Fuller
  • *TSCLIP: Robust CLIP Fine-Tuning for Worldwide Cross-Regional Traffic Sign Recognition, Guoyang Zhao, Fulong Ma, Weiqing Qi, Chenguang Zhang, Yuxuan Liu, Ming Liu, Jun Ma
  • *Geometric Design and Gait Co-Optimization for Soft Continuum Robots Swimming at Low and High Reynolds Numbers, Yanhao Yang, Ross Hatton
  • *ShadowTac: Dense Measurement of Shear and Normal Deformation of a Tactile Membrane from Colored Shadows, Giuseppe Vitrani, Basile Pasquale, Michael Wiertlewski
  • *Occlusion-aware 6D Pose Estimation with Depth-guided Graph Encoding and Cross-semantic Fusion for Robotic Grasping, Jingyang Liu, Zhenyu Lu, Lu Chen, Jing Yang, Chenguang Yang
  • *Stable Tracking of Eye Gaze Direction During Ophthalmic Surgery, Tinghe Hong, Shenlin Cai, Boyang Li, Kai Huang
  • *Configuration-Adaptive Visual Relative Localization for Spherical Modular Self-Reconfigurable Robots, Yuming Liu, Qiu Zheng, Yuxiao Tu, Yuan Gao, Guanqi Liang, Tin Lun Lam
  • *Realm: Real-Time Line-Of-Sight Maintenance in Multi-Robot Navigation with Unknown Obstacles, Ruofei Bai, Shenghai Yuan, Kun Li, Hongliang Guo, Wei-Yun Yau, Lihua Xie

IEEE ICRA Best Student Paper Award

Winners

  • *Deploying Ten Thousand Robots: Scalable Imitation Learning for Lifelong Multi-Agent Path Finding, He Jiang, Yutong Wang, Rishi Veerapaneni, Tanishq Harish Duhan, Guillaume Adrien Sartoretti, Jiaoyang Li
  • *ShadowTac: Dense Measurement of Shear and Normal Deformation of a Tactile Membrane from Colored Shadows, Giuseppe Vitrani, Basile Pasquale, Michael Wiertlewski
  • *Point and Go: Intuitive Reference Frame Reallocation in Mode Switching for Assistive Robotics, Allie Wang, Chen Jiang, Michael Przystupa, Justin Valentine, Martin Jagersand
  • *TinySense: A Lighter Weight and More Power-Efficient Avionics System for Flying Insect-Scale Robots, Zhitao Yu, Josh Tran, Claire Li, Aaron Weber, Yash P. Talwekar, Sawyer Fuller

Note: papers with an * were eligible for the IEEE ICRA Best Student Paper Award.


#ICRA2025 social media round-up

The 2025 IEEE International Conference on Robotics & Automation (ICRA) took place from 19–23 May, in Atlanta, USA. The event featured plenary and keynote sessions, tutorials and workshops, forums, and a community day. Find out what the participants got up to during the conference.

#ICRA #ICRA2025 #RoboticsInAfrica


— Black in Robotics (@blackinrobotics.bsky.social) 18 May 2025 at 23:22

At #ICRA2025? Check out my student Yi Wu’s talk (TuCT1.4) at 3:30PM Tuesday in Room 302 at the Award Finalists 3 Session about how SELP Generates Safe and Efficient Plans for #Robot #Agents with #LLMs! #ConstrainedDecoding #LLMPlanner
@purduecs.bsky.social
@cerias.bsky.social


— Lin Tan (@lin-tan.bsky.social) 19 May 2025 at 13:25

Malte Mosbach will present today 16:45 at #ICRA2025 in room 404 our paper:
"Prompt-responsive Object Retrieval with Memory-augmented Student-Teacher Learning"
www.ais.uni-bonn.de/videos/ICRA_…


— Sven Behnke (@sven-behnke.bsky.social) 20 May 2025 at 15:57

I will present our work on air-ground collaboration with SPOMP in 407A in a few minutes! We deployed 1 UAV and 3 UGVs in a fully autonomous mapping mission in large-scale environments. Come check it out! #ICRA2025 @grasplab.bsky.social


— Fernando Cladera (@fcladera.bsky.social) 21 May 2025 at 20:13

Cool things happening at #ICRA2025
RoboRacers gearing up for their qualifiers


— Ameya Salvi (@ameyasalvi.bsky.social) 21 May 2025 at 13:56

What’s coming up at #ICRA2025?


The 2025 IEEE International Conference on Robotics and Automation (ICRA) will take place from 19-23 May, in Atlanta, USA. The event will feature plenary talks, technical sessions, posters, workshops and tutorials, forums, and a science communication short course.

Plenary speakers

There are three plenary sessions this year. The speakers are as follows:

  • Allison Okamura (Stanford University) – Rewired: The Interplay of Robots and Society
  • Tessa Lau (Dusty Robotics) – So you want to build a robot company?
  • Raffaello (Raff) D’Andrea (ETH Zurich) – Models are dead, long live models!

Keynote sessions

Tuesday 20, Wednesday 21 and Thursday 22 will see a total of 12 keynote sessions. The featured topics and speakers are:

  • Rehabilitation & Physically Assistive Systems
    • Brenna Argall
    • Robert Gregg
    • Keehoon Kim
    • Christina Piazza
  • Optimization & Control
    • Todd Murphey
    • Angela Schoellig
    • Jana Tumova
    • Ram Vasudevan
  • Human Robot Interaction
    • Sonia Chernova
    • Dongheui Lee
    • Harold Soh
    • Holly Yanco
  • Soft Robotics
    • Robert Katzschmann
    • Hugo Rodrigue
    • Cynthia Sung
    • Wenzhen Yuan
  • Field Robotics
    • Margarita Chli
    • Tobias Fischer
    • Joshua Mangelson
    • Inna Sharf
  • Bio-inspired Robotics
    • Kyujin Cho
    • Dario Floreano
    • Talia Moore
    • Yasemin Ozkan-Aydin
  • Haptics
    • Jeremy Brown
    • Matej Hoffmann
    • Tania Morimoto
    • Jee-Hwan Ryu
  • Planning
    • Hanna Kurniawati
    • Jen Jen Chung
    • Dan Halperin
    • Jing Xiao
  • Manipulation
    • Tamim Asfour
    • Yasuhisa Hasegawa
    • Alberto Rodriguez
    • Shuran Song
  • Locomotion
    • Sarah Bergbreiter
    • Cosimo Della Santina
    • Hae-Won Park
    • Ludovic Righetti
  • Safety & Formal Methods
    • Chuchu Fan
    • Meng Guo
    • Changliu Liu
    • Pian Yu
  • Multi-robot Systems
    • Sabine Hauert
    • Dimitra Panagou
    • Alyssa Pierson
    • Fumin Zhang

Science communication training

Join Sabine Hauert, Evan Ackerman and Laura Bridgeman for a crash course on science communication. In this concise tutorial, you will learn how to share your work with a broader audience. This session will take place on 22 May, 11:00 – 12:15.

Workshops and tutorials

The programme of workshops and tutorials will take place on Monday 19 May and Friday 23 May. There are 59 events to choose from, and you can see the full list here.

Forums

There will be three forums as part of the programme, one each on Tuesday 20, Wednesday 21 and Thursday 22.

Community building day

Wednesday 21 May is community building day, with six events planned.

Other events

You can find out more about the other sessions and events at the links below.

Multi-agent path finding in continuous environments

By Kristýna Janovská and Pavel Surynek

Imagine if all of our cars could drive themselves – autonomous driving is becoming possible, but to what extent? Getting a vehicle somewhere by itself may not seem so tricky if the route is clear and well defined, but what if there are more cars, each trying to get to a different place? And what if we add pedestrians, animals and other unaccounted-for elements? This problem has been increasingly studied in recent years, and is already applied in scenarios such as warehouse logistics, where a group of robots moves boxes around a warehouse, each robot with its own goal, all while making sure not to collide and keeping their routes – paths – as short as possible. But how do we formalize such a problem? The answer is MAPF – multi-agent path finding [Silver, 2005].

Multi-agent path finding describes a problem where we have a group of agents – robots, vehicles or even people – who are each trying to get from their starting positions to their goal positions all at once without ever colliding (being in the same position at the same time).

Typically, this problem has been solved on graphs. Graphs are structures that are able to simplify an environment using its focal points and interconnections between them. These points are called vertices and can represent, for example, coordinates. They are connected by edges, which connect neighbouring vertices and represent distances between them.
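
To make this concrete, here is a minimal sketch (ours, not from the paper) of how a discrete MAPF instance might be represented in Python: the graph as an adjacency dictionary, each agent with a start and goal vertex, and a check for the basic vertex conflict.

    from dataclasses import dataclass

    # Graph as an adjacency dict: vertex -> set of neighbouring vertices.
    # Vertices here are grid coordinates; edges connect neighbouring cells.
    graph = {
        (0, 0): {(0, 1), (1, 0)},
        (0, 1): {(0, 0), (1, 1)},
        (1, 0): {(0, 0), (1, 1)},
        (1, 1): {(0, 1), (1, 0)},
    }

    @dataclass
    class Agent:
        start: tuple  # starting vertex
        goal: tuple   # goal vertex

    agents = [Agent(start=(0, 0), goal=(1, 1)),
              Agent(start=(1, 0), goal=(0, 1))]

    def vertex_conflict(paths, t):
        """True if two agents occupy the same vertex at time step t.
        Agents that have already finished wait at their last vertex."""
        positions = [p[min(t, len(p) - 1)] for p in paths]
        return len(positions) != len(set(positions))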

If, however, we are trying to solve a real-life scenario, we strive to get as close to simulating reality as possible. Discrete representations (using a finite number of vertices) may therefore not suffice. But how do we search an environment that is continuous, that is, one with essentially an infinite number of vertices connected by infinitely short edges?

This is where sampling-based algorithms come into play. Algorithms such as RRT* [Karaman and Frazzoli, 2011], which we used in our work, randomly select (sample) coordinates in the coordinate space and use them as vertices. The more points that are sampled, the more accurate the representation of the environment becomes. Each newly sampled vertex is connected to the nearby neighbour that minimizes the length of the path from the starting point to the new point. A path is a sequence of vertices, and its length is the sum of the lengths of the edges between them.
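
As a rough illustration (our simplified sketch, not the authors' implementation – obstacle collision checking is omitted), the core RRT* loop samples a random point, connects it to the neighbour that minimizes its path cost from the start, and then rewires nearby vertices through the new point whenever that shortens their paths:

    import math
    import random

    def rrt_star(start, goal, x_range, y_range, n_samples=500, radius=0.5):
        """Simplified RRT* sketch in a 2D coordinate space."""
        vertices = [start]
        parent = {start: None}
        cost = {start: 0.0}

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        for _ in range(n_samples):
            q = (random.uniform(*x_range), random.uniform(*y_range))
            near = [v for v in vertices if dist(v, q) <= radius]
            if not near:  # fall back to the single nearest vertex
                near = [min(vertices, key=lambda v: dist(v, q))]
            # Connect q through the neighbour minimizing cost from the start.
            best = min(near, key=lambda v: cost[v] + dist(v, q))
            parent[q], cost[q] = best, cost[best] + dist(best, q)
            vertices.append(q)
            # Rewire: route nearby vertices through q when that is cheaper.
            for v in near:
                if cost[q] + dist(q, v) < cost[v]:
                    parent[v], cost[v] = q, cost[q] + dist(q, v)

        # Walk parent pointers back from the vertex closest to the goal.
        v = min(vertices, key=lambda u: dist(u, goal))
        path = []
        while v is not None:
            path.append(v)
            v = parent[v]
        return path[::-1]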

Figure 1: Two examples of paths connecting starting positions (blue) and goal positions (green) of three agents. Once an obstacle is present, agents plan smooth curved paths around it, successfully avoiding both the obstacle and each other.

We can get a close-to-optimal path this way, though there is still one problem. Paths created like this are somewhat bumpy, as the transitions between segments of the path are sharp. If a vehicle were to take such a path, it would have to turn on the spot when it reaches the end of a segment, as some robotic vacuum cleaners do when moving around. This slows the vehicle or robot down significantly. One way to solve this is to take these paths and smooth them, so that the transitions are no longer sharp corners but smooth curves. This way, robots or vehicles can travel along them without ever stopping or slowing down significantly when they need to turn.

Our paper [Janovská and Surynek, 2024] proposed a method for multi-agent path finding in continuous environments, where agents move along sets of smooth paths without colliding. Our algorithm is inspired by Conflict-Based Search (CBS) [Sharon et al., 2014]. Our extension to continuous spaces, called Continuous-Environment Conflict-Based Search (CE-CBS), works on two levels:

Figure 2: Comparison of paths found with discrete CBS algorithm on a 2D grid (left) and CE-CBS paths in a continuous version of the same environment. Three agents move from blue starting points to green goal points. These experiments are performed in the Robotic Agents Laboratory at Faculty of Information Technology of the Czech Technical University in Prague.

Firstly, each agent searches for a path individually. This is done with the RRT* algorithm, as mentioned above. The resulting path is then smoothed using B-spline curves – piecewise polynomial curves fitted to the vertices of the path. This removes sharp turns and makes the path easier for a physical agent to traverse.
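
A minimal sketch of this smoothing step, using SciPy's standard B-spline routines (our illustration; the exact spline degree and parameterization used in the paper may differ). The input would be a vertex sequence such as the one returned by the RRT* sketch above:

    import numpy as np
    from scipy.interpolate import splprep, splev

    def smooth_path(path, n_points=100, smoothing=1.0):
        """Fit a B-spline through the path's vertices and resample it densely,
        replacing sharp segment-to-segment transitions with a smooth curve."""
        x, y = zip(*path)
        tck, _ = splprep([x, y], s=smoothing)  # s trades fidelity for smoothness
        u = np.linspace(0.0, 1.0, n_points)
        sx, sy = splev(u, tck)
        return list(zip(sx, sy))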

Individual paths are then sent to the higher level of the algorithm, where the paths are compared and conflicts are found. A conflict arises if two agents (which are represented as rigid circular bodies) overlap at any point in time. If so, constraints are created that forbid one of the agents from passing through the conflicting space during the time interval in which it was previously present there. Both options – constraining one agent or the other – are tried: a tree of possible constraint settings and their solutions is constructed and expanded with each conflict found. When a new constraint is added, this information is passed to all agents it concerns, and their paths are re-planned so that they avoid the constrained time and space. The paths are then checked again for validity, and this repeats until a conflict-free solution, which aims to be as short as possible, is found.
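
As an illustration of the lower-level conflict check (a simplified sketch under our own assumptions: discretized time and the same radius for every agent), the overlap test between circular agents can be written as:

    import math

    def find_first_conflict(trajectories, agent_radius=0.5, dt=0.1):
        """trajectories[i] is agent i's list of (x, y) positions, sampled
        every dt seconds; finished agents wait at their final position.
        Returns (time, i, j) for the first overlap, or None if conflict-free."""
        horizon = max(len(tr) for tr in trajectories)
        n = len(trajectories)
        for t in range(horizon):
            for i in range(n):
                for j in range(i + 1, n):
                    xi, yi = trajectories[i][min(t, len(trajectories[i]) - 1)]
                    xj, yj = trajectories[j][min(t, len(trajectories[j]) - 1)]
                    if math.hypot(xi - xj, yi - yj) < 2 * agent_radius:
                        return (t * dt, i, j)
        return None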

This way, agents can effectively move without losing speed while turning and without colliding with each other. Although there are environments such as narrow hallways where slowing down or even stopping may be necessary for agents to safely pass, CE-CBS finds solutions in most environments.

This research is supported by the Czech Science Foundation, 22-31346S.

You can read our paper here.

References

Interview with Yuki Mitsufuji: Improving AI image generation


Yuki Mitsufuji is a Lead Research Scientist at Sony AI. Yuki and his team presented two papers at the recent Conference on Neural Information Processing Systems (NeurIPS 2024). These works tackle different aspects of image generation and are entitled: GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping and PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher. We caught up with Yuki to find out more about this research.

There are two pieces of research we’d like to ask you about today. Could we start with the GenWarp paper? Could you outline the problem that you were focused on in this work?

The problem we aimed to solve is called single-shot novel view synthesis, which is where you have one image and want to create another image of the same scene from a different camera angle. There has been a lot of work in this space, but a major challenge remains: when an image angle changes substantially, the image quality degrades significantly. We wanted to be able to generate a new image based on a single given image, as well as improve the quality, even in very challenging angle change settings.

How did you go about solving this problem – what was your methodology?

The existing works in this space tend to take advantage of monocular depth estimation, which means only a single image is used to estimate depth. This depth information enables us to change the angle and alter the image according to that angle – we call it “warping.” Of course, there will be some occluded parts in the image, and information will be missing from the original image on how to create the image from a new angle. Therefore, there is always a second phase where another module interpolates the occluded region. Because of this two-phase approach, geometrical errors introduced in warping cannot be compensated for in the interpolation phase.

We solve this problem by fusing everything together. We don’t go for a two-phase approach, but do it all at once in a single diffusion model. To preserve the semantic meaning of the image, we created another neural network that can extract the semantic information from a given image, as well as monocular depth information. We inject this, using a cross-attention mechanism, into the main base diffusion model. Since the warping and interpolation are done in one model, and the occluded parts can be reconstructed very well together with the semantic information injected from outside, we saw the overall quality improve. We saw improvements in image quality both subjectively and objectively, using metrics such as FID and PSNR.
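
As a generic illustration of this kind of injection (our sketch, not GenWarp's actual architecture – the module name, dimensions and shapes here are invented for the example), a cross-attention block that lets diffusion-model activations attend to externally extracted semantic features might look like:

    import torch
    import torch.nn as nn

    class SemanticCrossAttention(nn.Module):
        """Queries come from the diffusion model's hidden states; keys and
        values come from an external semantic/depth encoder (hypothetical)."""
        def __init__(self, dim=320, n_heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, hidden, semantic_feats):
            # hidden: (batch, tokens, dim); semantic_feats: (batch, src_tokens, dim)
            attended, _ = self.attn(hidden, semantic_feats, semantic_feats)
            return self.norm(hidden + attended)  # residual connection

    # Toy usage with random tensors standing in for real features:
    block = SemanticCrossAttention()
    h = torch.randn(2, 64, 320)   # base diffusion model activations
    s = torch.randn(2, 77, 320)   # injected semantic + depth features
    out = block(h, s)             # same shape as h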

Can people see some of the images created using GenWarp?

Yes, we actually have a demo, which consists of two parts. One shows the original image and the other shows the warped images from different angles.

Moving on to the PaGoDA paper, here you were addressing the high computational cost of diffusion models? How did you go about addressing that problem?

Diffusion models are very popular, but it’s well-known that they are very costly for training and inference. We tackle this issue by proposing PaGoDA, our model that addresses both training efficiency and inference efficiency.

It’s easy to talk about inference efficiency, which directly connects to the speed of generation. Diffusion usually takes a lot of iterative steps towards the final generated output – our goal was to skip these steps so that we could quickly generate an image in just one step. People call it “one-step generation” or “one-step diffusion.” It doesn’t always have to be one step; it could be two or three steps, for example, “few-step diffusion”. Basically, the target is to solve the bottleneck of diffusion, which is a time-consuming, multi-step iterative generation method.

In diffusion models, generating an output is typically a slow process, requiring many iterative steps to produce the final result. A key trend in advancing these models is training a “student model” that distills knowledge from a pre-trained diffusion model. This allows for faster generation—sometimes producing an image in just one step. These are often referred to as distilled diffusion models. Distillation means that, given a teacher (a diffusion model), we use its knowledge to train another, efficient one-step model. We call it distillation because we can distill the information from the original model, which has vast knowledge about generating good images.
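
As a generic sketch of the idea (naive output-matching distillation under our own assumptions, not PaGoDA's actual objective – practical distilled diffusion models typically use more elaborate losses), a single training step might look like:

    import torch
    import torch.nn.functional as F

    def distillation_step(student, teacher_sample, optimizer,
                          batch=8, shape=(3, 64, 64)):
        """One step of distilling a multi-step teacher into a one-step student.
        teacher_sample(noise) runs the teacher's slow iterative sampler."""
        noise = torch.randn(batch, *shape)
        with torch.no_grad():
            target = teacher_sample(noise)  # many iterative denoising steps
        pred = student(noise)               # single forward pass
        loss = F.mse_loss(pred, target)     # match the teacher's output
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()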

However, both classic diffusion models and their distilled counterparts are usually tied to a fixed image resolution. This means that if we want a higher-resolution distilled diffusion model capable of one-step generation, we would need to retrain the diffusion model and then distill it again at the desired resolution.

This makes the entire pipeline of training and generation quite tedious. Each time a higher resolution is needed, we have to retrain the diffusion model from scratch and go through the distillation process again, adding significant complexity and time to the workflow.

The uniqueness of PaGoDA is that we train across different resolution models in one system, which allows it to achieve one-step generation, making the workflow much more efficient.

For example, if we want to distill a model for images of 128×128, we can do that. But if we want to do it at another scale, say 256×256, then we would need a teacher trained on 256×256. If we want to extend this to even higher resolutions, we need to do it multiple times. This can be very costly, so to avoid it, we use the idea of progressive growing training, which has already been studied in the area of generative adversarial networks (GANs), but not so much in the diffusion space. The idea is that, given a teacher diffusion model trained on 64×64, we can distill its information and train a one-step model for any resolution. For many resolutions, we can get state-of-the-art performance using PaGoDA.

Could you give a rough idea of the difference in computational cost between your method and standard diffusion models? What kind of saving do you make?

The idea is very simple – we just skip the iterative steps. It is highly dependent on the diffusion model you use, but a typical standard diffusion model historically used about 1,000 steps, and modern, well-optimized diffusion models require 79 steps. With our model, that goes down to one step, so in theory we are looking at roughly an 80-fold speed-up. Of course, it all depends on how you implement the system, and if there’s a parallelization mechanism on chips, people can exploit it.

Is there anything else you would like to add about either of the projects?

Ultimately, we want to achieve real-time generation, and not just have this generation be limited to images. Real-time sound generation is an area that we are looking at.

Also, as you can see in the animation demo of GenWarp, the images change rapidly, making it look like an animation. However, the demo was created with many images generated with costly diffusion models offline. If we could achieve high-speed generation, let’s say with PaGoDA, then theoretically, we could create images from any angle on the fly.

Find out more:

About Yuki Mitsufuji

Yuki Mitsufuji is a Lead Research Scientist at Sony AI. In addition to his role at Sony AI, he is a Distinguished Engineer for Sony Group Corporation and the Head of Creative AI Lab for Sony R&D. Yuki holds a PhD in Information Science & Technology from the University of Tokyo. His groundbreaking work has made him a pioneer in foundational research on music and sound, such as sound separation and other generative models that can be applied to music, sound, and other modalities.

Interview with Amina Mević: Machine learning applied to semiconductor manufacturing

In a series of interviews, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. In this latest interview, we hear from Amina Mević who is applying machine learning to semiconductor manufacturing. Find out more about her PhD research so far, what makes this field so interesting, and how she found the AAAI Doctoral Consortium experience.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I am currently pursuing my PhD at the University of Sarajevo, Faculty of Electrical Engineering, Department of Computer Science and Informatics. My research is being carried out in collaboration with Infineon Technologies Austria as part of the Important Project of Common European Interest (IPCEI) in Microelectronics. The topic of my research focuses on developing an explainable multi-output virtual metrology system based on machine learning to predict the physical properties of metal layers in semiconductor manufacturing.

Could you give us an overview of the research you’ve carried out so far during your PhD?

In the first year of my PhD, I worked on preprocessing complex manufacturing data and preparing a robust multi-output prediction setup for virtual metrology. I collaborated with industry experts to understand the process intricacies and validate the prediction models. I applied a projection-based selection algorithm (ProjSe), which aligned well with both domain knowledge and process physics.

In the second year, I developed an explanatory method, designed to identify the most relevant input features for multi-output predictions.

Is there an aspect of your research that has been particularly interesting?

For me, the most interesting aspect is the synergy between physics, mathematics, cutting-edge technology, psychology, and ethics. I’m working with data collected during a physical process—physical vapor deposition—using concepts from geometry and algebra, particularly projection operators and their algebra, which have roots in quantum mechanics, to enhance both the performance and interpretability of machine learning models. Collaborating closely with engineers in the semiconductor industry has also been eye-opening, especially seeing how explanations can directly support human decision-making in high-stakes environments. I feel truly honored to deepen my knowledge across these fields and to conduct this multidisciplinary research.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

I plan to focus more on time series data and develop explanatory methods for multivariate time series models. Additionally, I intend to investigate aspects of responsible AI within the semiconductor industry and ensure that the solutions proposed during my PhD align with the principles outlined in the EU AI Act.

How was the AAAI Doctoral Consortium, and the AAAI conference experience in general?

Attending the AAAI Doctoral Consortium was an amazing experience! It gave me the opportunity to present my research and receive valuable feedback from leading AI researchers. The networking aspect was equally rewarding—I had inspiring conversations with fellow PhD students and mentors from around the world. The main conference itself was energizing and diverse, with cutting-edge research presented across so many AI subfields. It definitely strengthened my motivation and gave me new ideas for the final phase of my PhD.

Amina presenting two posters at AAAI 2025.

What made you want to study AI?

After graduating in theoretical physics, I found that job opportunities—especially in physics research—were quite limited in my country. I began looking for roles where I could apply the mathematical knowledge and problem-solving skills I had developed during my studies. At the time, data science appeared to be an ideal and promising field. However, I soon realized that I missed the depth and purpose of fundamental research, which was often lacking in industry roles. That motivated me to pursue a PhD in AI, aiming to gain a deep, foundational understanding of the technology—one that can be applied meaningfully and used in service of humanity.

What advice would you give to someone thinking of doing a PhD in the field?

Stay curious and open to learning from different disciplines—especially mathematics, statistics, and domain knowledge. Make sure your research has a purpose that resonates with you personally, as that passion will help carry you through challenges. There will be moments when you’ll feel like giving up, but before making any decision, ask yourself: am I just tired? Sometimes, rest is the solution to many of our problems. Finally, find mentors and communities to share ideas with and stay inspired.

Could you tell us an interesting (non-AI related) fact about you?

I’m a huge science outreach enthusiast! I regularly volunteer with the Association for the Advancement of Science and Technology in Bosnia, where we run workshops and events to inspire kids and high school students to explore STEM—especially in underserved communities.

About Amina

Amina Mević is a PhD candidate and teaching assistant at the University of Sarajevo, Faculty of Electrical Engineering, Bosnia and Herzegovina. Her research is conducted in collaboration with Infineon Technologies Austria as part of the IPCEI in Microelectronics. She earned a master’s degree in theoretical physics and was awarded two Golden Badges of the University of Sarajevo for achieving a GPA higher than 9.5/10 during both her bachelor’s and master’s studies. Amina actively volunteers to promote STEM education among youth in Bosnia and Herzegovina and is dedicated to improving the research environment in her country.

Shlomo Zilberstein wins the 2025 ACM/SIGAI Autonomous Agents Research Award

ACM SIGAI logo

Congratulations to Shlomo Zilberstein on winning the 2025 ACM/SIGAI Autonomous Agents Research Award. This prestigious award recognizes excellence in research on autonomous agents, honouring researchers whose current work is an important influence on the field.

Professor Shlomo Zilberstein was recognised for his work establishing the field of decentralized Markov Decision Processes (DEC-MDPs), laying the groundwork for decision-theoretic planning in multi-agent systems and multi-agent reinforcement learning (MARL). The selection committee noted that these contributions have become a cornerstone of multi-agent decision-making, influencing researchers and practitioners alike.
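
For readers who would like a refresher on the model behind the citation: in the standard formulation (Bernstein, Givan, Immerman and Zilberstein, 2002), a decentralized POMDP for \(n\) agents is a tuple

\[ \langle S, \{A_i\}_{i=1}^{n}, P, R, \{\Omega_i\}_{i=1}^{n}, O \rangle, \]

where \(S\) is the state set, \(A_i\) is the action set of agent \(i\), \(P(s' \mid s, a_1, \dots, a_n)\) is the joint transition function, \(R\) is a single reward shared by all agents, \(\Omega_i\) is the observation set of agent \(i\), and \(O\) is the joint observation function. A DEC-MDP is the special case with joint full observability: the agents' combined observations uniquely determine the state.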

Shlomo Zilberstein is Professor of Computer Science and former Associate Dean of Research at the University of Massachusetts Amherst. He is a Fellow of AAAI and the ACM, and has received numerous awards, including the UMass Chancellor’s Medal, the IFAAMAS Influential Paper Award, and the AAAI Distinguished Service Award.

Report on the future of AI research

Image taken from the front cover of the Future of AI Research report.

The Association for the Advancement of Artificial Intelligence (AAAI) has published a report on the Future of AI Research. The report, which was announced by outgoing AAAI President Francesca Rossi during the AAAI 2025 conference, covers 17 different AI topics and aims to clearly identify the trajectory of AI research in a structured way.

The report is the result of a Presidential Panel, chaired by Francesca Rossi and comprising 24 experienced AI researchers, who worked on the project between summer 2024 and spring 2025. As well as the views of the panel members, the report also draws on community feedback, which was received from 475 AI researchers via a survey.

The 17 topics, each with a dedicated chapter, are as follows.

  • AI Reasoning
  • AI Factuality & Trustworthiness
  • AI Agents
  • AI Evaluation
  • AI Ethics & Safety
  • Embodied AI
  • AI & Cognitive Science
  • Hardware & AI
  • AI for Social Good
  • AI & Sustainability
  • AI for Scientific Discovery
  • Artificial General Intelligence (AGI)
  • AI Perception vs. Reality
  • Diversity of AI Research Approaches
  • Research Beyond the AI Research Community
  • Role of Academia
  • Geopolitical Aspects & Implications of AI

Each chapter includes a list of main takeaways, context and history, current state and trends, research challenges, and community opinion. You can read the report in full here.

Andrew Barto and Richard Sutton win 2024 Turing Award

Andrew Barto and Richard Sutton. Image credit: Association for Computing Machinery.

The Association for Computing Machinery has named Andrew Barto and Richard Sutton as the recipients of the 2024 ACM A.M. Turing Award. The pair have received the honour for “developing the conceptual and algorithmic foundations of reinforcement learning”. In a series of papers beginning in the 1980s, Barto and Sutton introduced the main ideas, constructed the mathematical foundations, and developed important algorithms for reinforcement learning.
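
To give a flavour of those foundations, the temporal-difference update central to much of their work nudges a value estimate toward a bootstrapped target:

\[ V(s_t) \leftarrow V(s_t) + \alpha \left[ r_{t+1} + \gamma V(s_{t+1}) - V(s_t) \right], \]

where \(\alpha\) is a step size and \(\gamma\) a discount factor; the bracketed mismatch is the TD error that drives learning.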

The Turing Award comes with a $1 million prize, to be split between the recipients. Since its inception in 1966, the award has honoured computer scientists and engineers on a yearly basis. The prize was last given for AI research in 2018, when Yoshua Bengio, Yann LeCun and Geoffrey Hinton were recognised for their contribution to the field of deep neural networks.

Andrew Barto is Professor Emeritus, Department of Information and Computer Sciences, University of Massachusetts, Amherst. He began his career at UMass Amherst as a postdoctoral Research Associate in 1977, and has subsequently held various positions including Associate Professor, Professor, and Department Chair. Barto received a BS degree in Mathematics (with distinction) from the University of Michigan, where he also earned his MS and PhD degrees in Computer and Communication Sciences.

Richard Sutton is a Professor in Computing Science at the University of Alberta, a Research Scientist at Keen Technologies (an artificial general intelligence company based in Dallas, Texas) and Chief Scientific Advisor of the Alberta Machine Intelligence Institute (Amii). Sutton was a Distinguished Research Scientist at DeepMind from 2017 to 2023. Prior to joining the University of Alberta, he served as a Principal Technical Staff Member in the Artificial Intelligence Department at the AT&T Shannon Laboratory in Florham Park, New Jersey, from 1998 to 2002. Sutton received his BA in Psychology from Stanford University and earned his MS and PhD degrees in Computer and Information Science from the University of Massachusetts at Amherst.

The two researchers began collaborating in 1978, at the University of Massachusetts at Amherst, where Barto was Sutton’s PhD and postdoctoral advisor.

Find out more

Stuart J. Russell wins 2025 AAAI Award for Artificial Intelligence for the Benefit of Humanity

The AAAI Award for Artificial Intelligence for the Benefit of Humanity recognizes positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways with long-lived effects. The award is given annually at the conference of the Association for the Advancement of Artificial Intelligence (AAAI).

This year, the AAAI Awards Committee has announced that the 2025 recipient of the award and $25,000 prize is Stuart J. Russell, “for his work on the conceptual and theoretical foundations of provably beneficial AI and his leadership in creating the field of AI safety”.

Stuart will give an invited talk at AAAI 2025 entitled “Can AI Benefit Humanity?”

About Stuart

Stuart J. Russell is a Distinguished Professor of Computer Science at the University of California, Berkeley, and holds the Michael H. Smith and Lotfi A. Zadeh Chair in Engineering. He is also a Distinguished Professor of Computational Precision Health at UCSF. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and philosophical foundations. He has also worked with the United Nations to create a new global seismic monitoring system for the Comprehensive Nuclear-Test-Ban Treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

Read our content featuring previous winners of the award

Online hands-on science communication training – sign up here!

On Friday 22 November, the IEEE Robotics and Automation Society will be hosting an online science communication training session for robotics and AI researchers. The tutorial will introduce you to science communication and help you create your own story through hands-on activities.

Date: 22 November 2024
Time: 10:00 – 13:00 EST (07:00 – 10:00 PST, 15:00 – 18:00 GMT, 16:00 – 19:00 CET)
Location: Online – worldwide
Registration
Website

Science communication is essential. It helps demystify robotics and AI for a broad range of people including policy makers, business leaders, and the public. As a researcher, mastering this skill can not only enhance your communication abilities but also expand your network and increase the visibility and impact of your work.

In this three-hour session, leading science communicators in robotics and AI will teach you how to clearly and concisely explain your research to non-specialists. You’ll learn how to avoid hype, how to find suitable images and videos to illustrate your work, and where to start with social media. We’ll hear from a leading robotics journalist on how to deal with media and how to get your story out to a wider audience.

This is a hands-on session with exercises for you to take part in throughout the course. Therefore, please come prepared with an idea about a piece of research you’d like to communicate about.

Agenda

Part 1: How to communicate your work to a broader audience

  • The importance of science communication
  • How to produce a short summary of your research for communication via social media channels
  • How to expand your outline to write a complete blog post
  • How to find and use suitable images
  • How to avoid hype when communicating your research
  • Unconventional ways of doing science communication

Part 2: How to make videos about your robots

  • The value of video
  • Tips on making a video

Part 3: Working with media

  • Why bother talking to media anyway?
  • How media works and what it’s good and bad at
  • How to pitch media a story
  • How to work with your press office

Speakers:
Sabine Hauert, Professor of Swarm Engineering, Executive Trustee AIhub / Robohub
Lucy Smith, Senior Managing Editor AIhub / Robohub
Laura Bridgeman, Audience Development Manager IEEE Spectrum
Evan Ackerman, Senior Editor IEEE Spectrum

Sign up here.
