
Congratulations to the #AAMAS2025 best paper, best demo, and distinguished dissertation award winners

The AAMAS 2025 best paper and demo awards were presented at the 24th International Conference on Autonomous Agents and Multiagent Systems, which took place from 19-23 May 2025 in Detroit. The Distinguished Dissertation Award was also recently announced. The winners in the various categories are as follows:


Best Paper Award

Winner

  • Soft Condorcet Optimization for Ranking of General Agents, Marc Lanctot, Kate Larson, Michael Kaisers, Quentin Berthet, Ian Gemp, Manfred Diaz, Roberto-Rafael Maura-Rivero, Yoram Bachrach, Anna Koop, Doina Precup

Finalists

  • Azorus: Commitments over Protocols for BDI Agents, Amit K. Chopra, Matteo Baldoni, Samuel H. Christie V, Munindar P. Singh
  • Curiosity-Driven Partner Selection Accelerates Convention Emergence in Language Games, Chin-Wing Leung, Paolo Turrini, Ann Nowe
  • Reinforcement Learning-based Approach for Vehicle-to-Building Charging with Heterogeneous Agents and Long Term Rewards, Fangqi Liu, Rishav Sen, Jose Paolo Talusan, Ava Pettet, Aaron Kandel, Yoshinori Suzue, Ayan Mukhopadhyay, Abhishek Dubey
  • Ready, Bid, Go! On-Demand Delivery Using Fleets of Drones with Unknown, Heterogeneous Energy Storage Constraints, Mohamed S. Talamali, Genki Miyauchi, Thomas Watteyne, Micael Santos Couceiro, Roderich Gross

Pragnesh Jay Modi Best Student Paper Award

Winners

  • Decentralized Planning Using Probabilistic Hyperproperties, Francesco Pontiggia, Filip Macák, Roman Andriushchenko, Michele Chiari, Milan Ceska
  • Large Language Models for Virtual Human Gesture Selection, Parisa Ghanad Torshizi, Laura B. Hensel, Ari Shapiro, Stacy Marsella

Runner-up

  • ReSCOM: Reward-Shaped Curriculum for Efficient Multi-Agent Communication Learning, Xinghai Wei, Tingting Yuan, Jie Yuan, Dongxiao Liu, Xiaoming Fu

Finalists

  • Explaining Facial Expression Recognition, Sanjeev Nahulanthran, Leimin Tian, Dana Kulic, Mor Vered
  • Agent-Based Analysis of Green Disclosure Policies and Their Market-Wide Impact on Firm Behavior, Lingxiao Zhao, Maria Polukarov, Carmine Ventre

Blue Sky Ideas Track Best Paper Award

Winner

  • Grounding Agent Reasoning in Image Schemas: A Neurosymbolic Approach to Embodied Cognition, François Olivier, Zied Bouraoui

Finalist

  • Towards Foundation-model-based multiagent system to Accelerate AI for social impact, Yunfan Zhao, Niclas Boehmer, Aparna Taneja, Milind Tambe

Best Demo Award

Winner

  • Serious Games for Ethical Preference Elicitation, Jayati Deshmukh, Zijie Liang, Vahid Yazdanpanah, Sebastian Stein, Sarvapali Ramchurn

Victor Lesser Distinguished Dissertation Award

The Victor Lesser Distinguished Dissertation Award is given for dissertations in the field of autonomous agents and multiagent systems that show originality, depth, impact, as well as quality of writing, supported by high-quality publications.

Winner

  • Jannik Peters. Thesis title: Facets of Proportionality: Selecting Committees, Budgets, and Clusters

Runner-up

  • Lily Xu. Thesis title: High-stakes decisions from low-quality data: AI decision-making for planetary health

Congratulations to the #ICRA2025 best paper award winners

The 2025 IEEE International Conference on Robotics and Automation (ICRA) best paper winners and finalists in the various categories have been announced. The recipients were revealed during an award ceremony at the conference, which took place from 19-23 May in Atlanta, USA.


IEEE ICRA Best Paper Award on Robot Learning

Winner

  • *Robo-DM: Data Management for Large Robot Datasets, Kaiyuan Chen, Letian Fu, David Huang, Yanxiang Zhang, Yunliang Lawrence Chen, Huang Huang, Kush Hari, Ashwin Balakrishna, Ted Xiao, Pannag Sanketi, John Kubiatowicz, Ken Goldberg

Finalists

  • Achieving Human Level Competitive Robot Table Tennis, David D’Ambrosio, Saminda Wishwajith Abeyruwan, Laura Graesser, Atil Iscen, Heni Ben Amor, Alex Bewley, Barney J. Reed, Krista Reymann, Leila Takayama, Yuval Tassa, Krzysztof Choromanski, Erwin Coumans, Deepali Jain, Navdeep Jaitly, Natasha Jaques, Satoshi Kataoka, Yuheng Kuang, Nevena Lazic, Reza Mahjourian, Sherry Moore, Kenneth Oslund, Anish Shankar, Vikas Sindhwani, Vincent Vanhoucke, Grace Vesom, Peng Xu, Pannag Sanketi
  • *No Plan but Everything under Control: Robustly Solving Sequential Tasks with Dynamically Composed Gradient Descent, Vito Mengers, Oliver Brock

IEEE ICRA Best Paper Award in Field and Service Robotics

Winner

  • *PolyTouch: A Robust Multi-Modal Tactile Sensor for Contact-Rich Manipulation Using Tactile-Diffusion Policies, Jialiang Zhao, Naveen Kuppuswamy, Siyuan Feng, Benjamin Burchfiel, Edward Adelson

Finalists

  • A New Stereo Fisheye Event Camera for Fast Drone Detection and Tracking, Daniel Rodrigues Da Costa, Maxime Robic, Pascal Vasseur, Fabio Morbidi
  • *Learning-Based Adaptive Navigation for Scalar Field Mapping and Feature Tracking, Jose Fuentes, Paulo Padrao, Abdullah Al Redwan Newaz, Leonardo Bobadilla

IEEE ICRA Best Paper Award on Human-Robot Interaction

Winner

  • *Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition, Shengcheng Luo, Quanquan Peng, Jun Lv, Kaiwen Hong, Katherine Driggs-Campbell, Cewu Lu, Yong-Lu Li

Finalists

  • *To Ask or Not to Ask: Human-In-The-Loop Contextual Bandits with Applications in Robot-Assisted Feeding, Rohan Banerjee, Rajat Kumar Jenamani, Sidharth Vasudev, Amal Nanavati, Katherine Dimitropoulou, Sarah Dean, Tapomayukh Bhattacharjee
  • *Point and Go: Intuitive Reference Frame Reallocation in Mode Switching for Assistive Robotics, Allie Wang, Chen Jiang, Michael Przystupa, Justin Valentine, Martin Jagersand

IEEE ICRA Best Paper Award on Mechanisms and Design

Winner

  • Individual and Collective Behaviors in Soft Robot Worms Inspired by Living Worm Blobs, Carina Kaeser, Junghan Kwon, Elio Challita, Harry Tuazon, Robert Wood, Saad Bhamla, Justin Werfel

Finalists

  • *Informed Repurposing of Quadruped Legs for New Tasks, Fuchen Chen, Daniel Aukes
  • *Intelligent Self-Healing Artificial Muscle: Mechanisms for Damage Detection and Autonomous, Ethan Krings, Patrick Mcmanigal, Eric Markvicka

IEEE ICRA Best Paper Award on Planning and Control

Winner

  • *No Plan but Everything under Control: Robustly Solving Sequential Tasks with Dynamically Composed Gradient Descent, Vito Mengers, Oliver Brock

Finalists

  • *SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models, Yi Wu, Zikang Xiong, Yiran Hu, Shreyash Sridhar Iyengar, Nan Jiang, Aniket Bera, Lin Tan, Suresh Jagannathan
  • *Marginalizing and Conditioning Gaussians Onto Linear Approximations of Smooth Manifolds with Applications in Robotics, Zi Cong Guo, James Richard Forbes, Timothy Barfoot

IEEE ICRA Best Paper Award in Robot Perception

Winner

  • *MAC-VO: Metrics-Aware Covariance for Learning-Based Stereo Visual Odometry, Yuheng Qiu, Yutian Chen, Zihao Zhang, Wenshan Wang, Sebastian Scherer

Finalists

  • *Ground-Optimized 4D Radar-Inertial Odometry Via Continuous Velocity Integration Using Gaussian Process, Wooseong Yang, Hyesu Jang, Ayoung Kim
  • *UAD: Unsupervised Affordance Distillation for Generalization in Robotic Manipulation, Yihe Tang, Wenlong Huang, Yingke Wang, Chengshu Li, Roy Yuan, Ruohan Zhang, Jiajun Wu, Li Fei-Fei

IEEE ICRA Best Paper Award in Robot Manipulation and Locomotion

Winner

  • *D(R, O) Grasp: A Unified Representation of Robot and Object Interaction for Cross-Embodiment Dexterous Grasping, Zhenyu Wei, Zhixuan Xu, Jingxiang Guo, Yiwen Hou, Chongkai Gao, Zhehao Cai, Jiayu Luo, Lin Shao

Finalists

  • *Full-Order Sampling-Based MPC for Torque-Level Locomotion Control Via Diffusion-Style Annealing, Haoru Xue, Chaoyi Pan, Zeji Yi, Guannan Qu, Guanya Shi
  • *TrofyBot: A Transformable Rolling and Flying Robot with High Energy Efficiency, Mingwei Lai, Yuqian Ye, Hanyu Wu, Chice Xuan, Ruibin Zhang, Qiuyu Ren, Chao Xu, Fei Gao, Yanjun Cao

IEEE ICRA Best Paper Award in Automation

Winner

  • *Physics-Aware Robotic Palletization with Online Masking Inference, Tianqi Zhang, Zheng Wu, Yuxin Chen, Yixiao Wang, Boyuan Liang, Scott Moura, Masayoshi Tomizuka, Mingyu Ding, Wei Zhan

Finalists

  • *In-Plane Manipulation of Soft Micro-Fiber with Ultrasonic Transducer Array and Microscope, Jieyun Zou, Siyuan An, Mingyue Wang, Jiaqi Li, Yalin Shi, You-Fu Li, Song Liu
  • *A Complete and Bounded-Suboptimal Algorithm for a Moving Target Traveling Salesman Problem with Obstacles in 3D, Anoop Bhat, Geordan Gutow, Bhaskar Vundurthy, Zhongqiang Ren, Sivakumar Rathinam, Howie Choset

IEEE ICRA Best Paper Award in Medical Robotics

Winner

  • *In-Vivo Tendon-Driven Rodent Ankle Exoskeleton System for Sensorimotor Rehabilitation, Juwan Han, Seunghyeon Park, Keehon Kim

Finalists

  • *Image-Based Compliance Control for Robotic Steering of a Ferromagnetic Guidewire, An Hu, Chen Sun, Adam Dmytriw, Nan Xiao, Yu Sun
  • *AutoPeel: Adhesion-Aware Safe Peeling Trajectory Optimization for Robotic Wound Care, Xiao Liang, Youcheng Zhang, Fei Liu, Florian Richter, Michael C. Yip

IEEE ICRA Best Paper Award on Multi-Robot Systems

Winner

  • *Deploying Ten Thousand Robots: Scalable Imitation Learning for Lifelong Multi-Agent Path Finding, He Jiang, Yutong Wang, Rishi Veerapaneni, Tanishq Harish Duhan, Guillaume Adrien Sartoretti, Jiaoyang Li

Finalists

  • Distributed Multi-Robot Source Seeking in Unknown Environments with Unknown Number of Sources, Lingpeng Chen, Siva Kailas, Srujan Deolasee, Wenhao Luo, Katia Sycara, Woojun Kim
  • *Multi-Nonholonomic Robot Object Transportation with Obstacle Crossing Using a Deformable Sheet, Weijian Zhang, Charlie Street, Masoumeh Mansouri

IEEE ICRA Best Conference Paper Award

Winners

  • *Marginalizing and Conditioning Gaussians Onto Linear Approximations of Smooth Manifolds with Applications in Robotics, Zi Cong Guo, James Richard Forbes, Timothy Barfoot
  • *MAC-VO: Metrics-Aware Covariance for Learning-Based Stereo Visual Odometry, Yuheng Qiu, Yutian Chen, Zihao Zhang, Wenshan Wang, Sebastian Scherer

In addition to the papers listed above, the following papers were also finalists for the IEEE ICRA Best Conference Paper Award.

Finalists

  • *MiniVLN: Efficient Vision-And-Language Navigation by Progressive Knowledge Distillation, Junyou Zhu, Yanyuan Qiao, Siqi Zhang, Xingjian He, Qi Wu, Jing Liu
  • *RoboCrowd: Scaling Robot Data Collection through Crowdsourcing, Suvir Mirchandani, David D. Yuan, Kaylee Burns, Md Sazzad Islam, Zihao Zhao, Chelsea Finn, Dorsa Sadigh
  • How Sound-Based Robot Communication Impacts Perceptions of Robotic Failure, Jai’La Lee Crider, Rhian Preston, Naomi T. Fitter
  • *Obstacle-Avoidant Leader Following with a Quadruped Robot, Carmen Scheidemann, Lennart Werner, Victor Reijgwart, Andrei Cramariuc, Joris Chomarat, Jia-Ruei Chiu, Roland Siegwart, Marco Hutter
  • *Dynamic Tube MPC: Learning Error Dynamics with Massively Parallel Simulation for Robust Safety in Practice, William Compton, Noel Csomay-Shanklin, Cole Johnson, Aaron Ames
  • *Bat-VUFN: Bat-Inspired Visual-And-Ultrasound Fusion Network for Robust Perception in Adverse Conditions, Gyeongrok Lim, Jeong-ui Hong, Min Hyeon Bae
  • *TinySense: A Lighter Weight and More Power-Efficient Avionics System for Flying Insect-Scale Robots, Zhitao Yu, Josh Tran, Claire Li, Aaron Weber, Yash P. Talwekar, Sawyer Fuller
  • *TSCLIP: Robust CLIP Fine-Tuning for Worldwide Cross-Regional Traffic Sign Recognition, Guoyang Zhao, Fulong Ma, Weiqing Qi, Chenguang Zhang, Yuxuan Liu, Ming Liu, Jun Ma
  • *Geometric Design and Gait Co-Optimization for Soft Continuum Robots Swimming at Low and High Reynolds Numbers, Yanhao Yang, Ross Hatton
  • *ShadowTac: Dense Measurement of Shear and Normal Deformation of a Tactile Membrane from Colored Shadows, Giuseppe Vitrani, Basile Pasquale, Michael Wiertlewski
  • *Occlusion-aware 6D Pose Estimation with Depth-guided Graph Encoding and Cross-semantic Fusion for Robotic Grasping, Jingyang Liu, Zhenyu Lu, Lu Chen, Jing Yang, Chenguang Yang
  • *Stable Tracking of Eye Gaze Direction During Ophthalmic Surgery, Tinghe Hong, Shenlin Cai, Boyang Li, Kai Huang
  • *Configuration-Adaptive Visual Relative Localization for Spherical Modular Self-Reconfigurable Robots, Yuming Liu, Qiu Zheng, Yuxiao Tu, Yuan Gao, Guanqi Liang, Tin Lun Lam
  • *Realm: Real-Time Line-Of-Sight Maintenance in Multi-Robot Navigation with Unknown Obstacles, Ruofei Bai, Shenghai Yuan, Kun Li, Hongliang Guo, Wei-Yun Yau, Lihua Xie

IEEE ICRA Best Student Paper Award

Winners

  • *Deploying Ten Thousand Robots: Scalable Imitation Learning for Lifelong Multi-Agent Path Finding, He Jiang, Yutong Wang, Rishi Veerapaneni, Tanishq Harish Duhan, Guillaume Adrien Sartoretti, Jiaoyang Li
  • *ShadowTac: Dense Measurement of Shear and Normal Deformation of a Tactile Membrane from Colored Shadows, Giuseppe Vitrani, Basile Pasquale, Michael Wiertlewski
  • *Point and Go: Intuitive Reference Frame Reallocation in Mode Switching for Assistive Robotics, Allie Wang, Chen Jiang, Michael Przystupa, Justin Valentine, Martin Jagersand
  • *TinySense: A Lighter Weight and More Power-Efficient Avionics System for Flying Insect-Scale Robots, Zhitao Yu, Josh Tran, Claire Li, Aaron Weber, Yash P. Talwekar, Sawyer Fuller

Note: papers with an * were eligible for the IEEE ICRA Best Student Paper Award.


#ICRA2025 social media round-up

The 2025 IEEE International Conference on Robotics & Automation (ICRA) took place from 19–23 May, in Atlanta, USA. The event featured plenary and keynote sessions, tutorials and workshops, forums, and a community day. Find out what the participants got up to during the conference.

#ICRA #ICRA2025 #RoboticsInAfrica

— Black in Robotics (@blackinrobotics.bsky.social) 18 May 2025 at 23:22

At #ICRA2025? Check out my student Yi Wu’s talk (TuCT1.4) at 3:30PM Tuesday in Room 302 at the Award Finalists 3 Session about how SELP Generates Safe and Efficient Plans for #Robot #Agents with #LLMs! #ConstrainedDecoding #LLMPlanner
@purduecs.bsky.social
@cerias.bsky.social

— Lin Tan (@lin-tan.bsky.social) 19 May 2025 at 13:25

Malte Mosbach will present today 16:45 at #ICRA2025 in room 404 our paper:
"Prompt-responsive Object Retrieval with Memory-augmented Student-Teacher Learning"
www.ais.uni-bonn.de/videos/ICRA_…

— Sven Behnke (@sven-behnke.bsky.social) 20 May 2025 at 15:57

I will present our work on air-ground collaboration with SPOMP in 407A in a few minutes! We deployed 1 UAV and 3 UGVs in a fully autonomous mapping mission in large-scale environments. Come check it out! #ICRA2025 @grasplab.bsky.social

— Fernando Cladera (@fcladera.bsky.social) 21 May 2025 at 20:13

Cool things happening at #ICRA2025
RoboRacers gearing up for their qualifiers

— Ameya Salvi (@ameyasalvi.bsky.social) 21 May 2025 at 13:56

What’s coming up at #ICRA2025?


The 2025 IEEE International Conference on Robotics and Automation (ICRA) will take place from 19-23 May, in Atlanta, USA. The event will feature plenary talks, technical sessions, posters, workshops and tutorials, forums, and a science communication short course.

Plenary speakers

There are three plenary sessions this year. The speakers are as follows:

  • Allison Okamura (Stanford University) – Rewired: The Interplay of Robots and Society
  • Tessa Lau (Dusty Robotics) – So you want to build a robot company?
  • Raffaello (Raff) D’Andrea (ETH Zurich) – Models are dead, long live models!

Keynote sessions

Tuesday 20, Wednesday 21 and Thursday 22 will see a total of 12 keynote sessions. The featured topics and speakers are:

  • Rehabilitation & Physically Assistive Systems
    • Brenna Argall
    • Robert Gregg
    • Keehoon Kim
    • Christina Piazza
  • Optimization & Control
    • Todd Murphey
    • Angela Schoellig
    • Jana Tumova
    • Ram Vasudevan
  • Human Robot Interaction
    • Sonia Chernova
    • Dongheui Lee
    • Harold Soh
    • Holly Yanco
  • Soft Robotics
    • Robert Katzschmann
    • Hugo Rodrigue
    • Cynthia Sung
    • Wenzhen Yuan
  • Field Robotics
    • Margarita Chli
    • Tobias Fischer
    • Joshua Mangelson
    • Inna Sharf
  • Bio-inspired Robotics
    • Kyujin Cho
    • Dario Floreano
    • Talia Moore
    • Yasemin Ozkan-Aydin
  • Haptics
    • Jeremy Brown
    • Matej Hoffman
    • Tania Morimoto
    • Jee-Hwan Ryu
  • Planning
    • Hanna Kurniawati
    • Jen Jen Chung
    • Dan Halperin
    • Jing Xiao
  • Manipulation
    • Tamim Asfour
    • Yasuhisa Hasegawa
    • Alberto Rodriguez
    • Shuran Song
  • Locomotion
    • Sarah Bergbreiter
    • Cosimo Della Santina
    • Hae-Won Park
    • Ludovic Righetti
  • Safety & Formal Methods
    • Chuchu Fan
    • Meng Guo
    • Changliu Liu
    • Pian Yu
  • Multi-robot Systems
    • Sabine Hauert
    • Dimitra Panagou
    • Alyssa Pierson
    • Fumin Zhang

Science communication training

Join Sabine Hauert, Evan Ackerman and Laura Bridgeman for a crash course on science communication. In this concise tutorial, you will learn how to share your work with a broader audience. This session will take place on 22 May, 11:00 – 12:15.

Workshops and tutorials

The programme of workshops and tutorials will take place on Monday 19 May and Friday 23 May. There are 59 events to choose from, and you can see the full list here.

Forums

There will be three forums as part of the programme, one each on Tuesday 20, Wednesday 21 and Thursday 22.

Community building day

Wednesday 21 May is community building day, with six events planned.

Other events

You can find out more about the other sessions and events at the links below:

Multi-agent path finding in continuous environments

By Kristýna Janovská and Pavel Surynek

Imagine if all of our cars could drive themselves – autonomous driving is becoming possible, but to what extent? Getting a vehicle somewhere by itself may not seem so tricky if the route is clear and well defined, but what if there are more cars, each trying to get to a different place? And what if we add pedestrians, animals and other unaccounted-for elements? This problem has been studied increasingly in recent years, and is already used in scenarios such as warehouse logistics, where a group of robots move boxes in a warehouse, each with its own goal, but all moving while making sure not to collide and keeping their routes – paths – as short as possible. But how do we formalize such a problem? The answer is MAPF – multi-agent path finding [Silver, 2005].

Multi-agent path finding describes a problem where we have a group of agents – robots, vehicles or even people – who are each trying to get from their starting positions to their goal positions all at once without ever colliding (being in the same position at the same time).

Typically, this problem has been solved on graphs. Graphs are structures that are able to simplify an environment using its focal points and interconnections between them. These points are called vertices and can represent, for example, coordinates. They are connected by edges, which connect neighbouring vertices and represent distances between them.

If, however, we are trying to solve a real-life scenario, we strive to get as close to simulating reality as possible. Therefore, a discrete representation (using a finite number of vertices) may not suffice. But how do we search an environment that is continuous, that is, one with essentially an infinite number of vertices connected by edges of infinitely small size?

This is where sampling-based algorithms come into play. Algorithms such as RRT* [Karaman and Frazzoli, 2011], which we used in our work, randomly select (sample) coordinates in our coordinate space and use them as vertices. The more points that are sampled, the more accurate the representation of the environment becomes. Each newly sampled vertex is connected to the existing neighbour that minimizes the length of the path from the starting point to the new point, where the length of a path – a sequence of vertices – is measured as the sum of the lengths of the edges between them.
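To give a rough feel for how sampling-based growth works, here is a heavily simplified Python sketch (not the RRT* implementation from the paper): it omits collision checking and RRT*'s rewiring step, and simply connects each new sample to whichever existing vertex gives the cheapest path from the start. All names and parameters are illustrative.

```python
import math
import random

def path_cost(tree, idx):
    """Sum of edge lengths from the start vertex (index 0) to vertex idx."""
    cost = 0.0
    while idx != 0:
        point, parent = tree[idx]
        cost += math.dist(point, tree[parent][0])
        idx = parent
    return cost

def grow_tree(start, goal, n_samples=500, goal_tol=0.5):
    """Heavily simplified RRT*-style growth: every new sample is connected to
    the existing vertex that gives the cheapest path back to the start."""
    tree = [(start, 0)]  # list of (point, parent_index); the root is its own parent
    for _ in range(n_samples):
        p = (random.uniform(0, 10), random.uniform(0, 10))  # sample a coordinate
        best = min(range(len(tree)),
                   key=lambda i: path_cost(tree, i) + math.dist(tree[i][0], p))
        tree.append((p, best))
        if math.dist(p, goal) < goal_tol:  # stop once a sample lands near the goal
            break
    return tree

tree = grow_tree(start=(0.0, 0.0), goal=(9.0, 9.0))
print(f"tree has {len(tree)} vertices")
```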

Figure 1: Two examples of paths connecting starting positions (blue) and goal positions (green) of three agents. Once an obstacle is present, agents plan smooth curved paths around it, successfully avoiding both the obstacle and each other.

We can get a close-to-optimal path this way, though there is still one problem. Paths created this way are somewhat bumpy, as the transition between different segments of a path is sharp. If a vehicle were to take this path, it would have to turn on the spot each time it reaches the end of a segment, as some robotic vacuum cleaners do when moving around. This slows the vehicle or robot down significantly. We can solve this by taking these paths and smoothing them, so that the transitions are no longer sharp corners but smooth curves. This way, robots or vehicles can travel along them without stopping or slowing down significantly whenever they need to turn.

Our paper [Janovská and Surynek, 2024] proposed a method for multi-agent path finding in continuous environments, where agents move on sets of smooth paths without colliding. Our algorithm is inspired by Conflict-Based Search (CBS) [Sharon et al., 2014]. Our extension into a continuous space, called Continuous-Environment Conflict-Based Search (CE-CBS), works on two levels:

Figure 2: Comparison of paths found with the discrete CBS algorithm on a 2D grid (left) and CE-CBS paths in a continuous version of the same environment (right). Three agents move from blue starting points to green goal points. These experiments were performed in the Robotic Agents Laboratory at the Faculty of Information Technology of the Czech Technical University in Prague.

Firstly, each agent searches for a path individually. This is done with the RRT* algorithm, as mentioned above. The resulting path is then smoothed using B-spline curves – piecewise polynomial curves fitted to the vertices of the path. This removes sharp turns and makes the path easier to traverse for a physical agent.
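As a hedged illustration of the smoothing step, the snippet below fits a cubic B-spline through a made-up jagged path using SciPy; it is not the authors' code, and the waypoints and smoothing factor are arbitrary.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# a jagged path returned by the sampling-based planner (illustrative waypoints)
path = np.array([[0.0, 0.0], [1.0, 2.0], [2.5, 2.2], [4.0, 4.5], [6.0, 5.0]])

# fit a cubic B-spline through the waypoints; s > 0 trades fidelity for smoothness
tck, _ = splprep([path[:, 0], path[:, 1]], s=0.5, k=3)

# resample the smooth curve densely so an agent can follow it without sharp turns
u = np.linspace(0.0, 1.0, 100)
xs, ys = splev(u, tck)
smooth_path = np.column_stack([xs, ys])
print(smooth_path[:3])
```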

Individual paths are then sent to the higher level of the algorithm, where paths are compared and conflicts are found. A conflict arises if two agents (which are represented as rigid circular bodies) overlap at any given time. If so, constraints are created that forbid one of the agents from passing through the conflicting space during the time interval in which it was previously present there. Both options – constraining one agent or the other – are tried: a tree of possible constraint settings and their solutions is constructed and expanded with each conflict found. When a new constraint is added, this information is passed to all agents it concerns and their paths are re-planned so that they avoid the constrained time and space. The paths are then checked again for validity, and this repeats until a conflict-free solution, which aims to be as short as possible, is found.
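To make the conflict check concrete, here is a minimal sketch of how two circular agents' timed paths could be tested for overlap by sampling time; the agent radius, time step and path format are assumptions made for illustration, not details from the paper.

```python
import math

def position_at(path, t):
    """Linearly interpolate an agent's position at time t along a timed path,
    where path is a list of (time, (x, y)) waypoints sorted by time."""
    for (t0, p0), (t1, p1) in zip(path, path[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (p0[0] + a * (p1[0] - p0[0]), p0[1] + a * (p1[1] - p0[1]))
    return path[-1][1]  # after its last waypoint the agent waits at its goal

def first_conflict(path_a, path_b, radius=0.5, dt=0.1):
    """Return the first sampled time at which two circular agents overlap, or None."""
    t, t_end = 0.0, max(path_a[-1][0], path_b[-1][0])
    while t <= t_end:
        if math.dist(position_at(path_a, t), position_at(path_b, t)) < 2 * radius:
            return t
        t += dt
    return None

# two illustrative timed paths that cross in the middle of the workspace
a = [(0.0, (0.0, 0.0)), (5.0, (5.0, 5.0))]
b = [(0.0, (5.0, 0.0)), (5.0, (0.0, 5.0))]
print(first_conflict(a, b))  # prints a time near the crossing point
```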

This way, agents can effectively move without losing speed while turning and without colliding with each other. Although there are environments such as narrow hallways where slowing down or even stopping may be necessary for agents to safely pass, CE-CBS finds solutions in most environments.

This research is supported by the Czech Science Foundation, 22-31346S.

You can read our paper here.

References

Interview with Yuki Mitsufuji: Improving AI image generation


Yuki Mitsufuji is a Lead Research Scientist at Sony AI. Yuki and his team presented two papers at the recent Conference on Neural Information Processing Systems (NeurIPS 2024). These works tackle different aspects of image generation and are entitled: GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping and PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher. We caught up with Yuki to find out more about this research.

There are two pieces of research we’d like to ask you about today. Could we start with the GenWarp paper? Could you outline the problem that you were focused on in this work?

The problem we aimed to solve is called single-shot novel view synthesis, which is where you have one image and want to create another image of the same scene from a different camera angle. There has been a lot of work in this space, but a major challenge remains: when an image angle changes substantially, the image quality degrades significantly. We wanted to be able to generate a new image based on a single given image, as well as improve the quality, even in very challenging angle change settings.

How did you go about solving this problem – what was your methodology?

The existing works in this space tend to take advantage of monocular depth estimation, which means only a single image is used to estimate depth. This depth information enables us to change the angle and change the image according to that angle – we call it “warp.” Of course, there will be some occluded parts in the image, and information will be missing from the original image on how to create the image from a new angle. Therefore, there is always a second phase where another module interpolates the occluded region. Because of this two-phase approach, in the existing work in this area, geometrical errors introduced in warping cannot be compensated for in the interpolation phase.
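For readers who want a concrete picture of the “warp” phase described here, the sketch below forward-projects the pixels of one view into a new camera pose using a depth map and camera intrinsics. It is a generic numpy illustration with made-up values, not the GenWarp implementation.

```python
import numpy as np

def warp_pixels(depth, K, R, t):
    """Forward-project every pixel of one view into a new camera pose.
    depth: HxW depth map, K: 3x3 intrinsics, (R, t): pose of the new view."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)   # unproject to 3D using the depth
    pts_new = R @ pts + t.reshape(3, 1)                   # move into the new camera frame
    proj = K @ pts_new                                     # reproject onto the new image plane
    return (proj[:2] / proj[2:]).T.reshape(h, w, 2)        # target pixel coordinates

# toy example: a 4x4 image at constant depth, with the camera shifted slightly sideways
K = np.array([[100.0, 0.0, 2.0], [0.0, 100.0, 2.0], [0.0, 0.0, 1.0]])
depth = np.full((4, 4), 2.0)
coords = warp_pixels(depth, K, np.eye(3), np.array([0.1, 0.0, 0.0]))
print(coords[0, 0])
```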

We solve this problem by fusing everything together. We don’t go for a two-phase approach, but do it all at once in a single diffusion model. To preserve the semantic meaning of the image, we created another neural network that can extract the semantic information from a given image as well as monocular depth information. We inject it into the main base diffusion model using a cross-attention mechanism. Since the warping and interpolation are done in one model, and the occluded part can be reconstructed very well together with the semantic information injected from outside, we saw that the overall quality improved. We saw improvements in image quality both subjectively and objectively, using metrics such as FID and PSNR.

Can people see some of the images created using GenWarp?

Yes, we actually have a demo, which consists of two parts. One shows the original image and the other shows the warped images from different angles.

Moving on to the PaGoDA paper, here you were addressing the high computational cost of diffusion models? How did you go about addressing that problem?

Diffusion models are very popular, but it’s well-known that they are very costly for training and inference. We address this issue by proposing PaGoDA, our model which addresses both training efficiency and inference efficiency.

It’s easy to talk about inference efficiency, which directly connects to the speed of generation. Diffusion usually takes a lot of iterative steps towards the final generated output – our goal was to skip these steps so that we could quickly generate an image in just one step. People call it “one-step generation” or “one-step diffusion.” It doesn’t always have to be one step; it could be two or three steps, for example, “few-step diffusion”. Basically, the target is to solve the bottleneck of diffusion, which is a time-consuming, multi-step iterative generation method.

In diffusion models, generating an output is typically a slow process, requiring many iterative steps to produce the final result. A key trend in advancing these models is training a “student model” that distills knowledge from a pre-trained diffusion model. This allows for faster generation—sometimes producing an image in just one step. These are often referred to as distilled diffusion models. Distillation means that, given a teacher (a diffusion model), we use this information to train another one-step efficient model. We call it distillation because we can distill the information from the original model, which has vast knowledge about generating good images.
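As a rough, conceptual illustration of distillation (deliberately not PaGoDA itself), the toy PyTorch sketch below trains a one-step “student” to match the output of a slow, iterative “teacher”; both networks and all hyperparameters are made up purely for illustration.

```python
import torch
import torch.nn as nn

# toy "teacher" standing in for a pre-trained iterative diffusion model, and a
# toy one-step "student"; both are tiny MLPs on 2-D data purely for illustration
teacher = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
student = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def teacher_sample(z, steps=50):
    """Stand-in for the slow, multi-step teacher: repeatedly refine the sample."""
    x = z
    with torch.no_grad():
        for _ in range(steps):
            x = x + 0.02 * teacher(x)
    return x

for step in range(200):
    z = torch.randn(128, 2)                 # random noise, as in diffusion sampling
    target = teacher_sample(z)              # expensive multi-step teacher output
    loss = nn.functional.mse_loss(student(z), target)  # student matches it in one step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```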

However, both classic diffusion models and their distilled counterparts are usually tied to a fixed image resolution. This means that if we want a higher-resolution distilled diffusion model capable of one-step generation, we would need to retrain the diffusion model and then distill it again at the desired resolution.

This makes the entire pipeline of training and generation quite tedious. Each time a higher resolution is needed, we have to retrain the diffusion model from scratch and go through the distillation process again, adding significant complexity and time to the workflow.

The uniqueness of PaGoDA is that we train across different resolution models in one system, which allows it to achieve one-step generation, making the workflow much more efficient.

For example, if we want to distill a model for images of 128×128, we can do that. But if we want to do it for another scale, 256×256 let’s say, then we would need a teacher trained on 256×256. If we want to extend it even further to higher resolutions, then we need to do this multiple times. This can be very costly, so to avoid it, we use the idea of progressive growing training, which has already been studied in the area of generative adversarial networks (GANs), but not so much in the diffusion space. The idea is that, given a teacher diffusion model trained on 64×64, we can distill information and train a one-step model for any resolution. For many resolutions we can achieve state-of-the-art performance using PaGoDA.

Could you give a rough idea of the difference in computational cost between your method and standard diffusion models? What kind of saving do you make?

The idea is very simple – we just skip the iterative steps. It is highly dependent on the diffusion model you use, but a typical standard diffusion model historically used about 1,000 steps. Now, modern, well-optimized diffusion models require 79 steps. With our model that goes down to one step, we are looking at roughly 80 times faster, in theory. Of course, it all depends on how you implement the system, and if there’s a parallelization mechanism on chips, people can exploit it.

Is there anything else you would like to add about either of the projects?

Ultimately, we want to achieve real-time generation, and not just have this generation be limited to images. Real-time sound generation is an area that we are looking at.

Also, as you can see in the animation demo of GenWarp, the images change rapidly, making it look like an animation. However, the demo was created with many images generated with costly diffusion models offline. If we could achieve high-speed generation, let’s say with PaGoDA, then theoretically, we could create images from any angle on the fly.

Find out more:

About Yuki Mitsufuji

Yuki Mitsufuji is a Lead Research Scientist at Sony AI. In addition to his role at Sony AI, he is a Distinguished Engineer for Sony Group Corporation and the Head of Creative AI Lab for Sony R&D. Yuki holds a PhD in Information Science & Technology from the University of Tokyo. His groundbreaking work has made him a pioneer in foundational music and sound work, such as sound separation and other generative models that can be applied to music, sound, and other modalities.

Interview with Amina Mević: Machine learning applied to semiconductor manufacturing

In a series of interviews, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. In this latest interview, we hear from Amina Mević who is applying machine learning to semiconductor manufacturing. Find out more about her PhD research so far, what makes this field so interesting, and how she found the AAAI Doctoral Consortium experience.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I am currently pursuing my PhD at the University of Sarajevo, Faculty of Electrical Engineering, Department of Computer Science and Informatics. My research is being carried out in collaboration with Infineon Technologies Austria as part of the Important Project of Common European Interest (IPCEI) in Microelectronics. The topic of my research focuses on developing an explainable multi-output virtual metrology system based on machine learning to predict the physical properties of metal layers in semiconductor manufacturing.

Could you give us an overview of the research you’ve carried out so far during your PhD?

In the first year of my PhD, I worked on preprocessing complex manufacturing data and preparing a robust multi-output prediction setup for virtual metrology. I collaborated with industry experts to understand the process intricacies and validate the prediction models. I applied a projection-based selection algorithm (ProjSe), which aligned well with both domain knowledge and process physics.
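For readers unfamiliar with the setup, a multi-output prediction model of this kind can be sketched in a few lines of scikit-learn; the data below is synthetic and the model is purely illustrative, not the virtual metrology system described in this interview.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor

# synthetic stand-in for process sensor data: 500 wafers, 20 sensor features,
# and 3 physical properties of the deposited metal layer to predict
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
Y = X[:, :3] * 2.0 + rng.normal(scale=0.1, size=(500, 3))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)
model = MultiOutputRegressor(RandomForestRegressor(n_estimators=100, random_state=0))
model.fit(X_train, Y_train)
print("R^2 per target:", r2_score(Y_test, model.predict(X_test), multioutput="raw_values"))
```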

In the second year, I developed an explanatory method, designed to identify the most relevant input features for multi-output predictions.

Is there an aspect of your research that has been particularly interesting?

For me, the most interesting aspect is the synergy between physics, mathematics, cutting-edge technology, psychology, and ethics. I’m working with data collected during a physical process—physical vapor deposition—using concepts from geometry and algebra, particularly projection operators and their algebra, which have roots in quantum mechanics, to enhance both the performance and interpretability of machine learning models. Collaborating closely with engineers in the semiconductor industry has also been eye-opening, especially seeing how explanations can directly support human decision-making in high-stakes environments. I feel truly honored to deepen my knowledge across these fields and to conduct this multidisciplinary research.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

I plan to focus more on time series data and develop explanatory methods for multivariate time series models. Additionally, I intend to investigate aspects of responsible AI within the semiconductor industry and ensure that the solutions proposed during my PhD align with the principles outlined in the EU AI Act.

How was the AAAI Doctoral Consortium, and the AAAI conference experience in general?

Attending the AAAI Doctoral Consortium was an amazing experience! It gave me the opportunity to present my research and receive valuable feedback from leading AI researchers. The networking aspect was equally rewarding—I had inspiring conversations with fellow PhD students and mentors from around the world. The main conference itself was energizing and diverse, with cutting-edge research presented across so many AI subfields. It definitely strengthened my motivation and gave me new ideas for the final phase of my PhD.

Amina presenting two posters at AAAI 2025.

What made you want to study AI?

After graduating in theoretical physics, I found that job opportunities—especially in physics research—were quite limited in my country. I began looking for roles where I could apply the mathematical knowledge and problem-solving skills I had developed during my studies. At the time, data science appeared to be an ideal and promising field. However, I soon realized that I missed the depth and purpose of fundamental research, which was often lacking in industry roles. That motivated me to pursue a PhD in AI, aiming to gain a deep, foundational understanding of the technology—one that can be applied meaningfully and used in service of humanity.

What advice would you give to someone thinking of doing a PhD in the field?

Stay curious and open to learning from different disciplines—especially mathematics, statistics, and domain knowledge. Make sure your research has a purpose that resonates with you personally, as that passion will help carry you through challenges. There will be moments when you’ll feel like giving up, but before making any decision, ask yourself: am I just tired? Sometimes, rest is the solution to many of our problems. Finally, find mentors and communities to share ideas with and stay inspired.

Could you tell us an interesting (non-AI related) fact about you?

I’m a huge science outreach enthusiast! I regularly volunteer with the Association for the Advancement of Science and Technology in Bosnia, where we run workshops and events to inspire kids and high school students to explore STEM—especially in underserved communities.

About Amina

Amina Mević is a PhD candidate and teaching assistant at the University of Sarajevo, Faculty of Electrical Engineering, Bosnia and Herzegovina. Her research is conducted in collaboration with Infineon Technologies Austria as part of the IPCEI in Microelectronics. She earned a master’s degree in theoretical physics and was awarded two Golden Badges of the University of Sarajevo for achieving a GPA higher than 9.5/10 during both her bachelor’s and master’s studies. Amina actively volunteers to promote STEM education among youth in Bosnia and Herzegovina and is dedicated to improving the research environment in her country.

Shlomo Zilberstein wins the 2025 ACM/SIGAI Autonomous Agents Research Award

Congratulations to Shlomo Zilberstein on winning the 2025 ACM/SIGAI Autonomous Agents Research Award. This prestigious award is made for excellence in research in the area of autonomous agents. It is intended to recognize researchers in autonomous agents whose current work is an important influence on the field.

Professor Shlomo Zilberstein was recognised for his work establishing the field of decentralized Markov Decision Processes (DEC-MDPs), laying the groundwork for decision-theoretic planning in multi-agent systems and multi-agent reinforcement learning (MARL). The selection committee noted that these contributions have become a cornerstone of multi-agent decision-making, influencing researchers and practitioners alike.

Shlomo Zilberstein is Professor of Computer Science and former Associate Dean of Research at the University of Massachusetts Amherst. He is a Fellow of AAAI and the ACM, and has received numerous awards, including the UMass Chancellor’s Medal, the IFAAMAS Influential Paper Award, and the AAAI Distinguished Service Award.

Report on the future of AI research

Image taken from the front cover of the Future of AI Research report.

The Association for the Advancement of Artificial Intelligence (AAAI) has published a report on the Future of AI Research. The report, which was announced by outgoing AAAI President Francesca Rossi during the AAAI 2025 conference, covers 17 different AI topics and aims to clearly identify the trajectory of AI research in a structured way.

The report is the result of a Presidential Panel, chaired by Francesca Rossi and comprising 24 experienced AI researchers, who worked on the project between summer 2024 and spring 2025. As well as the views of the panel members, the report also draws on community feedback, received from 475 AI researchers via a survey.

The 17 topics, each with a dedicated chapter, are as follows.

  • AI Reasoning
  • AI Factuality & Trustworthiness
  • AI Agents
  • AI Evaluation
  • AI Ethics & Safety
  • Embodied AI
  • AI & Cognitive Science
  • Hardware & AI
  • AI for Social Good
  • AI & Sustainability
  • AI for Scientific Discovery
  • Artificial General Intelligence (AGI)
  • AI Perception vs. Reality
  • Diversity of AI Research Approaches
  • Research Beyond the AI Research Community
  • Role of Academia
  • Geopolitical Aspects & Implications of AI

Each chapter includes a list of main takeaways, context and history, current state and trends, research challenges, and community opinion. You can read the report in full here.

Andrew Barto and Richard Sutton win 2024 Turing Award

Andrew Barto and Richard Sutton. Image credit: Association for Computing Machinery.

The Association for Computing Machinery has named Andrew Barto and Richard Sutton as the recipients of the 2024 ACM A.M. Turing Award. The pair have received the honour for “developing the conceptual and algorithmic foundations of reinforcement learning”. In a series of papers beginning in the 1980s, Barto and Sutton introduced the main ideas, constructed the mathematical foundations, and developed important algorithms for reinforcement learning.

The Turing Award comes with a $1 million prize, to be split between the recipients. Since its inception in 1966, the award has honoured computer scientists and engineers on a yearly basis. The prize was last given for AI research in 2018, when Yoshua Bengio, Yann LeCun and Geoffrey Hinton were recognised for their contribution to the field of deep neural networks.

Andrew Barto is Professor Emeritus, Department of Information and Computer Sciences, University of Massachusetts, Amherst. He began his career at UMass Amherst as a postdoctoral Research Associate in 1977, and has subsequently held various positions including Associate Professor, Professor, and Department Chair. Barto received a BS degree in Mathematics (with distinction) from the University of Michigan, where he also earned his MS and PhD degrees in Computer and Communication Sciences.

Richard Sutton is a Professor in Computing Science at the University of Alberta, a Research Scientist at Keen Technologies (an artificial general intelligence company based in Dallas, Texas) and Chief Scientific Advisor of the Alberta Machine Intelligence Institute (Amii). Sutton was a Distinguished Research Scientist at DeepMind from 2017 to 2023. Prior to joining the University of Alberta, he served as a Principal Technical Staff Member in the Artificial Intelligence Department at the AT&T Shannon Laboratory in Florham Park, New Jersey, from 1998 to 2002. Sutton received his BA in Psychology from Stanford University and earned his MS and PhD degrees in Computer and Information Science from the University of Massachusetts at Amherst.

The two researchers began collaborating in 1978, at the University of Massachusetts at Amherst, where Barto was Sutton’s PhD and postdoctoral advisor.

Find out more

Stuart J. Russell wins 2025 AAAI Award for Artificial Intelligence for the Benefit of Humanity

The AAAI Award for Artificial Intelligence for the Benefit of Humanity recognizes positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways with long-lived effects. The award is given annually at the conference of the Association for the Advancement of Artificial Intelligence (AAAI).

This year, the AAAI Awards Committee has announced that the 2025 recipient of the award and $25,000 prize is Stuart J. Russell, “for his work on the conceptual and theoretical foundations of provably beneficial AI and his leadership in creating the field of AI safety”.

Stuart will give an invited talk at AAAI 2025 entitled “Can AI Benefit Humanity?”

About Stuart

Stuart J. Russell is a Distinguished Professor of Computer Science at the University of California, Berkeley, and holds the Michael H. Smith and Lotfi A. Zadeh Chair in Engineering. He is also a Distinguished Professor of Computational Precision Health at UCSF. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and philosophical foundations. He has also worked with the United Nations to create a new global seismic monitoring system for the Comprehensive Nuclear-Test-Ban Treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

Read our content featuring previous winners of the award

Online hands-on science communication training – sign up here!

On Friday 22 November, IEEE Robotics and Automation Society will be hosting an online science communication training session for robotics and AI researchers. The tutorial will introduce you to science communication and help you create your own story through hands-on activities.

Date: 22 November 2024
Time: 10:00 – 13:00 EST (07:00 – 10:00 PST, 15:00 – 18:00 GMT, 16:00 – 19:00 CET)
Location: Online – worldwide
Registration
Website

Science communication is essential. It helps demystify robotics and AI for a broad range of people including policy makers, business leaders, and the public. As a researcher, mastering this skill can not only enhance your communication abilities but also expand your network and increase the visibility and impact of your work.

In this three-hour session, leading science communicators in robotics and AI will teach you how to clearly and concisely explain your research to non-specialists. You’ll learn how to avoid hype, how to find suitable images and videos to illustrate your work, and where to start with social media. We’ll hear from a leading robotics journalist on how to deal with media and how to get your story out to a wider audience.

This is a hands-on session with exercises for you to take part in throughout the course. Therefore, please come prepared with an idea about a piece of research you’d like to communicate about.

Agenda

Part 1: How to communicate your work to a broader audience

  • The importance of science communication
  • How to produce a short summary of your research for communication via social media channels
  • How to expand your outline to write a complete blog post
  • How to find and use suitable images
  • How to avoid hype when communicating your research
  • Unconventional ways of doing science communication

Part 2: How to make videos about your robots

  • The value of video
  • Tips on making a video

Part 3: Working with media

  • Why bother talking to media anyway?
  • How media works and what it’s good and bad at
  • How to pitch media a story
  • How to work with your press office

Speakers:
Sabine Hauert, Professor of Swarm Engineering, Executive Trustee AIhub / Robohub
Lucy Smith, Senior Managing Editor AIhub / Robohub
Laura Bridgeman, Audience Development Manager IEEE Spectrum
Evan Ackerman, Senior Editor IEEE Spectrum

Sign up here.

#IROS2024 – tweet round-up

The 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024) was held from 14-18 October in Abu Dhabi, UAE. We take a look at what the participants got up to.

What’s coming up at #IROS2024?


The 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024) will be held from 14-18 October in Abu Dhabi, UAE. The programme includes plenary and keynote talks, workshops, tutorials, and forums. We (AIhub) are also holding a science communication session, run in collaboration with IEEE Spectrum.

Plenary talks

There are four plenary talks on the programme this year:

  • Oussama Khatib – Mission in Dubai, collaboration with UAE
  • Najwa Aaraj – Building Trust in Autonomous Systems: Security Strategies for the Next Generation of Robotics
  • Yoshihiko Nakamura – Embodiment of AI and Biomechanics/Neuroscience
  • Magnus Egerstedt – Mutualistic Interactions in Heterogeneous Multi-Robot Systems: From Environmental Monitoring to the Robotarium

Keynote talks

The keynotes this year fall under the umbrella topics of: flying machines, biorobotics, AI and robotics, and robotics competitions.

  • Flying machines
    • Davide Scaramuzza – Drone Racing
    • Guido De Croon – DelFly Explorer
    • Giuseppe Loianno – Agile Robotics and Perception Lab
    • Mirko Kovac – Drones for Environmental Health
  • Biorobotics
    • Auke Ijspeert – Bio Robotics, Computational neuroscience
    • Barbara Mazzolai – Bioinspired Soft Robotics
    • Kaspar Althoefer – Graphene and 2D materials, sensor applications
  • AI and robotics
    • Barbara Caputo – Applied Artificial Intelligence
    • Merouane Debbah – TelecomGPT
    • Concepción Alicia Monje – (Soft) robot control
    • Jianwei Zhang – Crossmodal Learning
  • Robotics competitions
    • Pedro Lima – Europe: European Robotics League, euROBIN Coopetitions
    • Timothy Chung – Americas: DARPA Challenges
    • Ubbo Visser – RoboCup Federation
    • Thomas McCarthy – Grand Challenges as a Mechanism to Hasten Translation from Lab to Market

Forums

The forums are three-hour events that focus on a particular topic. Each forum will have keynote speakers, with some including a poster session and other talks.

Science communication for roboticists

This session is a collaboration between AIhub.org/Robohub.org and IEEE Spectrum. We will cover different ways to communicate about your work to a more general audience, and how to work with media. You can find out more here.

Workshops

The 46 workshops take place on 14 and 15 October.

Tutorials

The tutorials take place on 14 and 15 October. There are 10 to choose from this year.

You can view the programme overview here.

#RoboCup2024 – daily digest: 21 July

A break in play during a Small Size League match.

Today, 21 July, saw the competitions draw to a close in a thrilling finale. In the third and final of our round-up articles, we provide a flavour of the action from this last day. If you missed them, you can find our first two digests here: 19 July | 20 July.

My first port of call this morning was the Standard Platform League, where Dr Timothy Wiley and Tom Ellis from Team RedbackBots, RMIT University, Melbourne, Australia, demonstrated an exciting advancement that is unique to their team. They have developed an augmented reality (AR) system with the aim of enhancing the understanding and explainability of the on-field action.

The RedbackBots travelling team for 2024 (L-to-R: Murray Owens, Sam Griffiths, Tom Ellis, Dr Timothy Wiley, Mark Field, Jasper Avice Demay). Photo credit: Dr Timothy Wiley.

Timothy, the academic leader of the team explained: “What our students proposed at the end of last year’s competition, to make a contribution to the league, was to develop an augmented reality (AR) visualization of what the league calls the team communication monitor. This is a piece of software that gets displayed on the TV screens to the audience and the referee, and it shows you where the robots think they are, information about the game, and where the ball is. We set out to make an AR system of this because we think it’s so much better to view it overlaid on the field. What the AR lets us do is project all of this information live on the field as the robots are moving.”

The team has been demonstrating the system to the league at the event, with very positive feedback. In fact, one of the teams found an error in their software during a game whilst trying out the AR system. Tom said that they’ve received a lot of ideas and suggestions from the other teams for further developments. This is one of the first (if not the first) AR systems to be trialled across the competition, and the first time one has been used in the Standard Platform League. I was lucky enough to get a demo from Tom and it definitely added a new level to the viewing experience. It will be very interesting to see how the system evolves.

Mark Field setting up the MetaQuest3 to use the augmented reality system. Photo credit: Dr Timothy Wiley.

From the main soccer area I headed to the RoboCupJunior zone, where Rui Baptista, an Executive Committee member, gave me a tour of the arenas and introduced me to some of the teams that have been using machine learning models to assist their robots. RoboCupJunior is a competition for school children, and is split into three leagues: Soccer, Rescue and OnStage.

I first caught up with four teams from the Rescue league. Robots identify “victims” within re-created disaster scenarios, varying in complexity from line-following on a flat surface to negotiating paths through obstacles on uneven terrain. There are three different strands to the league: 1) Rescue Line, where robots follow a black line which leads them to a victim, 2) Rescue Maze, where robots need to investigate a maze and identify victims, 3) Rescue Simulation, which is a simulated version of the maze competition.

Team Skollska Knijgia, taking part in the Rescue Line, used a YOLO v8 neural network to detect victims in the evacuation zone. They trained the network themselves with about 5000 images. Also competing in the Rescue Line event were Team Overengeniering2. They also used YOLO v8 neural networks, in this case for two elements of their system. They used the first model to detect victims in the evacuation zone and to detect the walls. Their second model is utilized during line following, and allows the robot to detect when the black line (used for the majority of the task) changes to a silver line, which indicates the entrance of the evacuation zone.
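For context, running a fine-tuned YOLOv8 detector with the ultralytics package typically looks something like the sketch below; the weights file, image name and class labels are placeholders, not the teams' actual code.

```python
from ultralytics import YOLO

# load a model fine-tuned on the team's own labelled images
# ("victims.pt" is an illustrative filename, not the team's actual weights)
model = YOLO("victims.pt")

# run detection on a frame from the robot's camera
results = model("evacuation_zone.jpg")

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]   # e.g. "victim" or "wall"
    confidence = float(box.conf)
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box corners
    print(f"{cls_name} ({confidence:.2f}) at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```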

Left: Team Skollska Knijgia. Right: Team Overengeniering2.

Team Tanorobo! were taking part in the maze competition. They also used a machine learning model for victim detection, training on 3,000 photos for each type of victim (these are denoted by different letters in the maze). They also took photos of walls and obstacles, to avoid misclassification. Team New Aje were taking part in the simulation contest. They used a graphical user interface to train their machine learning model and to debug their navigation algorithms. They have three different navigation algorithms, with varying computational cost, which they can switch between depending on where the robot is in the maze and how complex that part is.

Left: Team Tanorobo! Right: Team New Aje.

I met two of the teams who had recently presented in the OnStage event. Team Medic’s performance was based on a medical scenario, with the team including two machine learning elements: the first was voice recognition, for communication with the “patient” robots, and the second was image recognition to classify x-rays. Team Jam Session’s robot reads American sign language symbols and uses them to play a piano. They used the MediaPipe detection algorithm to find different points on the hand, and random forest classifiers to determine which symbol was being displayed.
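A pipeline of that kind – MediaPipe hand landmarks fed into a scikit-learn random forest – might look roughly like the sketch below; the file names and training data are placeholders rather than Team Jam Session's implementation.

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def hand_landmarks(image_path):
    """Return the 21 MediaPipe hand landmarks (x, y, z flattened) from an image, or None."""
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(image)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()  # 63 features

# X_train / y_train would be built from labelled sign-language images;
# random placeholders here only show the shape of the data
X_train = np.random.rand(100, 63)
y_train = np.random.choice(list("ABC"), size=100)
clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

features = hand_landmarks("sign.jpg")
if features is not None:
    print("predicted sign:", clf.predict([features])[0])
```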

Left: Team Medic Bot. Right: Team Jam Session.

Next stop was the humanoid league where the final match was in progress. The arena was packed to the rafters with crowds eager to see the action.

Standing room only to see the Adult Size Humanoids.

The finals continued with the Middle Size League, with the home team Tech United Eindhoven beating BigHeroX by a convincing 6-1 scoreline. You can watch the livestream of the final day’s action here.

The grand finale featured the winners of the Middle Size League (Tech United Eindhoven) against five RoboCup trustees. The humans ran out 5-2 winners, their superior passing and movement too much for Tech United.
