Online hands-on science communication training – sign up here!

On Friday 22 November, the IEEE Robotics and Automation Society will be hosting an online science communication training session for robotics and AI researchers. The tutorial will introduce you to science communication and help you create your own story through hands-on activities.

Date: 22 November 2024
Time: 10:00 – 13:00 EST (07:00 – 10:00 PST, 15:00 – 18:00 GMT, 16:00 – 19:00 CET)
Location: Online – worldwide
Registration
Website

Science communication is essential. It helps demystify robotics and AI for a broad range of people, including policymakers, business leaders, and the public. For researchers, mastering this skill not only enhances your communication abilities but also expands your network and increases the visibility and impact of your work.

In this three-hour session, leading science communicators in robotics and AI will teach you how to clearly and concisely explain your research to non-specialists. You’ll learn how to avoid hype, how to find suitable images and videos to illustrate your work, and where to start with social media. We’ll hear from a leading robotics journalist on how to deal with media and how to get your story out to a wider audience.

This is a hands-on session, with exercises for you to take part in throughout the course, so please come prepared with an idea for a piece of research you’d like to communicate.

Agenda

Part 1: How to communicate your work to a broader audience

  • The importance of science communication
  • How to produce a short summary of your research for communication via social media channels
  • How to expand your outline to write a complete blog post
  • How to find and use suitable images
  • How to avoid hype when communicating your research
  • Unconventional ways of doing science communication

Part 2: How to make videos about your robots

  • The value of video
  • Tips on making a video

Part 3: Working with media

  • Why bother talking to media anyway?
  • How media works and what it’s good and bad at
  • How to pitch media a story
  • How to work with your press office

Speakers:
Sabine Hauert, Professor of Swarm Engineering, Executive Trustee AIhub / Robohub
Lucy Smith, Senior Managing Editor AIhub / Robohub
Laura Bridgeman, Audience Development Manager IEEE Spectrum
Evan Ackerman, Senior Editor IEEE Spectrum

Sign up here.

#RoboCup2024 – daily digest: 21 July

A break in play during a Small Size League match.

Today, 21 July, saw the competitions draw to a close in a thrilling finale. In the third and final of our round-up articles, we provide a flavour of the action from this last day. If you missed them, you can find our first two digests here: 19 July | 20 July.

My first port of call this morning was the Standard Platform League, where Dr Timothy Wiley and Tom Ellis from Team RedbackBots, RMIT University, Melbourne, Australia, demonstrated an exciting advancement that is unique to their team. They have developed an augmented reality (AR) system with the aim of enhancing the understanding and explainability of the on-field action.

The RedbackBots travelling team for 2024 (L-to-R: Murray Owens, Sam Griffiths, Tom Ellis, Dr Timothy Wiley, Mark Field, Jasper Avice Demay). Photo credit: Dr Timothy Wiley.

Timothy, the academic leader of the team, explained: “What our students proposed at the end of last year’s competition, to make a contribution to the league, was to develop an augmented reality (AR) visualization of what the league calls the team communication monitor. This is a piece of software that gets displayed on the TV screens to the audience and the referee, and it shows you where the robots think they are, information about the game, and where the ball is. We set out to make an AR system of this because we think it’s so much better to view it overlaid on the field. What the AR lets us do is project all of this information live on the field as the robots are moving.”

The team has been demonstrating the system to the league at the event, with very positive feedback. In fact, one of the teams found an error in their software during a game whilst trying out the AR system. Tom said that they’ve received a lot of ideas and suggestions from the other teams for further developments. This is one of the first (if not the first) AR systems to be trialled across the competition, and the first time one has been used in the Standard Platform League. I was lucky enough to get a demo from Tom and it definitely added a new level to the viewing experience. It will be very interesting to see how the system evolves.

Mark Field setting up the MetaQuest3 to use the augmented reality system. Photo credit: Dr Timothy Wiley.

From the main soccer area I headed to the RoboCupJunior zone, where Rui Baptista, an Executive Committee member, gave me a tour of the arenas and introduced me to some of the teams that have been using machine learning models to assist their robots. RoboCupJunior is a competition for school children, and is split into three leagues: Soccer, Rescue and OnStage.

I first caught up with four teams from the Rescue league, in which robots identify “victims” within re-created disaster scenarios, varying in complexity from line-following on a flat surface to negotiating paths through obstacles on uneven terrain. There are three different strands to the league: 1) Rescue Line, where robots follow a black line which leads them to a victim, 2) Rescue Maze, where robots need to investigate a maze and identify victims, and 3) Rescue Simulation, which is a simulated version of the maze competition.

Team Skollska Knijgia, taking part in the Rescue Line, used a YOLO v8 neural network to detect victims in the evacuation zone, training the network themselves on about 5000 images. Also competing in the Rescue Line event was Team Overengeniering2, who used YOLO v8 neural networks for two elements of their system: the first model detects victims in the evacuation zone and the walls, while the second is used during line following and allows the robot to detect when the black line (used for the majority of the task) changes to a silver line, which indicates the entrance to the evacuation zone.

Left: Team Skollska Knijgia. Right: Team Overengeniering2.
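Neither team’s code is published here, but as a rough illustration of this kind of detection pipeline, the sketch below shows how a YOLO v8 model fine-tuned on a team’s own images might be run on camera frames using the ultralytics Python package. The weights file (“victims.pt”) and the class names are assumptions for the example, not the teams’ actual setups.

```python
# Rough sketch only: running a fine-tuned YOLOv8 detector on camera frames.
# "victims.pt" and the class names are hypothetical; each team trains its own model.
import cv2
from ultralytics import YOLO

model = YOLO("victims.pt")   # weights fine-tuned on the team's own labelled images
cap = cv2.VideoCapture(0)    # the robot's on-board camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model.predict(frame, conf=0.5, verbose=False)
    for box in results[0].boxes:
        label = model.names[int(box.cls[0])]          # e.g. "victim", "wall", "silver_line"
        x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
        # The robot's behaviour (stop, grab, enter the evacuation zone, ...)
        # would branch on `label` and the box position here.

cap.release()
```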

Team Tanorobo! were taking part in the maze competition. They also used a machine learning model for victim detection, training on 3000 photos for each type of victim (these are denoted by different letters in the maze). They also took photos of walls and obstacles, to avoid misclassification. Team New Aje were taking part in the simulation contest. They used a graphical user interface to train their machine learning model, and to debug their navigation algorithms. They have three different algorithms for navigation, with varying computational cost, which they can switch between depending on where they are in the maze and how complex that part of it is.

Left: Team Tanorobo! Right: Team New Aje.

I met two of the teams who had recently presented in the OnStage event. Team Medic’s performance was based on a medical scenario and included two machine learning elements: the first was voice recognition, for communication with the “patient” robots, and the second was image recognition to classify x-rays. Team Jam Session’s robot reads American Sign Language symbols and uses them to play a piano. They used the MediaPipe detection algorithm to find different points on the hand, and random forest classifiers to determine which symbol was being displayed.

Left: Team Medic Bot. Right: Team Jam Session.
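Team Jam Session’s exact feature encoding and training data aren’t described here, but a pipeline of this shape can be sketched with MediaPipe’s hand-landmark model feeding a scikit-learn random forest. The dataset-loading helper and the labels below are hypothetical, included only to make the structure clear.

```python
# Illustrative sketch: MediaPipe hand landmarks -> random forest sign classifier.
# load_landmark_dataset() and the labels are placeholders, not the team's actual code.
import cv2
import mediapipe as mp
from sklearn.ensemble import RandomForestClassifier

# Each training sample is a 63-value vector (21 landmarks x 3 coordinates),
# labelled with the ASL letter being shown.
X_train, y_train = load_landmark_dataset()   # hypothetical helper returning labelled examples
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        landmarks = result.multi_hand_landmarks[0]
        features = [v for p in landmarks.landmark for v in (p.x, p.y, p.z)]
        letter = clf.predict([features])[0]
        # `letter` would then be mapped to the corresponding piano note.
cap.release()
```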

Next stop was the humanoid league where the final match was in progress. The arena was packed to the rafters with crowds eager to see the action.
Standing room only to see the Adult Size Humanoids.

The finals continued with the Middle Size League, with the home team Tech United Eindhoven beating BigHeroX by a convincing 6-1 scoreline. You can watch the livestream of the final day’s action here.

The grand finale featured the winners of the Middle Size League (Tech United Eindhoven) against five RoboCup trustees. The humans ran out 5-2 winners, their superior passing and movement too much for Tech United.

#RoboCup2024 – daily digest: 20 July

The Standard Platform Soccer League in action.

This is the second of our daily digests from RoboCup2024 in Eindhoven, The Netherlands. If you missed the first digest, which gives some background to RoboCup, you can find it here.

Competitions continued across all the leagues today, with participants vying for a place in Sunday’s finals.

The RoboCup@Work league focusses on robots in work-related scenarios, utilizing ideas and concepts from other RoboCup competitions to tackle open research challenges in industrial and service robotics.

I arrived at the arena in time to catch the advanced navigation test. Robots have to autonomously navigate, picking up and placing objects at different work stations. In this advanced test, caution tape is added to the arena floor, which the robots should avoid travelling over. There is also a complex placing element where teams have to put an object that they’ve collected into a slot – get the orientation or placement of the object slightly wrong and it won’t fall into the slot.

The RoboCup@Work arena just before competition start.

Eight teams are taking part in the league this year. Executive Committee member Asad Norouzi said that there are plans to introduce a sub-league which would provide an entry point for new teams or juniors to get into the league proper.

I caught up with Harrison Burns, Mitchell Torok and Jasper Arnold from Team MiRobot. They are based at the University of New South Wales and are attending RoboCup for the first time.

Team MiRobot from UNSW.

The team actually only started six months ago, so final preparations have been a bit stressful. However, the experience has been great fun, and the competition has gone well so far. Like most teams, they’ve had to make many refinements as the competition has progressed, leading to some late nights.

One notable feature of the team’s robot is the bespoke, in-house-designed grasping mechanism on the end of the arm. The team note that “it has good flexible jaws, so when it grabs round objects it actually pulls the object directly into it. Because it uses a linear motion, compared to a lot of other rotating jaws, it has a lot better reliability for picking up objects”.

Here is some footage from the task, featuring Team b-it-bots and Team Singapore.

In the Middle Size Soccer league (MSL), teams of five fully autonomous robots play with a regular size FIFA ball. Teams are free to design their own hardware, but all sensors have to be on-board and the robots are subject to a maximum size and a weight limit of 40 kg. The research focus is on mechatronics design, control, and multi-agent cooperation at the planning and perception levels. Nine teams are competing this year.

I spoke to António Ribeiro, who is a member of the technical committee and part of Team LAR@MSL from the University of Minho, Portugal. The team started in 1998, but António and most of his colleagues on the current team have only been involved in the MSL since September 2022. The robots have evolved as the competition has progressed, and further improvements are in progress. Refinements so far have included communication, the detection system, and the control system. They are pleased with the improvements from the previous RoboCup. “Last year we had a lot of hardware issues, but this year the hardware seems pretty stable. We also changed our coding architecture and it is now much easier and faster for us to develop code because we can all work on the code at the same time on different modules”.

António cited versatility and cost-effective solutions as strengths of the team. “Our robot is actually very cheap compared to other teams. We use a lot of old chassis, and our solutions always go to the lowest cost possible. Some teams have multi-thousand-dollar robots, but, for example, our vision system is around $70-80. It works pretty well – we need to improve the way we handle it, but it seems stable”.

Team LAR@MSL

The RoboCup@Home league aims to develop service and assistive robot technology with high relevance for future personal domestic applications. A set of benchmark tests is used to evaluate the robots’ abilities and performance in a realistic non-standardized home environment setting. These tests include helping to prepare breakfast, clearing the table, and storing groceries.

I arrived in time to watch the “stickler for the rules” challenge, where robots have to navigate different rooms and make sure that the people inside (“guests” at a party) are sticking to four rules: 1) there is one forbidden room – if a guest is in there, the robot must alert them and ask them to follow it into another room, 2) everyone must have a drink in their hand – if not, the robot directs them to a shelf with drinks, 3) no shoes are to be worn in the house, and 4) there should be no rubbish left on the floor.

After watching an attempt from the LAR@Home robot, Tiago from the team told me a bit about it. “The goal is to develop a robot capable of multiple general-purpose tasks in home and healthcare environments.” With the exception of the robotic arm, all of the hardware was built by the team. The robot has two RGBD cameras, two LIDARs, a tray (where the robot can store items that it needs to carry), and two emergency stop buttons that deactivate all moving parts. Four omnidirectional wheels allow the robot to move in any direction at any time. The wheels have independent suspension systems, which guarantees that they can all be on the ground at all times, even if there are bumps and cables on the venue floor. There is a tablet that acts as a visual interface, and a microphone and speakers that enable communication between humans and the robot, which is all done via speaking and listening.

Tiago told me that the team have talked to a lot of healthcare practitioners to find out the main problems faced by elderly people, and this inspired one of their robot’s features. “They said that the two main injury sources are from when people are trying to sit down or stand up, and when they are trying to pick something up from the floor. We developed a torso that can pick objects from the floor one metre away from the robot”.

The LAR@Home team.


You can keep up with the latest news direct from RoboCup here.

Click here to see all of our content pertaining to RoboCup.

#RoboCup2024 – daily digest: 19 July

The main soccer arena.

RoboCup is an international scientific initiative with the goal of advancing the state of the art in intelligent robots. As part of this initiative, a series of competitions and events are held throughout the year. The main showcase event is an international affair, with teams travelling from far and wide to put their machines through their paces.

This year, RoboCup is being held in three arenas in the Genneper Parken, Eindhoven, The Netherlands. The organisers are expecting over 2,000 participants, from 45 different countries, with around 300 teams signed up to take part in the various competitions.

Although RoboCup started out as a football (or soccer) playing competition, other leagues have since been introduced, focussing on robots in industrial, rescue, and home settings. There is even a dedicated league for young roboticists – RoboCupJunior – where participants can take part in either football, rescue, or artistic events.

I am lucky enough to be able to attend this year, and, for the next three days, I’ll be bringing you a daily digest of some of the exciting happenings from Eindhoven.

Today, 19 July, sees the competition in full swing. The main soccer arena, boasting multiple pitches, hosts a number of the different leagues which form RoboCupSoccer.

Some of the pitches in the main soccer arena.

My first port of call was the Standard Platform League, where the round 5 champions cup match between SPQR Team and rUNSWift was taking place. SPQR ran out winners and advanced to round 6. In this league, all teams compete with identical robots (currently the humanoid NAO by Aldebaran). The robots operate fully autonomously, meaning that there is no external control from either humans or computers.

Standard Platform League: the round 5 champions cup match between SPQR Team and rUNSWift.

The Humanoid AdultSize league is arguably the most challenging of the leagues, with many constraints placed on the robots to make them as human-like as possible. For example, they must have roughly human-like body proportions, they need to walk on two legs, and they are only allowed to use human-like sensors (up to two cameras to sense the environment). In this AdultSize competition, two robots from each team compete, and the team members walk behind the robots to catch them in case of a fall. Such a mishap could prove costly in terms of potential hardware damage.

Action from the Humanoid AdultSize League.

The RoboCup Rescue Robot League sees teams developing robotic systems with the goal of enabling emergency responders to perform extremely hazardous tasks from safer stand-off distances. During the competition, teams compete in a round-robin, putting their robots through their paces on a number of different challenges. The leading teams following this initial phase progress to the finals on Sunday. The tasks include navigating in complex environments, opening doors, and sensing. Teams may run the machines completely autonomously, or with some assistive control. More points are awarded for completely autonomous operation.

RoboCup Rescue arena from above.

You can keep up with more RoboCup2024 news here.

Congratulations to the #ICRA2024 best paper winners

The 2024 IEEE International Conference on Robotics and Automation (ICRA) best paper winners and finalists in the various different categories have been announced. The recipients were revealed during an award luncheon at the conference, which took place from 13-17 May in Yokohama, Japan.


IEEE ICRA Best Paper Award in Automation

Winner

TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers, Anoushka Alavilli, Khai Nguyen, Samuel Schoedel, Brian Plancher, and Zachary Manchester

Finalists

  • A Movable Microfluidic Chip with Gap Effect for Manipulation of Oocytes, Shuzhang Liang, Satoshi Amaya, Hirotaka Sugiura, Hao Mo, Yuguo Dai, and Fumihito Arai
  • Under Pressure: Learning-Based Analog Gauge Reading in the Wild, Maurits Reitsma, Julian Keller, Kenneth Blomqvist, and Roland Siegwart
  • Efficient Composite Learning Robot Control Under Partial Interval Excitation, Tian Shi, Weibing Li, Haoyong Yu, and Yongping Pan
  • MORALS: Analysis of High-Dimensional Robot Controllers via Topological Tools in a Latent Space, Ewerton Vieira, Aravind Sivaramakrishnan, Sumanth Tangirala, Edgar Granados, Konstantin Mischaikow, and Kostas E. Bekris

IEEE ICRA Best Paper Award in Cognitive Robotics

Winner

VLFM: Vision-Language Frontier Maps for Semantic Navigation, Naoki Yokoyama, Sehoon Ha, Dhruv Batra, Jiuguang Wang, and Bernadette Bucher

Finalists

  • NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration, Ajay Sridhar, Dhruv Shah, Catherine Glossop, and Sergey Levine
  • Resilient Legged Local Navigation: Learning to Traverse with Compromised Perception End-to-End, Chong Zhang, Jin Jin, Jonas Frey, Nikita Rudin, Matias Mattamala, Cesar Cadena Lerma, and Marco Hutter
  • Learning Continuous Control with Geometric Regularity from Robot Intrinsic Symmetry, Shengchao Yan, Baohe Zhang, Yuan Zhang, Joschka Boedecker, and Wolfram Burgard
  • Learning Vision-Based Bipedal Locomotion for Challenging Terrain, Helei Duan, Bikram Pandit, Mohitvishnu S. Gadde, Bart Jaap Van Marum, Jeremy Dao, Chanho Kim, and Alan Fern

IEEE ICRA Best Paper Award in Robot Manipulation

Winner

SARA-RT: Scaling up Robotics Transformers with Self-Adaptive Robust Attention, Isabel Leal, Krzysztof Choromanski, Deepali Jain, Avinava Dubey, Jacob Varley, Michael S. Ryoo, Yao Lu, Frederick Liu, Vikas Sindhwani, Tamas Sarlos, Kenneth Oslund, Karol Hausman, Quan Vuong, and Kanishka Rao

Finalists

  • Open X-Embodiment: Robotic Learning Datasets and RT-X Models, Sergey Levine, Chelsea Finn, Ken Goldberg, Lawrence Yunliang Chen, Gaurav Sukhatme, Shivin Dass, Lerrel Pinto, Yuke Zhu, Yifeng Zhu, Shuran Song, Oier Mees, Deepak Pathak, Hao-Shu Fang, Henrik Iskov Christensen, Mingyu Ding, Youngwoon Lee, Dorsa Sadigh, Ilija Radosavovic, Jeannette Bohg, Xiaolong Wang, Xuanlin Li, Krishan Rana, Kento Kawaharazuka, Tatsuya Matsushima, Jihoon Oh, Takayuki Osa, Oliver Kroemer, Beomjoon Kim, Edward Johns, Freek Stulp, Jan Schneider, Jiajun Wu, Yunzhu Li, Heni Ben Amor, Lionel Ott, Roberto Martin-Martin, Karol Hausman, Quan Vuong, Pannag Sanketi, Nicolas Heess, Vincent Vanhoucke, Karl Pertsch, Stefan Schaal, Cheng Chi, Chuer Pan, and Alex Bewley
  • Towards Generalizable Zero-Shot Manipulation via Translating Human Interaction Plans, Homanga Bharadhwaj, Abhinav Gupta, Vikash Kumar, and Shubham Tulsiani
  • Hearing Touch: Audio-Visual Pretraining for Contact-Rich Manipulation, Jared Mejia, Victoria Dean, Tess Hellebrekers, and Abhinav Gupta
  • DenseTact-Mini: An Optical Tactile Sensor for Grasping Multi-Scale Objects From Flat Surfaces, Won Kyung Do, Ankush Ankush Dhawan, Mathilda Kitzmann, and Monroe Kennedy
  • Constrained Bimanual Planning with Analytic Inverse Kinematics, Thomas Cohn, Seiji Shaw, Max Simchowitz, and Russ Tedrake

IEEE ICRA Best Paper Award on Human-Robot Interaction

Winner

CoFRIDA: Self-Supervised Fine-Tuning for Human-Robot Co-Painting, Peter Schaldenbrand, Gaurav Parmar, Jun-Yan Zhu, James Mccann, and Jean Oh

Finalists

  • POLITE: Preferences Combined with Highlights in Reinforcement Learning, Simon Holk, Daniel Marta, and Iolanda Leite
  • MateRobot: Material Recognition in Wearable Robotics for People with Visual Impairments, Junwei Zheng, Jiaming Zhang, Kailun Yang, Kunyu Peng, and Rainer Stiefelhagen
  • Robot-Assisted Navigation for Visually Impaired through Adaptive Impedance and Path Planning, Pietro Balatti, Idil Ozdamar, Doganay Sirintuna, Luca Fortini, Mattia Leonori, Juan M. Gandarias, and Arash Ajoudani
  • Incremental Learning of Full-Pose Via-Point Movement Primitives on Riemannian Manifolds, Tilman Daab, Noémie Jaquier, Christian R. G. Dreher, Andre Meixner, Franziska Krebs, and Tamim Asfour
  • Supernumerary Robotic Limbs to Support Post-Fall Recoveries for Astronauts, Erik Ballesteros, Sang-Yoep Lee, Kalind Carpenter, and Harry Asada

IEEE ICRA Best Paper Award in Medical Robotics

Winner

Exoskeleton-Mediated Physical Human-Human Interaction for a Sit-to-Stand Rehabilitation Task, Lorenzo Vianello, Emek Baris Kucuktabak, Matthew Short, Clément Lhoste, Lorenzo Amato, Kevin Lynch, and Jose L. Pons

Finalists

  • Intraoperatively Iterative Hough Transform Based In-plane Hybrid Control of Arterial Robotic Ultrasound for Magnetic Catheterization, Zhengyang Li, Magejiang Yeerbulati, and Qingsong Xu
  • Efficient Model Learning and Adaptive Tracking Control of Magnetic Micro-Robots for Non-Contact Manipulation, Yongyi Jia, Shu Miao, Junjian Zhou, Niandong Jiao, Lianqing Liu, and Xiang Li
  • Colibri5: Real-Time Monocular 5-DoF Trocar Pose Tracking for Robot-Assisted Vitreoretinal Surgery, Shervin Dehghani, Michael Sommersperger, Mahdi Saleh, Alireza Alikhani, Benjamin Busam, Peter Gehlbach, Ioan Iulian Iordachita, Nassir Navab, and M. Ali Nasseri
  • Hybrid Volitional Control of a Robotic Transtibial Prosthesis using a Phase Variable Impedance Controller, Ryan Posh, Jonathan Allen Tittle, David Kelly, James Schmiedeler, and Patrick M. Wensing
  • Design and Implementation of a Robotized Hand-held Dissector for Endoscopic Pulmonary Endarterectomy, Runfeng Zhu, Xilong Hou, Wei Huang, Lei Du, Zhong Wu, Hongbin Liu, Henry Chu, and Qing Xiang Zhao

IEEE ICRA Best Paper Award on Mechanisms and Design

Winner

Design and Modeling of a Nested Bi-cavity-based Soft Growing Robot for Grasping in Constrained Environments, Haochen Yong, Fukang Xu, Chenfei Li, Han Ding, and Zhigang Wu

Finalists

  • Optimized Design and Fabrication of Skeletal Muscle Actuators for Bio-syncretic Robots, Lianchao Yang, Chuang Zhang, Ruiqian Wang, Yiwei Zhang, and Lianqing Liu
  • Lissajous Curve-Based Vibrational Orbit Control of a Flexible Vibrational Actuator with a Structural Anisotropy, Yuto Miyazaki and Mitsuru Higashimori
  • Dynamic Modeling of Wing-Assisted Inclined Running with a Morphing Multi-Modal Robot, Eric Sihite, Alireza Ramezani, and Morteza Gharib

IEEE ICRA Best Paper Award on Multi-Robot Systems

Winner

Do We Run Large-Scale Multi-Robot Systems on the Edge? More Evidence for Two-Phase Performance in System Size Scaling, Jonas Kuckling, Robin Luckey, Viktor Avrutin, Andrew Vardy, Andreagiovanni Reina, and Heiko Hamann

Finalists

  • Observer-based Distributed MPC for Collaborative Quadrotor-Quadruped Manipulation of a Cable-Towed Load, Shaohang Xu, Yi’An Wang, Wentao Zhang, Chin Pang Ho, and Lijun Zhu
  • Learning for Dynamic Subteaming and Voluntary Waiting in Heterogeneous Multi-Robot Collaborative Scheduling, Williard Joshua Jose and Hao Zhang
  • Asynchronous Distributed Smoothing and Mapping via On-Manifold Consensus ADMM, Daniel Mcgann, Kyle Lassak, and Michael Kaess
  • Uncertainty-Bounded Active Monitoring of Unknown Dynamic Targets in Road-Networks with Minimum Fleet, Shuaikang Wang, Yiannis Kantaros, and Meng Guo

IEEE ICRA Best Paper Award in Service Robotics

Winner

Learning to Walk in Confined Spaces Using 3D Representation, Takahiro Miki, Joonho Lee, Lorenz Wellhausen, and Marco Hutter

Finalists

  • Censible: A Robust and Practical Global Localization Framework for Planetary Surface Missions, Jeremy Nash, Quintin Dwight, Lucas Saldyt, Haoda Wang, Steven Myint, Adnan Ansar, and Vandi Verma
  • Efficient and Accurate Transformer-Based 3D Shape Completion and Reconstruction of Fruits for Agricultural Robots, Federico Magistri, Rodrigo Marcuzzi, Elias Ariel Marks, Matteo Sodano, Jens Behley, and Cyrill Stachniss
  • CoPAL: Corrective Planning of Robot Actions with Large Language Models, Frank Joublin, Antonello Ceravola, Pavel Smirnov, Felix Ocker, Joerg Deigmoeller, Anna Belardinelli, Chao Wang, Stephan Hasler, Daniel Tanneberg, and Michael Gienger
  • CalliRewrite: Recovering Handwriting Behaviors from Calligraphy Images without Supervision, Yuxuan Luo, Zekun Wu, and Zhouhui Lian

IEEE ICRA Best Paper Award in Robot Vision

Winner

NGEL-SLAM: Neural Implicit Representation-based Global Consistent Low-Latency SLAM System, Yunxuan Mao, Xuan Yu, Kai Wang, Yue Wang, Rong Xiong, and Yiyi Liao

Finalists

  • HEGN: Hierarchical Equivariant Graph Neural Network for 9DoF Point Cloud Registration, Adam Misik, Driton Salihu, Xin Su, Heike Brock, and Eckehard Steinbach
  • Deep Evidential Uncertainty Estimation for Semantic Segmentation under Out-Of-Distribution Obstacles, Siddharth Ancha, Philip Osteen, and Nicholas Roy
  • SeqTrack3D: Exploring Sequence Information for Robust 3D Point Cloud Tracking, Yu Lin, Zhiheng Li, Yubo Cui, and Zheng Fang
  • Ultrafast Square-Root Filter-based VINS, Yuxiang Peng, Chuchu Chen, and Guoquan Huang
  • Universal Visual Decomposer: Long-Horizon Manipulation Made Easy, Zichen Zhang, Yunshuang Li, Osbert Bastani, Abhishek Gupta, Dinesh Jayaraman, Yecheng Jason Ma, and Luca Weihs

IEEE ICRA Best Paper Award on Unmanned Aerial Vehicles

Winner

Time-Optimal Gate-Traversing Planner for Autonomous Drone Racing, Chao Qin, Maxime Simon Joseph Michet, Jingxiang Chen, and Hugh H.-T. Liu

Finalists

  • A Trajectory-based Flight Assistive System for Novice Pilots in Drone Racing Scenario, Yuhang Zhong, Guangyu Zhao, Qianhao Wang, Guangtong Xu, Chao Xu, and Fei Gao
  • Co-Design Optimisation of Morphing Topology and Control of Winged Drones, Fabio Bergonti, Gabriele Nava, Valentin Wüest, Antonello Paolino, Giuseppe L’Erario, Daniele Pucci, and Dario Floreano
  • FC-Planner: A Skeleton-guided Planning Framework for Fast Aerial Coverage of Complex 3D Scenes, Chen Feng, Haojia Li, Mingjie Zhang, Xinyi Chen, Boyu Zhou, and Shaojie Shen
  • Sequential Trajectory Optimization for Externally-Actuated Modular Manipulators with Joint Locking, Jaeu Choe, Jeongseob Lee, Hyunsoo Yang, Hai-Nguyen (Hann) Nguyen, and Dongjun Lee
  • Spatial Assisted Human-Drone Collaborative Navigation and Interaction through Immersive Mixed Reality, Luca Morando and Giuseppe Loianno

IEEE ICRA Best Student Paper Award

Winner

Optimized Design and Fabrication of Skeletal Muscle Actuators for Bio-syncretic Robots, Lianchao Yang, Chuang Zhang, Ruiqian Wang, Yiwei Zhang, and Lianqing Liu

Finalists

  • TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers, Anoushka Alavilli, Khai Nguyen, Samuel Schoedel, Brian Plancher, and Zachary Manchester
  • Goal Masked Diffusion Policies for Unified Navigation and Exploration, Ajay Sridhar, Dhruv Shah, Catherine Glossop, and Sergey Levine
  • Open X-Embodiment: Robotic Learning Datasets and RT-X Models, Sergey Levine, Chelsea Finn, Ken Goldberg, Lawrence Yunliang Chen, Gaurav Sukhatme, Shivin Dass, Lerrel Pinto, Yuke Zhu, Yifeng Zhu, Shuran Song, Oier Mees, Deepak Pathak, Hao-Shu Fang, Henrik Iskov Christensen, Mingyu Ding, Youngwoon Lee, Dorsa Sadigh, Ilija Radosavovic, Jeannette Bohg, Xiaolong Wang, Xuanlin Li, Krishan Rana, Kento Kawaharazuka, Tatsuya Matsushima, Jihoon Oh, Takayuki Osa, Oliver Kroemer, Beomjoon Kim, Edward Johns, Freek Stulp, Jan Schneider, Jiajun Wu, Yunzhu Li, Heni Ben Amor, Lionel Ott, Roberto Martin-Martin, Karol Hausman, Quan Vuong, Pannag Sanketi, Nicolas Heess, Vincent Vanhoucke, Karl Pertsch, Stefan Schaal, Cheng Chi, Chuer Pan, and Alex Bewley
  • POLITE: Preferences Combined with Highlights in Reinforcement Learning, Simon Holk, Daniel Marta, and Iolanda Leite
  • Exoskeleton-Mediated Physical Human-Human Interaction for a Sit-to-Stand Rehabilitation Task, Lorenzo Vianello, Emek Baris Kucuktabak, Matthew Short, Clément Lhoste, Lorenzo Amato, Kevin Lynch, and Jose L. Pons
  • Design and Modeling of a Nested Bi-cavity-based Soft Growing Robot for Grasping in Constrained Environments, Haochen Yong, Fukang Xu, Chenfei Li, Han Ding, and Zhigang Wu
  • Observer-based Distributed MPC for Collaborative Quadrotor-Quadruped Manipulation of a Cable-Towed Load, Shaohang Xu, Yi’An Wang, Wentao Zhang, Chin Pang Ho, and Lijun Zhu
  • Censible: A Robust and Practical Global Localization Framework for Planetary Surface Missions, Jeremy Nash, Quintin Dwight, Lucas Saldyt, Haoda Wang, Steven Myint, Adnan Ansar, and Vandi Verma
  • HEGN: Hierarchical Equivariant Graph Neural Network for 9DoF Point Cloud Registration, Adam Misik, Driton Salihu, Xin Su, Heike Brock, and Eckehard Steinbach
  • A Trajectory-based Flight Assistive System for Novice Pilots in Drone Racing Scenario, Yuhang Zhong, Guangyu Zhao, Qianhao Wang, Guangtong Xu, Chao Xu, and Fei Gao

IEEE ICRA Best Conference Paper Award

Winners

  • Goal Masked Diffusion Policies for Unified Navigation and Exploration, Ajay Sridhar, Dhruv Shah, Catherine Glossop, and Sergey Levine
  • Open X-Embodiment: Robotic Learning Datasets and RT-X, Sergey Levine, Chelsea Finn, Ken Goldberg, Lawrence Yunliang Chen, Gaurav Sukhatme, Shivin Dass, Lerrel Pinto, Yuke Zhu, Yifeng Zhu, Shuran Song, Oier Mees, Deepak Pathak, Hao-Shu Fang, Henrik Iskov Christensen, Mingyu Ding, Youngwoon Lee, Dorsa Sadigh, Ilija Radosavovic, Jeannette Bohg, Xiaolong Wang, Xuanlin Li, Krishan Rana, Kento Kawaharazuka, Tatsuya Matsushima, Jihoon Oh, Takayuki Osa, Oliver Kroemer, Beomjoon Kim, Edward Johns, Freek Stulp, Jan Schneider, Jiajun Wu, Yunzhu Li, Heni Ben Amor, Lionel Ott, Roberto Martin-Martin, Karol Hausman, Quan Vuong, Pannag Sanketi, Nicolas Heess, Vincent Vanhoucke, Karl Pertsch, Stefan Schaal, Cheng Chi, Chuer Pan, and Alex Bewley

Finalists

  • TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers, Anoushka Alavilli, Khai Nguyen, Samuel Schoedel, Brian Plancher, and Zachary Manchester
  • POLITE: Preferences Combined with Highlights in Reinforcement Learning, Simon Holk, Daniel Marta, and Iolanda Leite
  • Exoskeleton-Mediated Physical Human-Human Interaction for a Sit-to-Stand Rehabilitation Task, Lorenzo Vianello, Emek Baris Kucuktabak, Matthew Short, Clément Lhoste, Lorenzo Amato, Kevin Lynch, and Jose L. Pons
  • Optimized Design and Fabrication of Skeletal Muscle Actuators for Bio-syncretic Robots, Lianchao Yang, Chuang Zhang, Ruiqian Wang, Yiwei Zhang, and Lianqing Liu
  • Design and Modeling of a Nested Bi-cavity-based Soft Growing Robot for Grasping in Constrained Environments, Haochen Yong, Fukang Xu, Chenfei Li, Han Ding, and Zhigang Wu
  • Observer-based Distributed MPC for Collaborative Quadrotor-Quadruped Manipulation of a Cable-Towed Load, Shaohang Xu, Yi’An Wang, Wentao Zhang, Chin Pang Ho, and Lijun Zhu
  • Censible: A Robust and Practical Global Localization Framework for Planetary Surface Missions, Jeremy Nash, Quintin Dwight, Lucas Saldyt, Haoda Wang, Steven Myint, Adnan Ansar, and Vandi Verma
  • HEGN: Hierarchical Equivariant Graph Neural Network for 9DoF Point Cloud Registration, Adam Misik, Driton Salihu, Xin Su, Heike Brock, and Eckehard Steinbach
  • A Trajectory-based Flight Assistive System for Novice Pilots in Drone Racing Scenario, Yuhang Zhong, Guangyu Zhao, Qianhao Wang, Guangtong Xu, Chao Xu, and Fei Gao

What’s coming up at #ICRA2024?

The 2024 IEEE International Conference on Robotics and Automation (ICRA) will take place from 13-17 May, in Yokohama, Japan. The event will feature plenary and keynote talks, technical sessions, posters, workshops and tutorials.

Plenary speakers

There are three plenary talks at the conference this year:

  • Yoky Matsuoka – How to Turn a Roboticist into a Corporate Explorer
  • Sami Haddadin – The Great Robot Accelerator: Collective Learning of Optimal Embodied AI
  • Sunil K Agrawal – Rehabilitation Robotics: How to Improve Daily Functions in People with Impairments?

Keynote talks

There will be 15 keynote talks, given by:

  • Lianqing Liu – Biosyncretic sensing, actuation and intelligence for robotics
  • Dawn M. Tilbury – Digital Twins for Manufacturing Automation
  • Claudio Pacchierotti – Beyond Force Feedback: Cutaneous Haptics in Human-centered Robotics
  • Yu Sun – Medical Robotics for Cell Surgery – Science and Applications
  • Yasuhisa Hirata – Adaptable AI-enabled Robots to Create a Vibrant Society – Moonshot R&D Projects in Japan
  • Calin Belta – Formal Methods for Safety-Critical Control
  • Manuel Catalano – Robots in the Wild: From Research Labs to the Real World
  • Harold Soh – Building Guidance Bridges with Generative Models for Robot Learning and Control
  • Lorenzo Sabattini – Unleashing the power of many: decentralized control of multi-robot systems
  • Myunghee Kim – Human-wearable robot co-adaptation
  • Yoko Yamanishi – Emergent Functions of Electrically-induced Bubbles and Intra-cellular-Cybernetic Avatar
  • Kensuke Harada – Robotic manipulation aiming for industrial applications
  • Iolanda Leite – The Quest for Social Robot Autonomy
  • Rong Xiong – Integration of Robotics and AI: Changes and Challenges
  • Mariana Medina-Sánchez – Tiny Robots, Big Impact: Transforming Gynecological Care

Tutorials

The tutorials will be held on Monday 13 May and Friday 17 May.

Workshops

The workshops will also be held on Monday 13 May and Friday 17 May. There are 73 to choose from this year.

You can see the programme overview here, with a detailed programme available here.

AI holidays 2023

Thanks to those who sent in and suggested AI and robotics-themed holiday videos, images, and stories. Here’s a sample to get you into the spirit this season.


Josh Miller used ChatGPT to create the 12 days of Christmas. Prompt: “I want to make 12 images in an early 1980s era to depict the twelve days of Christmas”.

See the pictures in higher definition here.


“Santa’s Secret: Robots Saved Christmas! | CPS XMAS video 2023” from Chair of Cyber-Physical Systems.


“AI rewriting classic Christmas stories” from GingerPale.


Using PaLM2 to generate haikus.

Prompt: “Write me a haiku about an alternative xmas”.
No gifts, no grand feast,
Just laughter and joy, warmth spread,
Kindness fills the air.

Prompt: “Write a haiku about the winter holiday season”.
Snow blankets the land,
Hearts aglow with love’s warmth,
Joyous cheer surrounds.


“Put a Bow on It | Happy Holidays” from Boston Dynamics.


“Christmas Magic: Animated Backgrounds for an Enchanting Atmosphere” from AI-ArtificialIntelligence.


“XMAS – Episode 3: AI – Christmas music for the HAKOM Christmas party” from HAKOM Time Series.


“Christmas tree in python notebook with Turtle” from Learn Upwards.


“FZI Living Lab Christmas Robotics 2023 – Robotic Reindeer Rescue” from FZI channel.


A letter to Santa generated by PaLM2. Prompt: “Write me a letter to Santa Claus. I don’t want any presents, just inspiration in my machine learning research for 2024.”

Dear Santa Claus,

As the year comes to a close, I find myself reflecting on the remarkable advancements in machine learning that have transformed our world. From self-driving cars to personalized medicine, these innovations have opened up new possibilities and enriched our lives in ways we could have only imagined a few decades ago.

I am writing to you today not to ask for material gifts, but rather for something more precious: inspiration to continue exploring the frontiers of machine learning in the year to come. As a researcher in this field, I am eager to contribute to the development of algorithms and techniques that can further enhance our understanding of the world around us and create solutions to pressing global challenges.

I am particularly fascinated by the potential of machine learning to address the climate crisis. By harnessing the power of data and computation, we can develop more efficient renewable energy sources, optimize transportation systems, and mitigate the impact of extreme weather events. Additionally, I believe that machine learning can play a crucial role in improving education, healthcare, and environmental sustainability.

I know that you have a special ability to see the world in a unique and insightful way. I hope that you can share some of your wisdom with me and guide me towards promising research directions. I am eager to embark on new challenges and make a positive impact on the world through machine learning.

Thank you for considering my request, Santa Claus. I look forward to your guidance and inspiration as I pursue my research endeavors in the year ahead.

With warmest regards,

[Your Name]

AAAI Fall Symposium: Patrícia Alves-Oliveira on human-robot interaction design

An illustration containing electronic devices connected by arm-like structures. Anton Grabolle / Better Images of AI / Human-AI collaboration / Licensed by CC-BY 4.0

The AAAI Fall Symposium Series took place in Arlington, USA, and comprised seven different symposia. One of these, the tenth Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium was run as a hybrid in-person/online event, and we tuned in to the opening keynote, which was given by Patrícia Alves-Oliveira.

As a psychology student, Patrícia’s dream was to become a therapist. However, an internship, where she encountered a robot for the first time, inspired her to change her plans, and she decided to go into the field of human-robot interaction. Following a PhD in the field, she worked as a postdoc, before heading to industry as a designer in the Amazon Astro robot team.

Patrícia has worked on a number of interesting projects during her time in academia and in industry. Thinking about how to design robots for specific user needs, and keeping the user at the forefront during the design process, has been core to her work. She began by summarising three very different academic projects.

Creativity and robotics

The objective of this project was to design, fabricate, and evaluate robots as creativity-provoking tools for kids. Patrícia created a social robot named YOLO (or Your Own Living Object) that she designed to be child-proof (in other words, it could withstand being dropped and knocked over), with the aim of helping children explore their creativity during play. A machine learning algorithm learns the pattern of play that the child has and adapts the robot’s behaviour accordingly. You can see the robot in action in the demo below:

FLEXI robot

As a postdoc project, Patrícia worked on building FLEXI, a social robot embodiment kit. This kit consists of a robot (with a face, and a torso with a screen on the front), which can be customised, and an open-source end-user programming interface designed to be user-friendly. The customisation element means that it can be used for many applications. The team has deployed FLEXI across three application scenarios: community-support, mental health, and education, with the aim of assessing the flexibility of the system. You can see the robot in action, in different scenarios, here.

Social dining

This project centred on a robotic arm for people with impaired mobility. Such systems already exist for assisting people with tasks such as eating. However, in a social context they can often form a barrier between the user and the rest of the group. The idea behind this project was to consider how such a robot could be adapted to work well in a social context, for example, during a meal with family or friends. The team interviewed people with impaired mobility to assess their needs, and came up with a set of design principles for creating robot-assisted feeding systems and an implementation guide for future research in this area. You can read the research paper on this project here.

You can find out more about these three projects, and the other projects that Patrícia has been involved in, here.

Astro robot

Patrícia has long been interested in robots for the real world, and how this real-world experience is aligned with the study of robots in academia and industry. She decided to leave academia and join the Astro robot programme, which she felt was a great opportunity to work on a large-scale real-world robot project.

The Astro robot is a home robot designed to assist with tasks such as monitoring your house, delivering small objects within the home, recognising your pet, telling a story, or playing games.

Patrícia took us through a typical day in the life of a designer, where she always has in mind the bigger picture of what the team is aiming for, in other words, what the ideal robot, and its interactions with humans, would look like. Coupled with that, the process is governed by core design tenets, such as customer needs, and non-negotiable core elements that the robot should include. When considering a particular element of the robot design, for example, the delivery of an item in the robot’s tray, Patrícia uses storyboards to map out details of potential human-robot interactions. An important aspect of design concerns edge cases, which occur regularly in the real world. Good design will consider potential edge cases and incorporate ways to deal with them.

Patrícia closed by emphasising the importance of teamwork in the design process, in particular, the need for interdisciplinary teams; by considering design from many different points of view, the chance of innovation is higher.

You can find out more about the Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium here.

Code to Joy: Why Everyone Should Learn a Little Programming – Interview with Michael Littman

Code to Joy: Why Everyone Should Learn a Little Programming is a new book from Michael Littman, Professor of Computer Science at Brown University and a founding trustee of AIhub. We spoke to Michael about what the book covers, what inspired it, and how we are all familiar with many programming concepts in our daily lives, whether we realize it or not.

Could you start by telling us a bit about the book, and who the intended audience is?

The intended audience is not computer scientists, although I have been getting a very warm reception from computer scientists, which I appreciate. The idea behind the book is to try to help people understand that telling machines what to do (which is how I view much of computer science and AI) is something that is really accessible to everyone. It builds on skills and practices that people already have. I think it can be very intimidating for a lot of people, but I don’t think it needs to be. I think that the foundation is there for everybody and it’s just a matter of tapping into that and building on top of it. What I’m hoping, and what I’m seeing happening, is that machine learning and AI is helping to meet people part way. The machines are getting better at listening as we try to get better at telling them what to do.

What made you decide to write the book, what was the inspiration behind it?

I’ve taught large introductory computer science classes and I feel like there’s an important message in there about how a deeper knowledge of computing can be very empowering, and I wanted to bring that to a larger audience.

Could you talk a bit about the structure of the book?

The meat of the book talks about the fundamental components that make up programs, or, in other words, that make up the way that we tell computers what to do. Each chapter covers a different one of those topics – loops, variables, conditionals, for example. Within each chapter I talk about the ways in which this concept is already familiar to people, the ways that it shows up in regular life. I point to existing pieces of software or websites where you can make use of that one particular concept to tell computers what to do. Each chapter ends with an introduction to some concepts from machine learning that can help create that particular programming construct. For example, in the chapter on conditionals, I talk about the ways that we use the word “if” in regular life all the time. Weddings, for example, are very conditionally structured, with statements like “if anyone has anything to say, speak now or forever hold your peace”. That’s kind of an “if-then” statement. In terms of tools to play with, I talk about interactive fiction. Partway between video games and novels is this notion that you can make a story that adapts itself while it’s being read. What makes that interesting is this notion of conditionals – the reader can make a choice and that will cause a branch. There are really wonderful tools for being able to play with this idea online, so you don’t have to be a full-fledged programmer to make use of conditionals. The machine learning concept introduced there is decision trees, which is an older form of machine learning where you give a system a bunch of examples and then it outputs a little flowchart for decision making.
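Michael’s decision-tree description can be made concrete with a tiny sketch (not from the book; the examples and feature names below are invented). Given a handful of labelled examples, scikit-learn’s DecisionTreeClassifier produces exactly the kind of “little flowchart” of if/else conditions he mentions, which export_text prints out:

```python
# Minimal sketch (not from the book): a decision tree learned from made-up examples.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each example: [is_raining, is_weekend]; label: 1 = go for a walk, 0 = stay in.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [1, 1, 0, 0]   # in this toy data the learned rule is simply "stay in if it's raining"

tree = DecisionTreeClassifier().fit(X, y)

# Print the learned flowchart of if/else conditions.
print(export_text(tree, feature_names=["is_raining", "is_weekend"]))
```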

Do you touch on generative AI in the book?

The book was already in production by the time ChatGPT came out, but I was ahead of the curve, and I did have a section specifically about GPT-3 (pre-ChatGPT) which talks about what it is, how machine learning creates it, and how it itself can be helpful in making programs. So, you see it from both directions. You get the notion that this tool actually helps people tell machines what to do, and also the way that humanity created this tool in the first place using machine learning.

Did you learn anything while you were writing the book that was particularly interesting or surprising?

Researching the examples for each chapter caused me to dig into a whole bunch of topics. This notion of interactive fiction, and that there’s tools for creating interactive fiction, I found pretty interesting. When researching another chapter, I found an example from a Jewish prayer book that was just so shocking to me. So, Jewish prayer books (and I don’t know if this is true in other belief systems as well, but I’m mostly familiar with Judaism), contain things you’re supposed to read, but they have little conditional markings on them sometimes. For example, one might say “don’t read this if it’s a Saturday”, or “don’t read this if it’s a full moon”, or “don’t read if it’s a full moon on a Saturday”. I found one passage that actually had 14 different conditions that you had to check to decide whether or not it was appropriate to read this particular passage. That was surprising to me – I had no idea that people were expected to do so much complex computation during a worship activity.

Why is it important that everybody learns a little programming?

It’s really important to keep in mind the idea that at the end of the day what AI is doing is making it easier for us to tell machines what to do, and we should share that increased capability with a broad population. It shouldn’t just be the machine learning engineers who get to tell computers what to do more easily. We should find ways of making this easier for everybody.

Because computers are here to help, but it’s a two-way street. We need to be willing to learn to express what we want in a way that can be carried out accurately and automatically. If we don’t make that effort, then other parties, often companies, will step in and do it for us. At that point, the machines are working to serve someone else’s interest instead of our own. I think it’s become absolutely essential that we restore a healthy relationship with these machines before we lose any more of our autonomy.

Any final thoughts or takeaways that we should bear in mind?

I think there’s a message here for computer science researchers, as well. When we tell other people what to do, we tend to combine a description or a rule, something that’s sort of program-like, with examples, something that’s more data-like. We just intermingle them when we talk to each other. At one point when I was writing the book, I had a dishwasher that was acting up and I wanted to understand why. I read through its manual, and I was struck by how often it was the case that in telling people what to do with the dishwasher, the authors would consistently mix together a high-level description of what they are telling you to do with some particular, vivid examples: a rule for what to load into the top rack, and a list of items that fit that rule. That seems to be the way that people want to both convey and receive information. What’s crazy to me is that we don’t program computers that way. We either use something that’s strictly programming, all rules, no examples, or we use machine learning, where it’s all examples, no rules. I think the reason that people communicate this way with each other is because those two different mechanisms have complementary strengths and weaknesses and when you combine the two together, you maximize the chance of being accurately understood. And that’s the goal when we’re telling machines what to do. I want the AI community to be thinking about how we can combine what we’ve learned about machine learning with something more programming-like to make a much more powerful way of telling machines what to do. I don’t think this is a solved problem yet, and that’s something that I really hope that people in the community think about.


Code to Joy: Why Everyone Should Learn a Little Programming is available to buy now.

Michael Littman

Michael L. Littman is a University Professor of Computer Science at Brown University, studying machine learning and decision making under uncertainty. He has earned multiple university-level awards for teaching and his research on reinforcement learning, probabilistic planning, and automated crossword-puzzle solving has been recognized with three best-paper awards and three influential paper awards. Littman is co-director of Brown’s Humanity Centered Robotics Initiative and a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. He is also a Fellow of the American Association for the Advancement of Science Leshner Leadership Institute for Public Engagement with Science, focusing on Artificial Intelligence. He is currently serving as Division Director for Information and Intelligent Systems at the National Science Foundation.

Interview with Marek Šuppa: insights into RoboCupJunior

A RoboCupJunior soccer match in action.

In July this year, 2500 participants congregated in Bordeaux for RoboCup2023. The competition comprises a number of leagues, and among them is RoboCupJunior, which is designed to introduce RoboCup to school children, with the focus being on education. There are three sub-leagues: Soccer, Rescue and OnStage.

Marek Šuppa serves on the Executive Committee for RoboCupJunior, and he told us about the competition this year and the latest developments in the Soccer league.

What is your role in RoboCupJunior and how long have you been involved with this league?

I started with RoboCupJunior quite a while ago: my first international competition was in 2009 in Graz, where I was lucky enough to compete in Soccer for the first time. Our team didn’t do all that well in that event but RoboCup made a deep impression and so I stayed around: first as a competitor and later to help organise the RoboCupJunior Soccer league. Right now I am serving as part of the RoboCupJunior Execs who are responsible for the organisation of RoboCupJunior as a whole.

How was the event this year? What were some of the highlights?

I guess this year’s theme or slogan, if we were to give it one, would be “back to normal”, or something like that. Although RoboCup 2022 already took place in-person in Thailand last year after two years of a pandemic pause, it was in a rather limited capacity, as COVID-19 still affected quite a few regions. It was great to see that the RoboCup community was able to persevere and even thrive throughout the pandemic, and that RoboCup 2023 was once again an event where thousands of robots and roboticists meet.

It would also be difficult to do this question justice without thanking the local French organisers. They were actually ready to organise the event in 2020 but it got cancelled due to COVID-19. But they did not give up on the idea and managed to put together an awesome event this year, for which we are very thankful.

Examples of the robots used by the RoboCupJunior Soccer teams.

Turning to RoboCupJunior Soccer specifically, could you talk about the mission of the league and how you, as organisers, go about realising that mission?

The mission of RoboCupJunior consists of two competing objectives: on one hand, it needs to be a challenge that’s approachable, interesting and relevant for (mostly) high school students and at the same time it needs to be closely related to the RoboCup “Major” challenges, which are tackled by university students and their mentors. We are hence continuously trying to both make it more compelling and captivating for the students and at the same time ensure it is technical enough to help them grow towards the RoboCup “Major” challenges.

One of the ways we do that is by introducing what we call “SuperTeam” challenges, in which teams from respective countries form a so-called “SuperTeam” and compete against another “SuperTeam” as if each were a single team. In RoboCupJunior Soccer the “SuperTeams” are composed of four to five teams, and they compete on a field that is six times larger than the “standard” fields used for the individual games. While in the individual matches each team can play with two robots at most (resulting in a 2v2 game), in a SuperTeam match each SuperTeam fields five robots, meaning there are 10 robots on the SuperTeam field during a SuperTeam match. The setup is very similar to Division B of the Small Size League of RoboCup “Major”.

The SuperTeam games have existed in RoboCupJunior Soccer since 2013, so for quite a while, and the feedback we received on them was overwhelmingly positive: it was a lot of fun for both the participants and the spectators. But compared to the Small Size League games there were still two noticeable differences: the robots did not have a way of communicating with one another, and the referees did not have a way of communicating with the robots. The result was that not only was there little coordination among robots of the same SuperTeam, but also that whenever the game needed to be stopped, the referees had to physically run after the robots on the field to catch them and do a kickoff after a goal was scored. Although hilarious, it’s far from how we would imagine the SuperTeam games to look.

The RoboCupJunior Soccer Standard Communication Modules aim to address both of these gaps. The module itself is a small device that is attached to each robot on the SuperTeam field. These devices are all connected via Bluetooth to a single smartphone, through which the referee can send commands to all robots on the field. The devices also support direct message exchange between robots of the same SuperTeam, meaning the teams do not have to invest in figuring out how to communicate with the other robots but can make use of a common platform. The devices, as well as their firmware, are open source, which means not only that anyone can build their own Standard Communication Module if they’d like, but also that the community can participate in their development – an interesting addition to RoboCupJunior Soccer.
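To give a rough idea of the referee-to-robot side of this, here is a minimal Python sketch of a referee app broadcasting a game-state command over Bluetooth Low Energy to every module on the field, using the bleak library. The device addresses, the characteristic UUID and the command encoding are hypothetical placeholders for illustration; the real protocol is defined by the modules’ open-source firmware and is not described in the interview.

```python
import asyncio
from bleak import BleakClient

# Hypothetical placeholders: the real module addresses and the GATT
# characteristic used for referee commands are defined by the firmware.
MODULE_ADDRESSES = ["AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"]
REFEREE_CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"

# Illustrative one-byte command encoding.
COMMANDS = {"STOP": b"\x00", "KICKOFF_BLUE": b"\x01", "KICKOFF_YELLOW": b"\x02"}

async def send_command(address: str, command: bytes) -> None:
    """Connect to one module and write the referee command."""
    async with BleakClient(address) as client:
        await client.write_gatt_char(REFEREE_CHAR_UUID, command)

async def broadcast(command_name: str) -> None:
    """Send the same command to every module on the field."""
    command = COMMANDS[command_name]
    await asyncio.gather(*(send_command(addr, command) for addr in MODULE_ADDRESSES))

if __name__ == "__main__":
    # e.g. stop all robots when the main referee blows the whistle
    asyncio.run(broadcast("STOP"))
```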

RoboCupJunior Soccer teams getting ready for the competition.

How did this new module work out in the competition? Did you see an improvement in experience for the teams and organisers?

In this first big public test we focused on exploring how (and whether) these modules can improve the gameplay – especially the “chasing robots at kickoff”. Although we’ve done “lab experiments” in the past and had some empirical evidence that it should work rather well, this was the first time we tried it in a real competition.

All in all, I would say that it was a very positive experiment. The modules themselves did work quite well and for some of us, who happened to have experience with “robot chasing” mentioned above, it was sort of a magical feeling to see the robots stop right on the main referee’s whistle.

We also identified potential areas for improvement for the future. The modules do not have a power source of their own and were powered by the robots themselves. We didn’t think this would be a problem, but in the “real world” test it transpired that the voltage levels the robots are capable of providing fluctuate significantly – for instance when a robot decides to accelerate aggressively – which in turn means some of the modules disconnect when the voltage drops too far. It ended up being a useful lesson for everyone involved, one that we can certainly learn from when we design the next iterations.


The livestream from Day 4 of RoboCupJunior Soccer 2023. This stream includes the SuperTeam finals and the technical challenges. You can also view the livestream of the semifinals and finals from day three here.

Could you tell us about the emergence of deep-learning models in the RoboCupJunior leagues?

This is something we started to observe in recent years, and it surprised us organisers to some extent. In our day-to-day jobs (that is, when we are not organising RoboCup), many of us organisers work in areas related to robotics, computer science, and engineering in general – with some of us also doing research in artificial intelligence and machine learning. And while we always thought that it would be great to see more of the cutting-edge research being applied at RoboCupJunior, we always dismissed it as something too advanced and/or too difficult to set up for the high-school students who make up the majority of RoboCupJunior participants.

Well, to our great surprise, some of the more advanced teams have started to use methods and technologies that are very close to the current state of the art in various areas, particularly computer vision and deep learning. A good example would be object detectors (usually based on the YOLO architecture), which are now used across all three Junior leagues: in OnStage to detect the various props, robots and humans who perform on the stage together, in Rescue to detect the victims the robots are rescuing, and in Soccer to detect the ball, the goals, and the opponents. And while the participants generally used off-the-shelf implementations, they still needed to carry out all the steps necessary for a successful deployment of this technology: gather a dataset, finetune the deep-learning model and deploy it on their robots – all of which is far from trivial and is very close to how these technologies are used in both research and industry.
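As a rough illustration of that workflow – gather a dataset, finetune an off-the-shelf detector, deploy it – a sketch along these lines could be put together with the ultralytics YOLO package. The dataset file, training settings and export format below are illustrative assumptions, not a description of any particular team’s pipeline.

```python
from ultralytics import YOLO

# Start from a small pretrained checkpoint (illustrative choice).
model = YOLO("yolov8n.pt")

# Finetune on a team-collected dataset. "soccer_field.yaml" is a
# hypothetical dataset description pointing at labelled images of
# the ball, goals and opponent robots.
model.train(data="soccer_field.yaml", epochs=50, imgsz=416)

# Quick sanity check on a held-out frame.
results = model.predict("test_frame.jpg", conf=0.5)

# Export to a format suitable for the robot's onboard computer
# (ONNX here as an example).
model.export(format="onnx")
```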

Although we have seen only the more advanced teams use deep-learning models at RoboCupJunior, we expect that in the future we will see it become much more prevalent, especially as the technology and the tooling around it becomes more mature and robust. It does show, however, that despite their age, the RoboCupJunior students are very close to cutting-edge research and state-of-the-art technologies.

Action from RoboCupJunior Soccer 2023.

How can people get involved in RoboCupJunior (either as a participant or an organiser)?

A very good question!

The best place to start would be the RoboCupJunior website, where one can find many interesting details about RoboCupJunior, the respective leagues (such as Soccer, Rescue and OnStage), and the relevant regional representatives who organise regional events. Getting in touch with a regional representative is by far the easiest way of getting started with RoboCupJunior.

Additionally, I can certainly recommend the RoboCupJunior forum, where many RoboCupJunior participants, past and present, as well as the organisers, discuss many related topics in the open. The community is very beginner friendly, so if RoboCupJunior sounds interesting, do not hesitate to stop by and say hi!

About Marek Šuppa

Marek Suppa

Marek stumbled upon AI as a teenager when building soccer-playing robots and quickly realised he is not smart enough to do all the programming by himself. Since then, he’s been figuring out ways to make machines learn by themselves, particularly from text and images. He currently serves as the Principal Data Scientist at Slido (part of Cisco), improving the way meetings are run around the world. Staying true to his roots, he tries to provide others with a chance to have a similar experience by organising the RoboCupJunior competition as part of the Executive Committee.

#RoboCup2023 in tweets – part 2

The RoboCup 2023 logo.

As this year’s RoboCup draws to a close, we take a look back at some of the highlights from the second half of the conference. Over the course of the weekend, the event focussed on the latter stages of the competitions, with the winners in all the different leagues decided. If you missed our round-up of the first half of RoboCup, you can catch up here.

#RoboCup2023 in tweets – part 1

AIhub | Tweets round-up

This year’s RoboCup kicked off on 4 July and will run until 10 July. Taking place in Bordeaux, the event will see around 2,500 participants from 45 different countries take part in competitions, training sessions, and a symposium. Find out what attendees have been up to in preparation for, and in the first half of, the event.

[UPDATE] A list of resources, articles, and opinion pieces relating to large language models & robotics

Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / Licenced by CC-BY 4.0.

We’ve collected some of the articles, opinion pieces, videos and resources relating to large language models (LLMs). Some of these links also cover other generative models. We will periodically update this list to add any further resources of interest. This article represents the third in the series. (The previous versions are here: v1 | v2.)

What LLMs are and how they work

Journal, conference, arXiv, and other articles

Newspaper, magazine, University website, and blogpost articles

Reports

Podcasts and video discussions

Focus on LLMs and education

Relating to art and other creative processes

Pertaining to robotics

Misinformation, fake news and the impact on journalism

Regulation and policy

CLAIRE and euRobotics: all questions answered on humanoid robotics

a pile of question marks

On 9 December, CLAIRE and euRobotics jointly hosted an All Questions Answered (AQuA) event. This one-hour session focussed on humanoid robotics, and participants could ask questions about the current and future state of AI, robotics and human augmentation in Europe.

The questions were fielded by an expert panel, comprising:

  • Rainer Bischoff, euRobotics
  • Wolfram Burgard, Professor of Robotics and AI, University of Technology Nuremberg
  • Francesco Ferro, CEO, PAL Robotics
  • Holger Hoos, Chair of the Board of Directors, CLAIRE

The session was recorded and you can watch it in full below:

RoboCup humanoid league: Interview with Jasper Güldenstein

Jasper

RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots, AI and automation. The annual RoboCup event returned to an in-person format for 2022, taking place from 13-17 July in Bangkok. RoboCup comprises a number of leagues, with perhaps the most well-known being the soccer leagues.

In the Humanoid League, autonomous robots with a human-inspired body plan and senses play soccer against each other. We spoke to Jasper Güldenstein, a member of the technical committee, about the competition at RoboCup 2022, and also about the Humanoid League Virtual Season. As a biweekly virtual competition taking place between the physical RoboCup events, it provides teams with an opportunity to test their ideas and keep in touch throughout the year.

Could you give us an overview of the Humanoid League competition at RoboCup this year?

This year we had the first in-person event after a three-year break. It was really good to have the majority of the teams coming back together and working on their robots again. Although a lot of teams came with fewer participants than they usually do, we still managed to have 12 teams in the KidSize league and three teams in the AdultSize league. Unfortunately, some teams could not participate due to travel restrictions, but we hope to see them back next year.

Humanoid League finalists with their robots.

What was the format for the competition?

At the beginning we had a drop-in round, which is where we play games with one robot from each team participating in a joint team – so we have eight robots playing, from eight different teams. That is useful for finding out which teams are likely to perform best in the competition. Next, the results from the drop-in games were used as seeding for the round-robin phase of the regular competition, with the strongest teams separated into different groups. After the round robin, we have a knockout competition. The seeding method means that we can hopefully avoid the situation where very good teams get knocked out early. We saw that the most interesting games were towards the end of the competition, when the teams performed really well.
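The interview doesn’t spell out the exact seeding procedure, but the idea of spreading the strongest drop-in performers across different round-robin groups can be illustrated with a simple “snake” allocation; the team labels in this sketch are made up.

```python
def snake_seed(ranked_teams, num_groups):
    """Distribute teams (best first) across groups so that the
    strongest teams end up in different groups ("snake" order)."""
    groups = [[] for _ in range(num_groups)]
    for i, team in enumerate(ranked_teams):
        round_idx = i // num_groups
        pos = i % num_groups
        # Reverse the fill direction on every other pass.
        group = pos if round_idx % 2 == 0 else num_groups - 1 - pos
        groups[group].append(team)
    return groups

# Hypothetical drop-in ranking, best team first.
ranking = ["A", "B", "C", "D", "E", "F", "G", "H"]
print(snake_seed(ranking, num_groups=2))
# [['A', 'D', 'E', 'H'], ['B', 'C', 'F', 'G']]
```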

Have you noticed improvements since the last physical competition?

I’d say definitely that one big thing that has improved for a lot of teams is the localisation. A lot of teams are more easily able to localise themselves on the field, and they don’t run off randomly. They are more certain that they are in the correct position.

Furthermore, I think the kicking has improved. The robots kick the ball much further than they used to. People have been tuning their kicking motions to increase the distance.

In terms of computer vision, this has definitely improved quite a bit. Something we did differently this time, which was inspired by what we did in the virtual season, is that we had a set of six different balls, all from previous FIFA competitions. For each game a ball was drawn randomly, so the teams couldn’t really prepare for all the balls. Although they were visually quite different, the teams didn’t really have any problems detecting the ball. We’ve seen, in general, that computer vision approaches have improved and these improvements have been transferred to the RoboCup competition. I think that almost all teams are using a neural network to detect the ball. This is a change from three, four, five years ago, when many teams used hand-tuned classical computer vision algorithms.

To talk a bit more about ball detection, it will be interesting to see what happens if we move to an environment with natural and/or varying light conditions. This year we were in a convention hall with uniform lighting. I believe next year, in Bordeaux, there is going to be some form of natural light coming in, and perhaps even fields that are outside. It’s still at the planning stage but we are looking forward to that. It will be a challenge and I strongly believe that the participants will find approaches to make their vision approach robust against these varying conditions.

The setup and testing for the Humanoid League competition at RoboCup 2022, Bangkok.

Thinking about the transfer from the simulation to the real world, are there any specific elements that lend themselves well to being transferred?

In terms of computer vision, we had a bit of transfer. In the virtual season we concentrated a lot on changing the lighting conditions and having varied backgrounds, to be able to emulate the real world a bit better. I think a few teams used their vision approaches from the virtual season in the real world.

However, I think the strongest part is behaviour. Teams were able to test their strategies in the virtual competition and adapt every other week. For example, CIT Brains, which won both the virtual season and the physical competition, made quite a few changes to their strategy and they had robust systems running. Their strategy worked really well, and in the final they managed to score several goals and win against the previous world champions (Rhoban Football Club).

How did the competition go for your team (Hamburg Bit-Bots)?

We actually had quite a few hardware problems, especially on the mechanics side. The motors were wearing out; they warped due to wear and flexed more than we expected. This meant we had difficulties walking stably. And if you can’t walk in a stable manner, that defeats the purpose of everything else. It’s a really integrated system – if one component breaks, you are out of luck, as you are very restricted in what you can change during the competition and you don’t have much spare equipment with you.

However, what was good for us was that we had a lot of software up and running, and a lot of it had been tested really well during the virtual league. We had to find a way around the walking problem algorithmically, searching for walking parameters that were more stable. We also switched from ROS 1 to ROS 2 [ROS: robot operating system], which brought with it many challenges. We actually did a write-up on the issues we faced, with some tips and tricks.
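Searching for more stable walking parameters can be as simple as scoring candidate settings of the walk engine with some stability measure. The sketch below shows a plain random search; the parameter names, ranges and the evaluate_stability stub are hypothetical stand-ins, not the Hamburg Bit-Bots’ actual tooling.

```python
import random

# Hypothetical parameter ranges for a walk engine.
PARAM_RANGES = {
    "step_height_m": (0.02, 0.06),
    "step_period_s": (0.4, 0.8),
    "torso_pitch_rad": (-0.1, 0.1),
}

def evaluate_stability(params):
    """Placeholder stability score. In practice this would run the walk
    (in simulation or on the robot) and measure, e.g., torso sway; here
    we just penalise extreme settings so the example runs."""
    return -abs(params["step_height_m"] - 0.04) - abs(params["torso_pitch_rad"])

def random_search(num_trials=100, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(num_trials):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}
        score = evaluate_stability(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

print(random_search())
```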

Will there be a new virtual league season this year?

Yes, we’ve discussed this in our technical committee and we plan on doing it again. The last event was successful and the teams enjoyed it. We plan on making some changes, such as logging the games to extract some interesting metrics and doing some analysis on those.

Another thing we want to do is domain randomisation – making the environment a bit more varied. This means that the approaches have to be more robust, and the hope is that, when they are more robust, they can be transferred more easily to the real world. We were thinking about making the terrain slightly uneven. Another approach could be to modify the models of the robots so that the joints emulate a bit of wear: the simulated actuators might be randomly a bit weaker or stronger, and teams would have to find robust approaches to deal with that.
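As a minimal sketch of what per-episode domain randomisation could look like – assuming a simulator that lets you scale actuator strength and perturb the terrain – one might sample a configuration like this before each game. The parameter names and ranges are illustrative, not the league’s actual simulation settings.

```python
import random

def randomise_episode(joint_names, rng):
    """Sample a per-episode randomisation: each simulated actuator is
    made slightly weaker or stronger, and the terrain gets small bumps."""
    actuator_scale = {name: rng.uniform(0.85, 1.15) for name in joint_names}
    terrain_bumps_m = [rng.uniform(0.0, 0.01) for _ in range(64)]  # 64 grid cells
    return {"actuator_scale": actuator_scale, "terrain_bumps_m": terrain_bumps_m}

rng = random.Random(42)
episode_cfg = randomise_episode(["l_knee", "r_knee", "l_ankle", "r_ankle"], rng)
# episode_cfg would then be applied to the simulated robot and field
# before the game starts.
```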

We won’t do everything at the beginning. We’ll move through the season and talk to the teams and form some organising groups to develop the simulation further, to run the games and to organise the competition itself. We are always happy to have input and we always talk to the teams to see what they think. It’s a competition but it’s something we build together.

Humanoid robots at RoboCup 2022, Bangkok.

Could you tell us about the satellite event that took place at RoboCup?

This was a discussion about how to get teams more interested in participating and how to bridge the junior leagues and the major leagues.

We know that some people who participated in RoboCupJunior specifically selected a university with a RoboCup team so that they could join that team. It would be awesome for more people to do this, and for more juniors to know what the major leagues are about.

To bridge the gap between the junior and major leagues we don’t want to introduce another league, but we want some form of events where the two groups can meet, and where the juniors can show off how well they are doing. It would be good to have more interaction between the leagues, although we haven’t decided on the exact format yet.

About Jasper

Jasper

Jasper Güldenstein is a PhD student at the University of Hamburg. His research focuses on humanoid navigation. He has been participating in RoboCup as a member of the Hamburg Bit-Bots since 2016, where his focus is on developing not only the team’s software but also the electronics of the robot platform. In his recent master’s thesis he evaluated the use of reinforcement learning to perform path planning and execution for humanoid robots.