AI holidays 2023

Thanks to those who sent and suggested AI and robotics-themed holiday videos, images, and stories. Here’s a sample to get you into the spirit this season.


Josh Miller used ChatGPT to create the 12 days of Christmas. Prompt: “I want to make 12 images in an early 1980s era to depict the twelve days of Christmas”.

See the pictures in higher definition here.


“Santa’s Secret: Robots Saved Christmas! | CPS XMAS video 2023” from Chair of Cyber-Physical Systems.


“AI rewriting classic Christmas stories” from GingerPale.


Using PaLM2 to generate haikus.

Prompt: “Write me a haiku about an alternative xmas”.
No gifts, no grand feast,
Just laughter and joy, warmth spread,
Kindness fills the air.

Prompt: “Write a haiku about the winter holiday season”.
Snow blankets the land,
Hearts aglow with love’s warmth,
Joyous cheer surrounds.


“Put a Bow on It | Happy Holidays” from Boston Dynamics.


“Christmas Magic: Animated Backgrounds for an Enchanting Atmosphere” from AI-ArtificialIntelligence.


“XMAS – Episode 3: AI – Christmas music for the HAKOM Christmas party” from HAKOM Time Series.


“Christmas tree in python notebook with Turtle” from Learn Upwards.


“FZI Living Lab Christmas Robotics 2023 – Robotic Reindeer Rescue” from FZI channel.


A letter to Santa generated by PaLM2. Prompt: “Write me a letter to Santa Claus. I don’t want any presents, just inspiration in my machine learning research for 2024.”

Dear Santa Claus,

As the year comes to a close, I find myself reflecting on the remarkable advancements in machine learning that have transformed our world. From self-driving cars to personalized medicine, these innovations have opened up new possibilities and enriched our lives in ways we could have only imagined a few decades ago.

I am writing to you today not to ask for material gifts, but rather for something more precious: inspiration to continue exploring the frontiers of machine learning in the year to come. As a researcher in this field, I am eager to contribute to the development of algorithms and techniques that can further enhance our understanding of the world around us and create solutions to pressing global challenges.

I am particularly fascinated by the potential of machine learning to address the climate crisis. By harnessing the power of data and computation, we can develop more efficient renewable energy sources, optimize transportation systems, and mitigate the impact of extreme weather events. Additionally, I believe that machine learning can play a crucial role in improving education, healthcare, and environmental sustainability.

I know that you have a special ability to see the world in a unique and insightful way. I hope that you can share some of your wisdom with me and guide me towards promising research directions. I am eager to embark on new challenges and make a positive impact on the world through machine learning.

Thank you for considering my request, Santa Claus. I look forward to your guidance and inspiration as I pursue my research endeavors in the year ahead.

With warmest regards,

[Your Name]

AAAI Fall Symposium: Patrícia Alves-Oliveira on human-robot interaction design

An illustration containing electronic devices connected by arm-like structures. Anton Grabolle / Better Images of AI / Human-AI collaboration / Licenced by CC-BY 4.0

The AAAI Fall Symposium Series took place in Arlington, USA, and comprised seven different symposia. One of these, the tenth Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium was run as a hybrid in-person/online event, and we tuned in to the opening keynote, which was given by Patrícia Alves-Oliveira.

As a psychology student, Patrícia’s dream was to become a therapist. However, an internship, where she encountered a robot for the first time, inspired her to change her plans, and she decided to go into the field of human-robot interaction. Following a PhD in the field, she worked as a postdoc, before heading to industry as a designer in the Amazon Astro robot team.

Patrícia has worked on a number of interesting projects during her time in academia and in industry. Thinking about how to design robots for specific user needs, and keeping the user at the forefront during the design process, has been core to her work. She began by summarising three very different academic projects.

Creativity and robotics

The objective of this project was to design, fabricate, and evaluate robots as creativity-provoking tools for kids. Patrícia created a social robot named YOLO (or Your Own Living Object) that she designed to be child-proof (in other words, it could withstand being dropped and knocked over), with the aim of trying to help children explore their creativity during play. A machine learning algorithm learns the child’s pattern of play and adapts the robot’s behaviour accordingly. You can see the robot in action in the demo below:

FLEXI robot

As a postdoc project, Patrícia worked on building FLEXI, a social robot embodiment kit. This kit consists of a robot (with a face, and a torso with a screen on the front), which can be customised, and an open-source end-user programming interface designed to be user-friendly. The customisation element means that it can be used for many applications. The team has deployed FLEXI across three application scenarios: community-support, mental health, and education, with the aim of assessing the flexibility of the system. You can see the robot in action, in different scenarios, here.

Social dining

This project centred on a robotic arm for people with impaired mobility. Such systems already exist for assisting people with tasks such as eating. However, in a social context they can often form a barrier between the user and the rest of the group. The idea behind this project was to consider how such a robot could be adapted to work well in a social context, for example, during a meal with family or friends. The team interviewed people with impaired mobility to assess their needs, and came up with a set of design principles for creating robot-assisted feeding systems and an implementation guide for future research in this area. You can read the research paper on this project here.

You can find out more about these three projects, and the other projects that Patrícia has been involved in, here.

Astro robot

Patrícia has long been interested in robots for the real world, and how this real-world experience is aligned with the study of robots in academia and industry. She decided to leave academia and join the Astro robot programme, which she felt was a great opportunity to work on a large-scale real-world robot project.

The Astro robot is a home robot designed to assist with tasks such as monitoring your house, delivering small objects within the home, recognising your pet, telling a story, or playing games.

Patrícia took us through a typical day in the life of a designer where she always has in mind the bigger picture of what the team is aiming for, in other words, what the ideal robot, and its interactions with humans, would look like. Coupled with that, the process is governed by core design tenets, such as the customer needs, and non-negotiable core elements that the robot should include. When considering a particular element of the robot design, for example, the delivery of an item in the robot tray, Patrícia uses storyboards to map out details of potential human-robot interactions. An important aspect of design concerns edge cases, which occur regularly in the real world. Good design will consider potential edge cases and incorporate ways to deal with them.

Patrícia closed by emphasising the importance of teamwork in the design process, in particular, the need for interdisciplinary teams; by considering design from many different points of view, the chance of innovation is higher.

You can find out more about the Artificial Intelligence for Human-Robot Interaction (AI-HRI) symposium here.

Code to Joy: Why Everyone Should Learn a Little Programming – Interview with Michael Littman

Code to Joy: Why Everyone Should Learn a Little Programming is a new book from Michael Littman, Professor of Computer Science at Brown University and a founding trustee of AIhub. We spoke to Michael about what the book covers, what inspired it, and how we are all familiar with many programming concepts in our daily lives, whether we realize it or not.

Could you start by telling us a bit about the book, and who the intended audience is?

The intended audience is not computer scientists, although I have been getting a very warm reception from computer scientists, which I appreciate. The idea behind the book is to try to help people understand that telling machines what to do (which is how I view much of computer science and AI) is something that is really accessible to everyone. It builds on skills and practices that people already have. I think it can be very intimidating for a lot of people, but I don’t think it needs to be. I think that the foundation is there for everybody and it’s just a matter of tapping into that and building on top of it. What I’m hoping, and what I’m seeing happening, is that machine learning and AI is helping to meet people part way. The machines are getting better at listening as we try to get better at telling them what to do.

What made you decide to write the book, what was the inspiration behind it?

I’ve taught large introductory computer science classes and I feel like there’s an important message in there about how a deeper knowledge of computing can be very empowering, and I wanted to bring that to a larger audience.

Could you talk a bit about the structure of the book?

The meat of the book talks about the fundamental components that make up programs, or, in other words, that make up the way that we tell computers what to do. Each chapter covers a different one of those topics – loops, variables, conditionals, for example. Within each chapter I talk about the ways in which this concept is already familiar to people, the ways that it shows up in regular life. I point to existing pieces of software or websites where you can make use of that one particular concept to tell computers what to do. Each chapter ends with an introduction to some concepts from machine learning that can help create that particular programming construct. For example, in the chapter on conditionals, I talk about the ways that we use the word “if” in regular life all the time. Weddings, for example, are very conditionally structured, with statements like “if anyone has anything to say, speak now or forever hold your peace”. That’s kind of an “if-then” statement. In terms of tools to play with, I talk about interactive fiction. Partway between video games and novels is this notion that you can make a story that adapts itself while it’s being read. What makes that interesting is this notion of conditionals – the reader can make a choice and that will cause a branch. There are really wonderful tools for being able to play with this idea online, so you don’t have to be a full-fledged programmer to make use of conditionals. The machine learning concept introduced there is decision trees, which is an older form of machine learning where you give a system a bunch of examples and then it outputs a little flowchart for decision making.
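The idea Michael describes — give a system labelled examples and get back a little if-then flowchart — can be sketched in a few lines. The following toy Python is not from the book; the "umbrella" examples and question names are invented for illustration, and real decision-tree learners (and libraries such as scikit-learn) are far more sophisticated:

```python
# Toy illustration of the decision-tree idea: from labelled examples,
# pick the yes/no question that best splits them, and return the result
# as a tiny "if-then" rule (a one-level flowchart).

def best_question(examples, questions):
    """Pick the question whose yes/no split leaves the fewest mixed labels."""
    def impurity(group):
        if not group:
            return 0.0
        p = sum(1 for _, label in group if label) / len(group)
        return 2 * p * (1 - p) * len(group)  # size-weighted Gini impurity

    def split_score(q):
        yes_side = [(x, l) for x, l in examples if x[q]]
        no_side = [(x, l) for x, l in examples if not x[q]]
        return impurity(yes_side) + impurity(no_side)

    return min(questions, key=split_score)

def majority(group):
    """Majority label of a group of (features, label) pairs."""
    return sum(1 for _, l in group if l) * 2 >= len(group)

def decision_stump(examples, questions):
    """Learn a one-question tree: {"if": question, "then": ..., "else": ...}."""
    q = best_question(examples, questions)
    yes_side = [(x, l) for x, l in examples if x[q]]
    no_side = [(x, l) for x, l in examples if not x[q]]
    return {"if": q, "then": majority(yes_side), "else": majority(no_side)}

# Invented examples: should I bring an umbrella?
examples = [
    ({"cloudy": True, "weekend": False}, True),
    ({"cloudy": True, "weekend": True}, True),
    ({"cloudy": False, "weekend": True}, False),
    ({"cloudy": False, "weekend": False}, False),
]
rule = decision_stump(examples, ["cloudy", "weekend"])
print(rule)  # {'if': 'cloudy', 'then': True, 'else': False}
```

The "learned" rule is exactly the kind of readable branching structure — a conditional — that the chapter connects back to everyday life.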

Do you touch on generative AI in the book?

The book was already in production by the time ChatGPT came out, but I was ahead of the curve, and I did have a section specifically about GPT-3 (pre-ChatGPT) which talks about what it is, how machine learning creates it, and how it itself can be helpful in making programs. So, you see it from both directions. You get the notion that this tool actually helps people tell machines what to do, and also the way that humanity created this tool in the first place using machine learning.

Did you learn anything while you were writing the book that was particularly interesting or surprising?

Researching the examples for each chapter caused me to dig into a whole bunch of topics. This notion of interactive fiction, and that there are tools for creating interactive fiction, I found pretty interesting. When researching another chapter, I found an example from a Jewish prayer book that was just so shocking to me. So, Jewish prayer books (and I don’t know if this is true in other belief systems as well, but I’m mostly familiar with Judaism) contain things you’re supposed to read, but they have little conditional markings on them sometimes. For example, one might say “don’t read this if it’s a Saturday”, or “don’t read this if it’s a full moon”, or “don’t read if it’s a full moon on a Saturday”. I found one passage that actually had 14 different conditions that you had to check to decide whether or not it was appropriate to read this particular passage. That was surprising to me – I had no idea that people were expected to do so much complex computation during a worship activity.

Why is it important that everybody learns a little programming?

It’s really important to keep in mind the idea that at the end of the day what AI is doing is making it easier for us to tell machines what to do, and we should share that increased capability with a broad population. It shouldn’t just be the machine learning engineers who get to tell computers what to do more easily. We should find ways of making this easier for everybody.

Because computers are here to help, but it’s a two-way street. We need to be willing to learn to express what we want in a way that can be carried out accurately and automatically. If we don’t make that effort, then other parties, companies often, will step in and do it for us. At that point, the machines are working to serve someone else’s interest instead of our own. I think it’s become absolutely essential that we restore a healthy relationship with these machines before we lose any more of our autonomy.

Any final thoughts or takeaways that we should bear in mind?

I think there’s a message here for computer science researchers, as well. When we tell other people what to do, we tend to combine a description or a rule, something that’s sort of program-like, with examples, something that’s more data-like. We just intermingle them when we talk to each other. At one point when I was writing the book, I had a dishwasher that was acting up and I wanted to understand why. I read through its manual, and I was struck by how often it was the case that in telling people what to do with the dishwasher, the authors would consistently mix together a high-level description of what they are telling you to do with some particular, vivid examples: a rule for what to load into the top rack, and a list of items that fit that rule. That seems to be the way that people want to both convey and receive information. What’s crazy to me is that we don’t program computers that way. We either use something that’s strictly programming, all rules, no examples, or we use machine learning, where it’s all examples, no rules. I think the reason that people communicate this way with each other is because those two different mechanisms have complementary strengths and weaknesses and when you combine the two together, you maximize the chance of being accurately understood. And that’s the goal when we’re telling machines what to do. I want the AI community to be thinking about how we can combine what we’ve learned about machine learning with something more programming-like to make a much more powerful way of telling machines what to do. I don’t think this is a solved problem yet, and that’s something that I really hope that people in the community think about.


Code to Joy: Why Everyone Should Learn a Little Programming is available to buy now.


Michael L. Littman is a University Professor of Computer Science at Brown University, studying machine learning and decision making under uncertainty. He has earned multiple university-level awards for teaching and his research on reinforcement learning, probabilistic planning, and automated crossword-puzzle solving has been recognized with three best-paper awards and three influential paper awards. Littman is co-director of Brown’s Humanity Centered Robotics Initiative and a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. He is also a Fellow of the American Association for the Advancement of Science Leshner Leadership Institute for Public Engagement with Science, focusing on Artificial Intelligence. He is currently serving as Division Director for Information and Intelligent Systems at the National Science Foundation.

Interview with Marek Šuppa: insights into RoboCupJunior

A RoboCupJunior soccer match in action.

In July this year, 2500 participants congregated in Bordeaux for RoboCup2023. The competition comprises a number of leagues, and among them is RoboCupJunior, which is designed to introduce RoboCup to school children, with the focus being on education. There are three sub-leagues: Soccer, Rescue and OnStage.

Marek Šuppa serves on the Executive Committee for RoboCupJunior, and he told us about the competition this year and the latest developments in the Soccer league.

What is your role in RoboCupJunior and how long have you been involved with this league?

I started with RoboCupJunior quite a while ago: my first international competition was in 2009 in Graz, where I was lucky enough to compete in Soccer for the first time. Our team didn’t do all that well in that event but RoboCup made a deep impression and so I stayed around: first as a competitor and later to help organise the RoboCupJunior Soccer league. Right now I am serving as part of the RoboCupJunior Execs who are responsible for the organisation of RoboCupJunior as a whole.

How was the event this year? What were some of the highlights?

I guess this year’s theme or slogan, if we were to give it one, would be “back to normal”, or something like that. Although RoboCup 2022 already took place in-person in Thailand last year after two years of a pandemic pause, it was in a rather limited capacity, as COVID-19 still affected quite a few regions. It was great to see that the RoboCup community was able to persevere and even thrive throughout the pandemic, and that RoboCup 2023 was once again an event where thousands of robots and roboticists meet.

It would also be difficult to do this question justice without thanking the local French organisers. They were actually ready to organise the event in 2020 but it got cancelled due to COVID-19. But they did not give up on the idea and managed to put together an awesome event this year, for which we are very thankful.

Examples of the robots used by the RoboCupJunior Soccer teams.

Turning to RoboCupJunior Soccer specifically, could you talk about the mission of the league and how you, as organisers, go about realising that mission?

The mission of RoboCupJunior consists of two competing objectives: on one hand, it needs to be a challenge that’s approachable, interesting and relevant for (mostly) high school students and at the same time it needs to be closely related to the RoboCup “Major” challenges, which are tackled by university students and their mentors. We are hence continuously trying to both make it more compelling and captivating for the students and at the same time ensure it is technical enough to help them grow towards the RoboCup “Major” challenges.

One of the ways we do that is by introducing what we call “SuperTeam” challenges, in which teams from respective countries form a so-called “SuperTeam” and compete against another “SuperTeam”, with each SuperTeam playing together as if it were a single team. In RoboCupJunior Soccer the “SuperTeams” are composed of four to five teams and they compete on a field that is six times larger than the “standard” fields that are used for the individual games. While in the individual matches each team can play with two robots at most (resulting in a 2v2 game), in a SuperTeam match each SuperTeam fields five robots, meaning there are 10 robots on the SuperTeam field during a SuperTeam match. The setup is very similar to Division B of the Small Size League of RoboCup “Major”.

The SuperTeam games have existed in RoboCupJunior Soccer since 2013, so for quite a while, and the feedback we received on them was overwhelmingly positive: it was a lot of fun for both the participants and the spectators. But compared to the Small Size League games there were still two noticeable differences: the robots did not have a way of communicating with one another, and the referees did not have a way of communicating with the robots. The result was that not only was there little coordination among robots of the same SuperTeam, but also that whenever the game needed to be stopped (for instance, for a kickoff after a goal was scored), the referees had to physically run after the robots on the field to catch them. Although hilarious, it’s far from how we would imagine the SuperTeam games to look.

The RoboCupJunior Soccer Standard Communication Modules aim to close both gaps. The module itself is a small device that is attached to each robot on the SuperTeam field. These devices are all connected via Bluetooth to a single smartphone, through which the referee can send commands to all robots on the field. The devices themselves also support direct message exchange between robots on a single SuperTeam, meaning the teams do not have to invest in figuring out how to communicate with the other robots but can make use of a common platform. The devices, as well as their firmware, are open source, meaning not only that everyone can build their own Standard Communication Module if they’d like but also that the community can participate in its development, which makes it an interesting addition to RoboCupJunior Soccer.
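The referee-to-robot command flow described above can be pictured with a short sketch. To be clear, this is not the real module firmware or protocol — the command names and handler interface below are invented purely to illustrate the idea of one referee broadcasting to every robot at once:

```python
# Hypothetical sketch of the referee -> robot command flow.
# The real Standard Communication Module firmware is open source;
# the command names and classes here are invented for illustration.

class RobotCommandHandler:
    """Reacts to referee commands relayed by a communication module."""

    def __init__(self, name):
        self.name = name
        self.moving = False
        self.log = []  # commands received, in order

    def on_command(self, command):
        if command == "STOP":       # e.g. the main referee's whistle
            self.moving = False
        elif command == "KICKOFF":  # restart play after a goal
            self.moving = True
        self.log.append(command)


class Referee:
    """Broadcasts a command to every robot on the field at once,
    instead of anyone having to chase each robot by hand."""

    def __init__(self, robots):
        self.robots = robots

    def broadcast(self, command):
        for robot in self.robots:
            robot.on_command(command)


# Ten robots on the SuperTeam field, all stopping on one whistle.
robots = [RobotCommandHandler(f"robot-{i}") for i in range(10)]
referee = Referee(robots)
referee.broadcast("STOP")
assert not any(r.moving for r in robots)
referee.broadcast("KICKOFF")
assert all(r.moving for r in robots)
```

The point of the sketch is the fan-out: a single referee action reaches all ten robots simultaneously, which is what replaces the "robot chasing" of earlier SuperTeam games.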

RoboCupJunior Soccer teams getting ready for the competition.

How did this new module work out in the competition? Did you see an improvement in experience for the teams and organisers?

In this first big public test we focused on exploring how (and whether) these modules can improve the gameplay – especially the “chasing robots at kickoff”. Although we’ve done “lab experiments” in the past and had some empirical evidence that it should work rather well, this was the first time we tried it in a real competition.

All in all, I would say that it was a very positive experiment. The modules themselves did work quite well and for some of us, who happened to have experience with “robot chasing” mentioned above, it was sort of a magical feeling to see the robots stop right on the main referee’s whistle.

We also identified potential areas for improvement in the future. The modules do not have a power source of their own and were powered by the robots themselves. We didn’t think this would be a problem but in the “real world” test it transpired that the voltage levels the robots are capable of providing fluctuate significantly – for instance when the robot decides to aggressively accelerate – which in turn means some of the modules disconnect when the voltage drops too low. However, it ended up being a nice lesson for everyone involved, one that we can certainly learn from when we design the next iterations.


The livestream from Day 4 of RoboCupJunior Soccer 2023. This stream includes the SuperTeam finals and the technical challenges. You can also view the livestream of the semifinals and finals from day three here.

Could you tell us about the emergence of deep-learning models in the RoboCupJunior leagues?

This is something we started to observe in recent years which surprised us organisers, to some extent. In our day-to-day jobs (that is, when we are not organising RoboCup), many of us, the organisers, work in areas related to robotics, computer science, and engineering in general – with some of us also doing research in artificial intelligence and machine learning. And while we always thought that it would be great to see more of the cutting-edge research being applied at RoboCupJunior, we always dismissed it as something too advanced and/or difficult to set up for the high school students that comprise the majority of RoboCupJunior students.

Well, to our great surprise, some of the more advanced teams have started to utilise methods and technologies that are very close to the current state-of-the-art in various areas, particularly computer vision and deep learning. A good example would be object detectors (usually based on the YOLO architecture), which are now used across all three Junior leagues: in OnStage to detect various props, robots and humans who perform on the stage together, in Rescue to detect the victims the robots are rescuing, and in Soccer to detect the ball, the goals, and the opponents. And while the participants generally used off-the-shelf implementations, they still needed to do all the steps necessary for a successful deployment of this technology: gather a dataset, finetune the deep-learning model and deploy it on their robots – all of which is far from trivial and is very close to how these technologies get used in both research and industry.

Although we have seen only the more advanced teams use deep-learning models at RoboCupJunior, we expect that in the future we will see it become much more prevalent, especially as the technology and the tooling around it becomes more mature and robust. It does show, however, that despite their age, the RoboCupJunior students are very close to cutting-edge research and state-of-the-art technologies.

Action from RoboCupJunior Soccer 2023.

How can people get involved in RCJ (either as a participant or an organiser?)

A very good question!

The best place to start would be the RoboCupJunior website where one can find many interesting details about RoboCupJunior, the respective leagues (such as Soccer, Rescue and OnStage), and the relevant regional representatives who organise regional events. Getting in touch with a regional representative is by far the easiest way of getting started with RoboCup Junior.

Additionally, I can certainly recommend the RoboCupJunior forum, where many RoboCupJunior participants, past and present, as well as the organisers, discuss many related topics in the open. The community is very beginner friendly, so if RoboCupJunior sounds interesting, do not hesitate to stop by and say hi!

About Marek Šuppa


Marek stumbled upon AI as a teenager when building soccer-playing robots and quickly realised he is not smart enough to do all the programming by himself. Since then, he’s been figuring out ways to make machines learn by themselves, particularly from text and images. He currently serves as the Principal Data Scientist at Slido (part of Cisco), improving the way meetings are run around the world. Staying true to his roots, he tries to provide others with a chance to have a similar experience by organising the RoboCupJunior competition as part of the Executive Committee.

#RoboCup2023 in tweets – part 2


As this year’s RoboCup draws to a close, we take a look back at some of the highlights from the second half of the conference. Over the course of the weekend, the event focussed on the latter stages of the competitions, with the winners in all the different leagues decided. If you missed our round-up of the first half of RoboCup, you can catch up here.

#RoboCup2023 in tweets – part 1


This year’s RoboCup kicked off on 4 July and will run until 10 July. Taking place in Bordeaux, the event will see around 2500 participants from 45 different countries take part in competitions, training sessions, and a symposium. Find out what attendees have been up to in preparation for, and in the first half of, the event.

[UPDATE] A list of resources, articles, and opinion pieces relating to large language models & robotics

Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / Licenced by CC-BY 4.0.

We’ve collected some of the articles, opinion pieces, videos and resources relating to large language models (LLMs). Some of these links also cover other generative models. We will periodically update this list to add any further resources of interest. This article represents the third in the series. (The previous versions are here: v1 | v2.)

What LLMs are and how they work

Journal, conference, arXiv, and other articles

Newspaper, magazine, University website, and blogpost articles

Reports

Podcasts and video discussions

Focus on LLMs and education

Relating to art and other creative processes

Pertaining to robotics

Misinformation, fake news and the impact on journalism

Regulation and policy

CLAIRE and euRobotics: all questions answered on humanoid robotics


On 9 December, CLAIRE and euRobotics jointly hosted an All Questions Answered (AQuA) event. This one-hour session focussed on humanoid robotics, and participants could ask questions regarding the current and future state of AI, robotics and human augmentation in Europe.

The questions were fielded by an expert panel, comprising:

  • Rainer Bischoff, euRobotics
  • Wolfram Burgard, Professor of Robotics and AI, University of Technology Nuremberg
  • Francesco Ferro, CEO, PAL Robotics
  • Holger Hoos, Chair of the Board of Directors, CLAIRE

The session was recorded and you can watch in full below:

RoboCup humanoid league: Interview with Jasper Güldenstein


RoboCup is an international scientific initiative with the goal of advancing the state of the art of intelligent robots, AI and automation. The annual RoboCup event returned to an in-person format for 2022, taking place from 13-17 July in Bangkok. RoboCup comprises a number of leagues, with perhaps the most well-known being the soccer leagues.

In the Humanoid League, autonomous robots with a human-inspired body plan and senses play soccer against each other. We spoke to Jasper Güldenstein, a member of the technical committee, about the competition at RoboCup 2022, and also about the Humanoid League Virtual Season. As a biweekly virtual competition taking place between the physical RoboCup events, it provides teams with an opportunity to test their ideas and keep in touch throughout the year.

Could you give us an overview of the Humanoid League competition at RoboCup this year?

This year we had the first in-person event after a three-year break. It was really good to have the majority of the teams coming back together and working on their robots again. Although a lot of teams came with fewer participants than they usually do, we still managed to have 12 teams in the kid-size league and three teams in the adult-size league. Unfortunately, some teams could not participate due to travel restrictions, but we hope to see them back next year.

Humanoid league finalists with their robots.

What was the format for the competition?

At the beginning we had a drop-in round, which is where we play games with one robot from each team participating in a joint team. So, we have eight robots playing from eight different teams. That is useful for finding out which teams are likely to perform the best in the competition. Next, the results from the drop-in games were used as seeding for the round robin phase of the regular competition, with the strongest teams separated into different groups. After the round robin, we have a knockout competition. The seeding method means that we can hopefully avoid the situation where very good teams get kicked out early. We saw that the most interesting games were towards the end of the competition when the teams performed really well.

Have you noticed improvements since the last physical competition?

I’d say definitely that one big thing that has improved for a lot of teams is the localisation. A lot of teams are more easily able to localise themselves on the field, and they don’t run off randomly. They are more certain that they are in the correct position.

Furthermore, I think the kicking has improved. The robots kick the ball much further than they used to. People have been tuning their kicking motions to increase the distance.

In terms of computer vision, this has definitely improved quite a bit. Something we did differently this time, which was inspired by what we did in the virtual season, is that we had a set of six different balls, all from previous FIFA competitions. For each game a ball was drawn randomly, so the teams couldn’t really prepare for all the balls. Although they were visually quite different, the teams didn’t really have any problems detecting the ball. We’ve seen, in general, that computer vision approaches have improved and these improvements have been transferred to the RoboCup competition. I think that almost all teams are using a neural network to detect the ball. This is a change from three, four, five years ago, where many teams used hand-tuned classical computer vision algorithms.

To talk a bit more about ball detection, it will be interesting to see what happens if we move to an environment with natural and/or varying light conditions. This year we were in a convention hall with uniform lighting. I believe next year, in Bordeaux, there is going to be some form of natural light coming in, and perhaps even fields that are outside. It’s still at the planning stage but we are looking forward to that. It will be a challenge and I strongly believe that the participants will find approaches to make their vision approach robust against these varying conditions.

Setup and testing for the Humanoid League competition at RoboCup 2022, Bangkok.

Thinking about the transfer from the simulation to the real world, are there any specific elements that lend themselves well to being transferred?

In terms of computer vision, we had a bit of transfer. In the virtual season we concentrated a lot on changing the lighting conditions and having varied backgrounds, to be able to emulate the real world a bit better. I think a few teams used their vision approaches from the virtual season in the real world.
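The kind of augmentation Jasper mentions — varying the lighting and swapping in different backgrounds to emulate the real world — could be sketched as below. This is a minimal, hypothetical example using NumPy; the brightness range, the crude green-dominant field mask, and the function name are assumptions, not the teams' actual pipelines:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, background):
    """Hypothetical sketch: vary brightness and replace the background.
    image, background: HxWx3 uint8 arrays of the same shape."""
    img = image.astype(np.float32)
    # Randomly scale brightness to emulate varying light conditions.
    img *= rng.uniform(0.6, 1.4)
    img = np.clip(img, 0, 255).astype(np.uint8)
    # Crude field mask: keep green-dominant pixels, swap the rest
    # for a random background image.
    field_mask = img[..., 1] > img[..., 0]
    return np.where(field_mask[..., None], img, background)
```

In practice a real segmentation of the field would replace the green-dominant heuristic, but the structure — randomise photometric conditions, then composite onto varied backgrounds — is the core of this style of sim-to-real augmentation.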

However, I think the strongest part is behaviour. Teams were able to test their strategies in the virtual competition and adapt every other week. For example, CIT Brains, which won the virtual season and the physical competition, made quite a few changes to their strategy and had robust systems running. Their strategy worked really well, and in the final they managed to score several goals and win against the previous world champions (Rhoban Football Club).

How did the competition go for your team (Hamburg Bit-Bots)?

We actually had quite a few hardware problems, especially on the mechanics side. The motors had worn out and warped, and they flexed more than we expected. This meant we had difficulty walking stably. And, if you can’t walk in a stable manner, that defeats the purpose of everything else. It’s a really integrated system – if one component breaks, you are out of luck, as you are very restricted in what you can change during the competition and you don’t have much spare equipment with you.

However, what was good for us was that we had a lot of software up and running, and a lot of it had been tested really well during the virtual league. We had to try and find a way around the walking problem algorithmically, searching for walking parameters that were more stable. We also switched from ROS 1 to ROS 2 [the robot operating system], which brought with it many challenges. We actually did a write-up on the issues we faced, with some tips and tricks.

Will there be a new virtual league season this year?

Yes, we’ve discussed this in our technical committee and we plan on doing it again. The last event was successful and the teams enjoyed it. We plan on making some changes, such as logging the games to extract some interesting metrics and doing some analysis on those.

Another thing we want to do is domain randomisation, making the environment a bit more varied. This means that the approaches have to be more robust. The hope is that, when they are more robust, they can be transferred more easily to the real world. We were thinking about making the terrain slightly uneven. Another approach could be to modify the models of the robots such that the joints emulate a bit of wear, so the simulated actuators might randomly be a bit weaker or stronger, and teams have to find robust approaches to deal with that.
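The two randomisations mentioned here — actuator strength varying to emulate wear, and slightly uneven terrain — could be sketched as follows. This is a hypothetical illustration; the function name, torque range, and grid size are invented, not the actual simulator configuration:

```python
import random

def randomise_world(base_torque, num_joints, seed=None):
    """Hypothetical domain randomisation for a simulated humanoid:
    perturb each joint's maximum torque to emulate motor wear, and
    generate a slightly uneven terrain height map."""
    rng = random.Random(seed)
    # Each actuator is randomly a bit weaker or stronger (+/- 20%).
    torques = [base_torque * rng.uniform(0.8, 1.2) for _ in range(num_joints)]
    # Terrain: small random height offsets (metres) on a 10x10 grid.
    terrain = [[rng.uniform(-0.01, 0.01) for _ in range(10)] for _ in range(10)]
    return torques, terrain
```

Resampling these parameters for every game would force teams to build controllers that tolerate the variation, which is the point of domain randomisation as a sim-to-real technique.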

We won’t do everything at the beginning. We’ll move through the season and talk to the teams and form some organising groups to develop the simulation further, to run the games and to organise the competition itself. We are always happy to have input and we always talk to the teams to see what they think. It’s a competition but it’s something we build together.

Humanoid robots at RoboCup 2022, Bangkok.

Could you tell us about the satellite event that took place at RoboCup?

This was a discussion about how to get teams more interested in participating and how to bridge the junior leagues and the major leagues.

We know that some people who participated in RoboCup Junior specifically selected a university with a RoboCup team so that they could join it. It would be awesome for more people to do this, and for more juniors to know what the major leagues are about.

To bridge the gap between the junior and major leagues we don’t want to introduce another league, but we want some form of events where the two groups can meet, and where the juniors can show off how well they are doing. It would be good to have more interaction between the leagues, although we haven’t decided on the exact format yet.

About Jasper


Jasper Güldenstein is a PhD student at the University of Hamburg. His research focuses on humanoid navigation. He has been participating in RoboCup as a member of the Hamburg Bit-Bots since 2016, where he works not only on the team’s software but also on the electronics of the robot platform. In his recent master’s thesis he evaluated using reinforcement learning to perform path planning and execution for humanoid robots.