Interview with Roberto Figueiredo: the RoboCup experience

Roberto Figueiredo is a master’s student at the University of Aveiro. He is a member of the Bold Hearts RoboCup team, which competes in the Humanoid KidSize soccer league, and is currently the local representative for the Junior Rescue Simulation. We spoke to Roberto about his RoboCup journey, from the junior to the major leagues, and his experience of RoboCup events.

When was your first RoboCup event and which competition did you take part in?

I started in 2016 in the Junior leagues with my high school, and I took part in the rescue simulation competition (although I originally joined the on-stage competition). This first event actually happened in Portugal, and it was similar to a workshop. We qualified to go to the world cup in rescue simulation, in Leipzig, Germany, and we ended up in second place. That was really nice, and it was my first contact with RoboCup, and with robotics generally. I’d worked with electronics before, but simulation gave me an introduction to the more theoretical aspects of robotics, and to AI in general. Rescue simulation makes you think of ways to make the robots independent and not manually controlled by humans.

Roberto’s first RoboCup in 2016, in Leipzig, pictured with the Singapore team celebrating after the finals.

Could you tell us about the subsequent RoboCup events that you took part in?

In 2017 we qualified to go to Nagoya, Japan, which was not just an amazing RoboCup, but an amazing trip. That’s another good thing about robotics: you get to meet a lot of new people in new countries. We did quite well in this competition as well; I think we reached 5th place.

After that we went to European RoboCup Junior in Italy. The following year was my last RoboCup as a junior, which was in Sydney. That was also an interesting event and I got to chat a bit more with the majors and understand how their teams worked. By this point, I had gained more experience, and I felt ready to get involved with a major league RoboCup team.

There is a big gap between the junior and major leagues. When I joined my team (the Bold Hearts), most of the team were PhD students and I was just a second-year bachelor’s student, so it was quite hard to pick up all the knowledge. However, if you are persistent enough, and you are interested in, and passionate about, robotics, you’ll get the hang of it and you’ll learn by trial and error.

EuroRoboCup 2022 in Portugal. Roberto (kneeling) was part of the organising committee.

When was your first competition with the team in the major league?

My first competition was actually last year, in Thailand. We didn’t perform as well as we would have liked; however, there is much more to RoboCup than just the competition – it is now more of a scientific and knowledge-sharing event, and that makes it unique. Just this year, in Bordeaux, we had a problem with our robots. Every time we disconnected the ethernet cable, the robot just stopped playing, and we couldn’t figure out what was happening. I asked another team that was using the same software – they had figured out the problem before and they told us how to solve it. I don’t think you’ll see that in other competitions. Every team has a joint objective: making science progress, making friendships, and making other teams better by sharing knowledge. That’s really unique.

How did you join the Bold Hearts team?

I decided to do my master’s in the UK (at the University of Hertfordshire), to experience a different country and a different style of education. I knew there was a team there, so I was already looking forward to joining when I arrived. After a couple of years of work, we finally got to go to a competition as a team. It’s been an amazing time and a huge learning experience.

What is your role on the team?

In our team, everyone does a bit of everything. We still have a lot of problems to solve – on both the hardware and software side. All of us are currently computer scientists, so it’s a bit more of a struggle to work on the hardware side. So, I do a bit of everything, both AI and non-AI related problems. For example, I’ve done some 3D modelling for the robots, and I’m currently working on the balancing problem. We all work together on the problems, which is amazing because you get to see a bit of everything and learn from everyone. Robotics is a very multidisciplinary field. You get to learn about all kinds of topics: mechanical engineering, electrical engineering, machine learning, coding in general.

The Bold Hearts’ qualification video for this year’s RoboCup competition

Could you tell us about this year’s competition (which took place in Bordeaux)?

This year we were a lot more prepared than last year, when we’d just come back from COVID and all of our experienced members had recently left the team, due to finishing their PhDs and starting work. Creating a successful robot team is a huge integration problem. There are so many pieces that need to go together and work perfectly for the robots to function, and if one fails it looks like your system isn’t doing anything. We got walking working perfectly this year, we had vision working well too, we had a stable decision tree, and we were able to listen to the controller (which acts like a referee and passes on information about fouls, game starts and stops, etc.). However, we had some bugs in the decision tree that made everything fall apart, and we spent a lot of time trying to debug them. This happens to a lot of teams, but you can still appreciate the work and the progress they have made.
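To give a flavour of the decision layer being described, here is a minimal, hypothetical sketch (not the Bold Hearts’ actual code) of a soccer decision function that defers to the game controller before acting on perception. The states, inputs and action names are invented for illustration:

```python
from enum import Enum

class GameState(Enum):
    """Simplified phases reported by a RoboCup game controller."""
    INITIAL = 0
    READY = 1
    SET = 2
    PLAYING = 3
    FINISHED = 4

def decide(game_state, ball_visible, ball_close):
    """Toy decision tree: every branch must end in a safe action,
    otherwise the robot simply appears to do nothing."""
    if game_state is not GameState.PLAYING:
        return "stand_still"        # always obey the controller first
    if not ball_visible:
        return "scan_for_ball"
    if ball_close:
        return "kick"
    return "walk_to_ball"

# Sanity checks: the robot waits during SET, searches when the ball is lost.
assert decide(GameState.SET, True, True) == "stand_still"
assert decide(GameState.PLAYING, False, False) == "scan_for_ball"
```

Even in a toy version like this, one wrong branch can leave the robot standing still, which is exactly the kind of failure that is hard to diagnose when the rest of the pipeline works.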

RoboCup 2023 in Bordeaux. Roberto (left) with Bold Hearts teammates.

What are the immediate plans for the team?

We are now thinking about joining the simulation competition, which is part of our league. It takes place in the winter season and we’re planning on joining to work on our software. The transition between simulation and hardware is quite hard. You need a good simulation base to be able to transfer the knowledge directly to the robot. We’re working on having a very good simulation so we can transfer the knowledge learnt in simulation to the robots, at least more easily.

RoboCup is moving more towards AI and learning, which we can see in the 3D simulation leagues. The robots learn a lot of the motion through reinforcement learning, for example. In the physical leagues it’s not as easy, as we have to transfer that to the real world, where there is play in the joints, there’s backlash, there’s play in the 3D-printed parts – there are a lot of variables that are not taken into account in simulations.
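A common way to narrow this sim-to-real gap is domain randomization: deliberately perturbing, during training, exactly the variables the simulator usually idealises. The sketch below is an illustrative assumption rather than the team’s method, and all parameter ranges are invented:

```python
import random

def randomized_joint_params():
    """Sample one simulated joint's imperfections for an episode.

    Illustrative ranges only; real values would be calibrated
    against the physical robot.
    """
    return {
        "backlash_rad": random.uniform(0.0, 0.02),  # gearbox dead zone
        "friction":     random.uniform(0.8, 1.2),   # scale on nominal friction
        "latency_s":    random.uniform(0.0, 0.03),  # actuation delay
    }

def apply_backlash(commanded, previous, backlash_rad):
    """Command changes smaller than the dead zone have no effect."""
    if abs(commanded - previous) < backlash_rad:
        return previous
    return commanded

# Each episode sees a slightly different "robot", so a learned policy
# cannot overfit to one perfect simulation.
for episode in range(3):
    print(episode, randomized_joint_params())

# A 0.01 rad command change is swallowed by 0.02 rad of backlash.
print(apply_backlash(commanded=0.51, previous=0.50, backlash_rad=0.02))  # 0.5
```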

How has being part of RoboCup inspired your studies and research?

Every time I go to RoboCup I come out thinking about what I’m going to do next. I couldn’t be more inspired. It’s a really intense field but I love it. It makes you want to work really hard and it makes you passionate about science. I did my bachelor’s project on a RoboCup-related topic, I joined a master’s course on robotics, and I keep asking my professors if they want to start a team back in Portugal. I’m going to do my master’s thesis on robotics, on humanoids. I think humanoids are a very complex and interesting challenge. There is no one single solution.

About Roberto

Roberto Figueiredo

Roberto Figueiredo is a Portuguese, AI-focused computer scientist with a bachelor’s degree from the University of Hertfordshire. He is currently pursuing a master’s in Robotics and Intelligent Systems at the University of Aveiro, and is passionate about advancing his expertise in robotics. He has long been enthusiastic about robots and AI, having participated in RoboCup since 2016, starting in the Rescue Simulation league. He has since become local representative for the Rescue League in Portugal and joined a major league team, the Bold Hearts, in the KidSize league, one of the most challenging in RoboCup Humanoid Soccer.

What’s coming up at #RoboCup2023?

This year, RoboCup will be held in Bordeaux, from 4-10 July. The event will see around 2,500 participants from 45 different countries take part in competitions, training sessions, and a symposium. You can see the schedule for the week here.

The leagues and their competitions

The league competitions will take place on 6-9 July. You can find out more about the different leagues on the event website.

Symposium

The RoboCup symposium will take place on 10 July. The programme can be found here.

There will be three keynote talks:

  • Cynthia Breazeal, Social Robots: Reflections and Predictions of Our Future Relationship with Personal Robots
  • Ben Moran and Guy Lever, Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning
  • Laurence Devillers, Socio-affective robots: ethical issues

Find out more at the event website.

#IJCAI invited talk: engineering social and collaborative agents with Ana Paiva

Anton Grabolle / Better Images of AI / Human-AI collaboration / Licenced by CC-BY 4.0

The 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022) took place from 23-29 July, in Vienna. In this post, we summarise the presentation by Ana Paiva, University of Lisbon and INESC-ID. The title of her talk was “Engineering sociality and collaboration in AI systems”.

Robots are widely used in industrial settings, but what happens when they enter our everyday world and, specifically, social situations? Ana believes that social robots, chatbots and social agents have the potential to change the way we interact with technology. She envisages a hybrid society where humans and AI systems work in tandem. However, for this to be realised we need to carefully consider how such robots will interact with us socially and collaboratively. In essence, our world is social, so when machines enter it they need some capability to interact with this social world.

Ana took us through the theory of what it means to be social. There are three aspects to this:

  1. Social understanding: the capacity to perceive others, exhibit theory of mind and respond appropriately.
  2. Interpersonal competencies: the capability to communicate socially, establish relationships and adapt to others.
  3. Social responsibility: the capability to take actions towards the social environment, follow norms and adopt morally appropriate actions.

Screenshot from Ana’s talk.

Ana wants to go from this notion of social intelligence to what is called artificial social intelligence, which can be defined as: “the capability to perceive and understand social signals, manage and participate in social interactions, act appropriately in social settings, establish social relations, adapt to others, and exhibit social responsibility.”

As an engineer, she likes to build things and, on seeing the definition above, wonders how to go from that definition to a model that will allow her to build social machines. This means looking at social perception, social modelling and decision making, and social acting. A lot of Ana’s work revolves around the design, study and development of this kind of architecture.

Ana gave us a flavour of some of the projects that she and her group have carried out in trying to engineer sociality and collaboration in robots and other agents.

One of these projects was called “Teach me how to write”, and it centres on using robots to improve the handwriting abilities of children. In this project the team wanted to create a robot that kids could teach to write. The hypothesis was that, by teaching the robot, the children would, in turn, improve their own skills.

The first step was to create and train a robot that could learn how to write. They used learning from demonstration to train a robotic arm to draw characters. The team realised that if they wanted to teach the kids to write, the robot had to learn and improve, and it had to make mistakes in order to be able to improve. They studied the taxonomy of handwriting mistakes that are made by children, so that they could put those mistakes into the system, and so that the robot could learn from the kids how to fix the mistakes.
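As a rough, hypothetical illustration of the learning-from-demonstration idea (not the project’s actual method), the “learned” stroke below is just the mean of several demonstrated trajectories, and a wobble parameter degrades it so the robot can deliberately make the kinds of mistakes the children then correct:

```python
import numpy as np

# Five noisy demonstrations of a letter-like stroke (a quarter arc).
t = np.linspace(0, np.pi / 2, 50)
demos = [np.stack([np.cos(t), np.sin(t)], axis=1) + np.random.randn(50, 2) * 0.02
         for _ in range(5)]

# Simplest possible "learning from demonstration": the learned model
# is the mean trajectory across the aligned demonstrations.
reference = np.mean(demos, axis=0)

def replay(path, mistake_wobble=0.0):
    """Replay the learned stroke; a nonzero wobble degrades it on
    purpose so the robot has mistakes for the children to fix."""
    return path + np.random.randn(*path.shape) * mistake_wobble

clean = replay(reference)                       # the robot's best writing
messy = replay(reference, mistake_wobble=0.05)  # deliberately imperfect
```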

You can see the system architecture in the figure below; it includes the handwriting task element and social behaviours. To add these social behaviours they used a toolkit developed in Ana’s lab, called FAtiMA. FAtiMA is an affective agent architecture for creating autonomous characters that can evoke empathic responses, and it can be integrated into a wider framework.

Screenshot from Ana’s talk: system architecture.

In terms of actually using and evaluating the effectiveness of the robot, they couldn’t put the robot arm in the classroom as it was too big, unwieldy and dangerous. Instead, they used a Nao robot, which moved its arms as if it were writing, although it didn’t actually write.

Twenty-four Portuguese-speaking children took part in the study, participating in four sessions over the course of a few weeks. The team assigned the robot two contrasting competencies: “learning” (where the robot improved over the course of the sessions) and “non-learning” (where the robot’s abilities remained constant). They measured the kids’ writing ability and improvement, and they used questionnaires to find out what the children thought about the friendliness of the robot, and their own teaching abilities.

They found that the children who worked with the learning robot significantly improved their own abilities. They also found that the robot’s poor writing abilities did not affect the children’s fondness for it.

You can find out more about this project, and others, on Ana’s website.

RoboCup2022 underway – where to find the livestream action


RoboCup 2022 kicked off yesterday, and there have already been lots of competitions within the various leagues. Many of these are livestreamed to YouTube, and the recordings are available for anyone to watch.

Below are the links to the livestream (and recorded) channels for the leagues that have them.

In addition to these channels, there are also some stand-alone recordings.

Here are some highlights from the humanoid league drop-in tournament:

This video features a match between the HTWK-Robots and rUNSWift.

Find out more about RoboCup 2022 here.

Radhika Nagpal at #NeurIPS2021: the collective intelligence of army ants


The 35th conference on Neural Information Processing Systems (NeurIPS2021) featured eight invited talks. In this post, we give a flavour of the final presentation.

The collective intelligence of army ants, and the robots they inspire

Radhika Nagpal

Radhika’s research focusses on collective intelligence, with the overarching goal being to understand how large groups of individuals, with local interaction rules, can cooperate to achieve globally complex behaviour. These are fascinating systems. Each individual is miniscule compared to the massive phenomena that they create, and, with a limited view of the actions of the rest of the swarm, they achieve striking coordination.

Looked at from an algorithmic point of view, collective intelligence emerges from many individuals interacting using simple rules. When run by these large, decentralised groups, the simple rules result in highly intelligent behaviour.

The subject of Radhika’s talk was army ants, a species which spectacularly demonstrates collective intelligence. Without any leader, millions of ants work together to self-assemble nests and build bridge structures using their own bodies.

One particular aspect of study concerned the self-assembly of such bridges. Radhika’s research team, which comprised three roboticists and two biologists, found that the bridges the ants created adapt to traffic flow and terrain. The ants also disassembled a bridge once the flow of ants had stopped and it wasn’t needed any more.

The team proposed the following simple hypothesis to explain this behaviour using local rules: if an ant is walking along, and experiences congestion (i.e. another ant steps on it), then it becomes stationary and turns into a bridge, allowing other ants to walk over it. Then, if no ants are walking on it any more, it can get up and leave.
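Written out as code, the hypothesis is strikingly small. The toy agent-based sketch below implements just the two local rules (freeze into a bridge when stepped on; get up when the traffic stops); the colony size and traffic model are invented for illustration:

```python
import random

class Ant:
    def __init__(self):
        self.is_bridge = False

    def step(self, stepped_on, traffic_on_me):
        # Rule 1: a walking ant that gets stepped on freezes into a bridge.
        if not self.is_bridge and stepped_on:
            self.is_bridge = True
        # Rule 2: a bridge ant carrying no traffic gets up and walks on.
        elif self.is_bridge and traffic_on_me == 0:
            self.is_bridge = False

# Traffic tapers off over time; the "bridge" recruits members while flow
# is high, then disassembles itself once the flow stops.
colony = [Ant() for _ in range(20)]
for t in range(15):
    flow = max(0, 10 - t)
    for ant in colony:
        stepped_on = random.random() < flow / 20
        traffic_on_me = random.randint(0, 1) if flow > 0 else 0
        ant.step(stepped_on, traffic_on_me)
    print(f"t={t:2d} flow={flow:2d} bridge ants={sum(a.is_bridge for a in colony)}")
```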

These observations, and this hypothesis, led the team to consider two research questions:

  • Could they build a robot swarm with soft robots that can self-assemble amorphous structures, just like the ant bridges?
  • Could they formulate rules which allowed these robots to self-assemble temporary and adaptive bridge structures?

There were two motivations for these questions. Firstly, the goal of moving closer to realising robot swarms that can solve problems in a particular environment. Secondly, the use of a synthetic system to better understand the collective intelligence of army ants.

Screenshot from Radhika’s talk.

Radhika showed a demonstration of the soft robot designed by her group. It has two feet and a soft body, and moves by flipping – one foot remains attached, while the other detaches from the surface and flips to attach in a different place. This allows movement in any orientation. Upon detaching, a foot searches through space to find somewhere to attach. By using grippers on the feet that can hook onto textured surfaces, and having a stretchable Velcro skin, the robots can climb over each other, like the ants. The robot pulses, and uses a vibration sensor, to detect whether it is in contact with another robot. A video demonstration of two robots interacting showed that they have successfully created a system that can recreate the simple hypothesis outlined above.

In order to investigate the high-level properties of army ant bridges, which would require a vast number of robots, the team created a simulation. Modelling the ants with the same characteristics as their physical robots, they were able to replicate the high-level properties of army ant bridges using their hypothesised rules.


You can read the round-ups of the other NeurIPS invited talks at these links:
#NeurIPS2021 invited talks round-up: part one – Duolingo, the banality of scale and estimating the mean
#NeurIPS2021 invited talks round-up: part two – benign overfitting, optimal transport, and human and machine intelligence

Maria Gini wins the 2022 ACM/SIGAI Autonomous Agents Research Award


Congratulations to Professor Maria Gini on winning the ACM/SIGAI Autonomous Agents Research Award for 2022! This prestigious prize recognises years of research and leadership in the field of robotics and multi-agent systems.

Maria Gini is Professor of Computer Science and Engineering at the University of Minnesota, and has been at the forefront of the field of robotics and multi-agent systems for many years, consistently bringing AI into robotics.

Her work includes the development of:

  • novel algorithms to connect the logical and geometric aspects of robot motion and learning,
  • novel robot programming languages to bridge the gap between high-level programming languages and programming by guidance,
  • pioneering novel economic-based multi-agent task planning and execution algorithms.

Her work has spanned both the design of novel algorithms and practical applications. These applications have been utilized in settings as varied as warehouses and hospitals, with uses such as surveillance, exploration, and search and rescue.

Maria has been an active member and leader of the agents community since its inception. She has been a consistent mentor and role model, deeply committed to bringing diversity to the fields of AI, robotics, and computing. She is also a former President of the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS).

Maria will be giving an invited talk at AAMAS 2022. More details on this will be available soon on the conference website.

Interview with Tao Chen, Jie Xu and Pulkit Agrawal: CoRL 2021 best paper award winners

Congratulations to Tao Chen, Jie Xu and Pulkit Agrawal who have won the CoRL 2021 best paper award!

Their work, A system for general in-hand object re-orientation, was highly praised by the judging committee who commented that “the sheer scope and variation across objects tested with this method, and the range of different policy architectures and approaches tested makes this paper extremely thorough in its analysis of this reorientation task”.

Below, the authors tell us more about their work, the methodology, and what they are planning next.

What is the topic of the research in your paper?

We present a system for reorienting novel objects using an anthropomorphic robotic hand with any configuration, with the hand facing both upwards and downwards. We demonstrate the capability of reorienting over 2000 geometrically different objects in both cases. The learned controller can also reorient novel unseen objects.

Could you tell us about the implications of your research and why it is an interesting area for study?

Our learned skill (in-hand object reorientation) can enable fast pick-and-place of objects in desired orientations and locations. For example, in logistics and manufacturing, it is a common requirement to pack objects into slots for kitting. Currently, this is usually achieved via a two-stage process involving re-grasping. Our system will be able to achieve it in one step, which can substantially improve packing speed and boost manufacturing efficiency.

Another application is enabling robots to operate a wider variety of tools. The most common end-effector in industrial robots is a parallel-jaw gripper, partially due to its simplicity in control. However, such an end-effector is physically unable to handle many tools we see in our daily life. For example, even using pliers is difficult for such a gripper as it cannot dexterously move one handle back and forth. Our system will allow a multi-fingered hand to dexterously manipulate such tools, which opens up a new area for robotics applications.

Could you explain your methodology?

We use a model-free reinforcement learning algorithm to train the controller for reorienting objects. In-hand object reorientation is a challenging contact-rich task. It requires a tremendous amount of training. To speed up the learning process, we first train the policy with privileged state information such as object velocities. Using the privileged state information drastically improves the learning speed. Other than this, we also found that providing a good initialization on the hand and object pose is critical for training the controller to reorient objects when the hand faces downward. In addition, we develop a technique to facilitate the training by building a curriculum on gravitational acceleration. We call this technique “gravity curriculum”.
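As an illustrative sketch of what a gravity curriculum might look like (the paper’s actual schedule may differ), one can anneal the simulator’s gravity from near zero to its real-world value over the course of training:

```python
def gravity_for_step(step, total_steps, g_final=-9.81):
    """Gravity curriculum: start near zero-g and anneal linearly to
    full gravity. Early on, the hand barely has to fight gravity, so
    the policy can discover finger gaits; the task then gets gradually
    harder until it matches the real world."""
    frac = min(1.0, step / total_steps)
    return frac * g_final

# e.g. pass gravity_for_step(step, 1_000_000) to the simulator each step.
for step in (0, 250_000, 500_000, 1_000_000):
    print(step, round(gravity_for_step(step, 1_000_000), 2))
```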

With these techniques, we are able to train a controller that can reorient many objects even with a downward-facing hand. However, a practical concern with the learned controller is that it makes use of privileged state information, which can be nontrivial to obtain in the real world. For example, it is hard to measure an object’s velocity accurately. To ensure that we can deploy a controller reliably in the real world, we use teacher-student training. We use the controller trained with the privileged state information as the teacher. Then we train a second controller (the student) that does not rely on any privileged state information and hence has the potential to be deployed reliably in the real world. This student controller is trained to imitate the teacher controller using imitation learning. The training of the student controller becomes a supervised learning problem and is therefore sample-efficient. At deployment time, we only need the student controller.
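A minimal sketch of the teacher-student step, assuming simple feedforward policies and a plain behavioural-cloning regression loss (the authors’ architectures and data-collection scheme are more sophisticated; all sizes here are invented):

```python
import torch
import torch.nn as nn

obs_dim, priv_dim, act_dim = 32, 16, 12   # illustrative dimensions

# Teacher sees observations plus privileged state (e.g. object velocity).
teacher = nn.Sequential(nn.Linear(obs_dim + priv_dim, 64), nn.Tanh(),
                        nn.Linear(64, act_dim))
# Student sees only what is measurable on the real robot.
student = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                        nn.Linear(64, act_dim))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(100):                       # distillation loop (toy data)
    obs = torch.randn(256, obs_dim)
    priv = torch.randn(256, priv_dim)      # available in simulation only
    with torch.no_grad():
        target = teacher(torch.cat([obs, priv], dim=-1))
    loss = nn.functional.mse_loss(student(obs), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment, only `student` is needed; no privileged state required.
```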

What were your main findings?

We developed a general system that can be used to train controllers that can reorient objects with either the robotic hand facing upward or downward. The same system can also be used to train controllers that use external support such as a supporting surface for object re-orientation. Such controllers learned in our system are robust and can also reorient unseen novel objects. We also identified several techniques that are important for training a controller to reorient objects with a downward-facing hand.

A priori, one might believe that it is important for the robot to know about object shape in order to manipulate new shapes. Surprisingly, we find that the robot can manipulate new objects without knowing their shape. This suggests that robust control strategies mitigate the need for complex perceptual processing. In other words, we might need much simpler perceptual processing strategies than previously thought for complex manipulation tasks.

What further work are you planning in this area?

Our immediate next step is to achieve such manipulation skills on a real robotic hand. To achieve this, we will need to tackle many challenges. We will investigate overcoming the sim-to-real gap such that the simulation results can be transferred to the real world. We also plan to design new robotic hand hardware through collaboration such that the entire robotic system can be dexterous and low-cost.


About the authors

Tao Chen is a Ph.D. student in the Improbable AI Lab at MIT CSAIL, advised by Professor Pulkit Agrawal. His research interests revolve around the intersection of robot learning, manipulation, locomotion, and navigation. More recently, he has been focusing on dexterous manipulation. His research papers have been published in top AI and robotics conferences. He received his master’s degree, advised by Professor Abhinav Gupta, from the Robotics Institute at CMU, and his bachelor’s degree from Shanghai Jiao Tong University.

Jie Xu is a Ph.D. student at MIT CSAIL, advised by Professor Wojciech Matusik in the Computational Design and Fabrication Group (CDFG). He obtained a bachelor’s degree from the Department of Computer Science and Technology at Tsinghua University with honours in 2016. During his undergraduate period, he worked with Professor Shi-Min Hu in the Tsinghua Graphics & Geometric Computing Group. His research mainly focuses on the intersection of Robotics, Simulation, and Machine Learning. Specifically, he is interested in the following topics: robotics control, reinforcement learning, differentiable physics-based simulation, robotics control and design co-optimization, and sim-to-real.

Dr Pulkit Agrawal is the Steven and Renee Finn Chair Professor in the Department of Electrical Engineering and Computer Science at MIT. He earned his Ph.D. from UC Berkeley and co-founded SafelyYou Inc. His research interests span robotics, deep learning, computer vision and reinforcement learning. Pulkit completed his bachelor’s at IIT Kanpur and was awarded the Director’s Gold Medal. He is a recipient of the Sony Faculty Research Award, Salesforce Research Award, Amazon Machine Learning Research Award, Signatures Fellow Award, Fulbright Science and Technology Award, Goldman Sachs Global Leadership Award, OPJEMS, and Sridhar Memorial Prize, among others.


Find out more

  • Read the paper on arXiv.
  • The videos of the learned policies are available here, as is a video of the authors’ presentation at CoRL.
  • Read more about the winning and shortlisted papers for the CoRL awards here.