Social media round-up from #IROS2025

The 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025) took place from October 19 to 25, 2025 in Hangzhou, China. The programme included plenary and keynote talks, workshops, tutorials, forums, competitions, and a debate. There was also an exhibition where companies and institutions were able to showcase their latest hardware and software.

We cast an eye over the social media platforms to see what participants got up to during the week.

📢 This week, we are participating in the IEEE/RSJ International Conference on Intelligent Robots and Systems #IROS2025 in Hangzhou #China

📸 (IRI researchers right-left): @juliaborrassol.bsky.social, David Blanco-Mulero and Anais Garrell

#IRI #IROSHangzho

[image or embed]

— IRI-Institut de Robòtica i Informàtica Industrial (@iri-robotics.bsky.social) 24 October 2025 at 01:33

Truly enjoyed discussing the consolidation of specialist and generalist approaches to physical AI at #IROS2025.

Hoping to visit Hangzhou in physical rather than digital form myself in the not too distant future – second IROS AC dinner missed in a row.

#Robotics #physicalAI

[image or embed]

— Markus Wulfmeier (@mwulfmeier.bsky.social) 20 October 2025 at 11:25

At #IROS2025 General Chair Professor Hesheng Wang and Program Chair Professor Yi Guo share what makes this year’s conference unique, from the inspiring location to the latest research shaping the future of intelligent robotics. youtu.be/_JzGoH7wilU

[image or embed]

— WebsEdge Science (@websedgescience.bsky.social) 23 October 2025 at 21:32

From Hangzhou, #IROS2025 unites the brightest minds in #Robotics, #AI & intelligent systems to explore the Human–Robotics Frontier. Watch IROS TV for highlights, interviews, and a behind-the-scenes look at the labs shaping our robotic future. youtu.be/SojyPncpH1g

[image or embed]

— WebsEdge Science (@websedgescience.bsky.social) 23 October 2025 at 21:25

Impressive live demonstration by @unitreerobotics.bsky.social #G1 at the #IROS2025 conference! It really works!

[image or embed]

— Davide Scaramuzza (@davidescaramuzza.bsky.social) 23 October 2025 at 15:26

What’s coming up at #IROS2025?

The 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025) will be held from 19-25 October in Hangzhou, China. The programme includes plenary and keynote talks, workshops, tutorials, forums, competitions, and a debate.

Plenary talks

There are three plenary talks on the programme this year, with one per day on Tuesday 21, Wednesday 22, and Thursday 23 October.

  • Marco Hutter, The New Era of Mobility: Humanoids and Quadrupeds Enter the Real World
  • Hyoun Jin Kim, Autonomous Aerial Manipulation: Toward Physically Intelligent Robots in Flight
  • Song-Chun Zhu, TongBrain: Bridging Physical Robots and AGI Agents

Keynote talks

The keynotes this year fall under eleven umbrella topics:

  • Rehabilitation & Physically Assistive Systems
    • Patrick Wensing, From Controlled Tests to Open Worlds: Advancing Legged Robots and Lower-Limb Prostheses
    • Hao Su, AI-Powered Wearable and Surgical Robots for Human Augmentation
    • Lorenzo Masia, Wearable Robots and AI for Rehabilitation and Human Augmentation
    • Shingo Shimoda, Science of Awareness: Toward a New Paradigm for Brain-Generated Disorders
  • Bio-inspired Robotics
    • Kevin Chen, Agile and robust micro-aerial-robots driven by soft artificial muscles
    • Josie Hughes, Bioinspired Robots: Building Embodied Intelligence
    • Jee-Hwan Ryu, Soft Growing Robots: From Disaster Response to Colonoscopy
    • Lei Ren, Layagrity robotics: inspiration from the human musculoskeletal system
  • Soft Robotics
    • Bram Vanderborght, Self-healing materials for sustainable soft robots
    • Cecilia Laschi, From AI Scaling to Embodied Control: Toward Energy-Frugal Soft Robotics
    • Kyu-Jin Cho, Soft Wearable Robots: Navigating the Challenges of Building Technology for the Human Body
    • Li Wen, Multimodal Soft Robots: Elevating Interaction in Complex and Diverse Environments
  • AI and Robot Learning
    • Fei Miao, From Uncertainty to Action: Robust and Safe Multi-Agent Reinforcement Learning for Embodied AI
    • Xifeng Yan, Adaptive Inference in Transformers
    • Long Cheng, Learning from Demonstrations by the Dynamical System Approach
    • Karinne Ramírez-Amaro, Transparent Robot Decision-Making with Interpretable & Explainable Methods
  • Perception and Sensors
    • Davide Scaramuzza, Low-latency Robotics with Event Cameras
    • Kris Dorsey, Sensor design for soft robotic proprioception
    • Perla Maiolino, Shaping Intelligence: Soft Bodies, Sensors, and Experience
    • Roberto Calandra, Digitizing Touch and its Importance in Robotics
  • Human-Robot Interaction
    • Javier Alonso-Mora, Multi-Agent Autonomy: from Interaction-Aware Navigation to Coordinated Mobile Manipulation
    • Jing Xiao, Robotic Manipulation in Unknown and Uncertain Environments
    • Dongheui Lee, From Passive Learner to Pro-Active and Inter-Active Learner with Reasoning Capabilities
    • Ya-Jun Pan, Intelligent Adaptive Robot Interacting with Unknown Environment and Human
  • Embodied Intelligence
    • Fumiya Iida, Informatizing Soft Robots for Super Embodied Intelligence
    • Nidhi Seethapathi, Predictive Principles of Locomotion
    • Cewu Lu, Digital Gene: An Analytical Universal Embodied Manipulation Ideology
    • Long Cheng, Learning from Demonstrations by the Dynamical System Approach
  • Medical Robots
    • Kenji Suzuki, Small-data Deep Learning for AI Doctor and Smart Medical Imaging
    • Li Zhang, Magnetic Microrobots for Translational Biomedicine: From Individual and Modular Designs to Microswarms
    • Kanako Harada, Co-evolution of Human and AI-Robots to Expand Science Frontiers
    • Loredana Zollo, Towards Synergistic Human–Machine Interaction in Assistive and Rehabilitation Robotics: Multimodal Interfaces, Sensory Feedback, and Future Perspectives
  • Field Robotics
    • Matteo Matteucci, Robotics Meets Agriculture: SLAM and Perception for Crop Monitoring and Precision Farming
    • Brendan Englot, Situational Awareness and Decision-Making Under Uncertainty for Marine Robots
    • Abhinav Valada, Open World Embodied Intelligence: Learning from Perception to Action in the Wild
    • Timothy H. Chung, Catalyzing the Future of Human, Robot, and AI Agent Teams in the Physical World
  • Humanoid Robot Systems
    • Kei Okada, Transforming Humanoid Robot Intelligence: From Reconfigurable Hardware to Human-Centric Applications
    • Xingxing Wang, A New Era of Global Collaboration in Intelligent Robotics
    • Wei Zhang, Towards Physical Intelligence in Humanoid Robotics
    • Dennis Hong, Staging the Machine: Not Built for Work, Built for Wonder
  • Mechanisms and Controls
    • Kenjiro Tadakuma, Topological Robotic Mechanisms
    • Angela P. Schoellig, AI-Powered Robotics: From Semantic Understanding to Safe Autonomy
    • Lu Liu, Safety-Aware Multi-Agent Self-Deployment: Integrating Cybersecurity and Constrained Coordination
    • Fuchun Sun, Knowledge-Guided Tactile VLA: Bridging the Sim-to-Real Gap with Physics and Geometry Awareness

Debate

On Wednesday, a debate will be held on the following topic: “Humanoids Will Soon Replace Most Human Workers: True or False?” The participants will be: Xingxing Wang (Unitree Robotics), Jun-Ho Oh (Samsung and Rainbow Robotics), Hong Qiao (Chinese Academy of Sciences), Andra Keay (Silicon Valley Robotics), Yu Sun (Editor-in-Chief, IEEE Transactions on Automation Science and Engineering), and Tamim Asfour (Professor of Humanoid Robotics, Karlsruhe Institute of Technology), with Ken Goldberg (UC Berkeley) as moderator.

Tutorials

There are three tutorials planned, taking place on Monday 20 and Friday 24 October.

Workshops

You can find a list of the workshops here. These will take place on Monday 20 and Friday 24 October. There are 83 to choose from this year.

Find out more

Interview with Zahra Ghorrati: developing frameworks for human activity recognition using wearable sensors


In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Zahra Ghorrati is developing frameworks for human activity recognition using wearable sensors. We caught up with Zahra to find out more about this research, the aspects she has found most interesting, and her advice for prospective PhD students.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I am pursuing my PhD at Purdue University, where my dissertation focuses on developing scalable and adaptive deep learning frameworks for human activity recognition (HAR) using wearable sensors. I was drawn to this topic because wearables have the potential to transform fields like healthcare, elderly care, and long-term activity tracking. Unlike video-based recognition, which can raise privacy concerns and requires fixed camera setups, wearables are portable, non-intrusive, and capable of continuous monitoring, making them ideal for capturing activity data in natural, real-world settings.

The central challenge my dissertation addresses is that wearable data is often noisy, inconsistent, and uncertain, depending on sensor placement, movement artifacts, and device limitations. My goal is to design deep learning models that are not only computationally efficient and interpretable but also robust to the variability of real-world data. In doing so, I aim to ensure that wearable HAR systems are both practical and trustworthy for deployment outside controlled lab environments.

This research has been supported by the Polytechnic Summer Research Grant at Purdue. Beyond my dissertation work, I contribute to the research community as a reviewer for conferences such as CoDIT, CTDIAC, and IRC, and I have been invited to review for AAAI 2026. I was also involved in community building, serving as Local Organizer and Safety Chair for the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2025), and continuing as Safety Chair for AAMAS 2026.

Could you give us an overview of the research you’ve carried out so far during your PhD?

So far, my research has focused on developing a hierarchical fuzzy deep neural network that can adapt to diverse human activity recognition datasets. In my initial work, I explored a hierarchical recognition approach, where simpler activities are detected at earlier levels of the model and more complex activities are recognized at higher levels. To enhance both robustness and interpretability, I integrated fuzzy logic principles into deep learning, allowing the model to better handle uncertainty in real-world sensor data.

A key strength of this model is its simplicity and low computational cost, which makes it particularly well suited for real-time activity recognition on wearable devices. I have rigorously evaluated the framework on multiple benchmark datasets of multivariate time series and systematically compared its performance against state-of-the-art methods, where it has demonstrated both competitive accuracy and improved interpretability.
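
As a loose illustration of the hierarchical idea described above (not Zahra's actual architecture, whose details are not given here), a lower level can output soft, fuzzy-style membership scores for simple activities, which a higher level then combines with the raw features to recognise more complex behaviours. The activity names, layer sizes and use of softmax memberships below are assumptions made for the sketch.

```python
# Rough sketch of a two-level hierarchical activity classifier with soft
# (fuzzy-style) membership outputs. Layer sizes, activity names and the use of
# softmax memberships are illustrative assumptions, not the thesis model.
import torch
import torch.nn as nn

class TwoLevelHAR(nn.Module):
    def __init__(self, n_features=32, n_simple=4, n_complex=3):
        super().__init__()
        self.level1 = nn.Linear(n_features, n_simple)               # e.g. sit / stand / walk / run
        self.level2 = nn.Linear(n_features + n_simple, n_complex)   # e.g. cooking / cleaning / exercising

    def forward(self, window_features):
        simple_membership = torch.softmax(self.level1(window_features), dim=-1)
        # The higher level sees both the raw features and the soft memberships from below.
        complex_logits = self.level2(torch.cat([window_features, simple_membership], dim=-1))
        return simple_membership, complex_logits

model = TwoLevelHAR()
x = torch.randn(16, 32)   # 16 sensor windows, 32 summary features each
memberships, logits = model(x)
```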

Is there an aspect of your research that has been particularly interesting?

Yes, what excites me most is discovering how different approaches can make human activity recognition both smarter and more practical. For instance, integrating fuzzy logic has been fascinating, because it allows the model to capture the natural uncertainty and variability of human movement. Instead of forcing rigid classifications, the system can reason in terms of degrees of confidence, making it more interpretable and closer to how humans actually think.

I also find the hierarchical design of my model particularly interesting. Recognizing simple activities first, and then building toward more complex behaviors, mirrors the way humans often understand actions in layers. This structure not only makes the model efficient but also provides insights into how different activities relate to one another.

Beyond methodology, what motivates me is the real-world potential. The fact that these models can run efficiently on wearables means they could eventually support personalized healthcare, elderly care, and long term activity monitoring in people’s everyday lives. And since the techniques I’m developing apply broadly to time series data, their impact could extend well beyond HAR, into areas like medical diagnostics, IoT monitoring, or even audio recognition. That sense of both depth and versatility is what makes the research especially rewarding for me.

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

Moving forward, I plan to further enhance the scalability and adaptability of my framework so that it can effectively handle large scale datasets and support real-time applications. A major focus will be on improving both the computational efficiency and interpretability of the model, ensuring it is not only powerful but also practical for deployment in real-world scenarios.

While my current research has focused on human activity recognition, I am excited to broaden the scope to the wider domain of time series classification. I see great potential in applying my framework to areas such as sound classification, physiological signal analysis, and other time-dependent domains. This will allow me to demonstrate the generalizability and robustness of my approach across diverse applications where time-based data is critical.

In the longer term, my goal is to develop a unified, scalable model for time series analysis that balances adaptability, interpretability, and efficiency. I hope such a framework can serve as a foundation for advancing not only HAR but also a broad range of healthcare, environmental, and AI-driven applications that require real-time, data-driven decision-making.

What made you want to study AI, and in particular the area of wearables?

My interest in wearables began during my time in Paris, where I was first introduced to the potential of sensor-based monitoring in healthcare. I was immediately drawn to how discreet and non-invasive wearables are compared to video-based methods, especially for applications like elderly care and patient monitoring.

More broadly, I have always been fascinated by AI’s ability to interpret complex data and uncover meaningful patterns that can enhance human well-being. Wearables offered the perfect intersection of my interests, combining cutting-edge AI techniques with practical, real-world impact, which naturally led me to focus my research on this area.

What advice would you give to someone thinking of doing a PhD in the field?

A PhD in AI demands both technical expertise and resilience. My advice would be:

  • Stay curious and adaptable, because research directions evolve quickly, and the ability to pivot or explore new ideas is invaluable.
  • Investigate combining disciplines. AI benefits greatly from insights in fields like psychology, healthcare, and human-computer interaction.
  • Most importantly, choose a problem you are truly passionate about. That passion will sustain you through the inevitable challenges and setbacks of the PhD journey.

Approaching your research with curiosity, openness, and genuine interest can make the PhD not just a challenge, but a deeply rewarding experience.

Could you tell us an interesting (non-AI related) fact about you?

Outside of research, I’m passionate about leadership and community building. As president of the Purdue Tango Club, I grew the group from just 2 students to over 40 active members, organized weekly classes, and hosted large events with internationally recognized instructors. More importantly, I focused on creating a welcoming community where students feel connected and supported. For me, tango is more than dance, it’s a way to bring people together, bridge cultures, and balance the intensity of research with creativity and joy.

I also apply these skills in academic leadership. For example, I serve as Local Organizer and Safety Chair for the AAMAS 2025 and 2026 conferences, which has given me hands-on experience managing events, coordinating teams, and creating inclusive spaces for researchers worldwide.

About Zahra

Zahra Ghorrati is a PhD candidate and teaching assistant at Purdue University, specializing in artificial intelligence and time series classification with applications in human activity recognition. She earned her undergraduate degree in Computer Software Engineering and her master’s degree in Artificial Intelligence. Her research focuses on developing scalable and interpretable fuzzy deep learning models for wearable sensor data. She has presented her work at leading international conferences and journals, including AAMAS, PAAMS, FUZZ-IEEE, IEEE Access, System and Applied Soft Computing. She has served as a reviewer for CoDIT, CTDIAC, and IRC, and has been invited to review for AAAI 2026. Zahra also contributes to community building as Local Organizer and Safety Chair for AAMAS 2025 and 2026.

Self-supervised learning for soccer ball detection and beyond: interview with winners of the RoboCup 2025 best paper award

Presentation of the best paper award at the RoboCup 2025 symposium.

An important aspect of autonomous soccer-playing robots concerns accurate detection of the ball. This is the focus of work by Can Lin, Daniele Affinita, Marco Zimmatore, Daniele Nardi, Domenico Bloisi, and Vincenzo Suriani, which won the best paper award at the recent RoboCup symposium. The symposium takes place alongside the annual RoboCup competition, which this year was held in Salvador, Brazil. We caught up with some of the authors to find out more about the work, how their method can be transferred to applications beyond RoboCup, and their future plans for the competition.

Could you start by giving us a brief description of the problem that you were trying to solve in your paper “Self-supervised Feature Extraction for Enhanced Ball Detection on Soccer Robots”?

Daniele Affinita: The main challenge we faced was that deep learning generally requires a large amount of labeled data. This is not a major problem for common tasks that have already been studied, because you can usually find labeled datasets online. But when the task is highly specific, like in RoboCup, you need to collect and label the data yourself. That means gathering the data and manually annotating it before you can even start applying deep learning. This process is not scalable and demands a significant human effort.

The idea behind our paper was to reduce this human effort. We approached the problem through self-supervised learning, which aims to learn useful representations of the data. After all, deep learning is essentially about learning latent representations from the available data.

Could you tell us a bit more about your self-supervised learning framework and how you went about developing it?

Daniele: First of all, let me introduce what self-supervised learning is. It is a way of learning the structure of the data without having access to labels. This is usually done through what we call pretext tasks. These are tasks that don’t require explicit labels, but instead exploit the structure of the data. For example, in our case we worked with images. You can randomly mask some patches and train the model to predict the missing parts. By doing so, the model is forced to learn meaningful features from the data.
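
As a rough sketch of the kind of masking pretext task Daniele describes (illustrative only, not the model from the paper), one can zero out random patches of unlabeled camera frames and train a small encoder-decoder to reconstruct the original pixels; the architecture, image size and patch size below are assumptions.

```python
# Minimal sketch of a masked-patch pretext task (illustrative; not the paper's
# architecture). Random patches of each image are zeroed out and a small
# encoder-decoder is trained to reconstruct the original pixels.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def mask_patches(images, patch=16, drop_prob=0.5):
    """Zero out random square patches so the model must infer the missing content."""
    masked = images.clone()
    b, _, h, w = images.shape
    for i in range(b):
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                if torch.rand(1).item() < drop_prob:
                    masked[i, :, y:y + patch, x:x + patch] = 0.0
    return masked

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 3, 64, 64)               # stand-in for unlabeled camera frames
recon = model(mask_patches(images))
loss = nn.functional.mse_loss(recon, images)    # reconstruction target: the original image
opt.zero_grad(); loss.backward(); opt.step()
```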

In our paper, we enriched the data by using not only raw images but also external guidance. This came from a larger model which we refer to as the teacher. This model was trained on a different task which is more general than the target task we aimed for. This way the larger model can provide guidance (an external signal) that helps the self-supervision to focus more on the specific task we care about.

In our case, we wanted to predict a tight circle around the ball. To guide this, we used an external pretrained model (YOLO) for object detection, which instead predicts a loose bounding box around the ball. We can arguably say that the bounding box, a rectangle, is more general than a circle. So in this sense, we were trying to use external guidance that doesn’t solve exactly the underlying task.

Overview of the data preparation pipeline.
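
To make the "loose bounding box to tight circle" idea more concrete, here is a hedged sketch of one way a detector's box could be turned into an approximate circle pseudo-label (centre and radius). The geometry is an assumption for illustration; the paper's actual pipeline may derive and filter the teacher signal differently.

```python
# Illustrative conversion of a detector's bounding box into a circle pseudo-label.
# This is one reasonable way to derive (cx, cy, r) from a box; the paper's actual
# data preparation may post-process the teacher signal differently.
def box_to_circle(x1, y1, x2, y2):
    cx = (x1 + x2) / 2.0
    cy = (y1 + y2) / 2.0
    # A loose box around a ball is roughly square; use the smaller half-side
    # as a conservative radius estimate.
    r = min(x2 - x1, y2 - y1) / 2.0
    return cx, cy, r

# Example: a 40x44-pixel detection becomes a circle of radius 20 at the box centre.
print(box_to_circle(100, 60, 140, 104))  # (120.0, 82.0, 20.0)
```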

Were you able to test this model out at RoboCup 2025?

Daniele: Yes, we deployed it at RoboCup 2025 and showed great improvements over our previous benchmark, which was the model we used in 2024. In particular, we noticed that the final training requires much less data. The model was also more robust under different lighting conditions. The issue we had with previous models was that they were tailored for specific situations. But of course, all the venues are different, the lighting and the brightness are different, there might be shadows on the field. So it’s really important to have a reliable model and we really noticed a great improvement this year.

What’s your team name, and could you talk a bit about the competition and how it went?

Daniele: So our team is SPQR. We are from Rome, and we have been competing in RoboCup for a long time.

Domenico Bloisi: We started in 1998, so we are one of the oldest teams in RoboCup.

Daniele: Yeah, I wasn’t even born then! Our team started with the four-legged robots. And then the league shifted more towards biped robots because they are more challenging, they require balance and, overall it’s harder to walk on just two legs.

Our team has grown a lot in recent years. We have been following a very positive trend, going from ninth place in 2019 to third place at the German Open in 2025 and fourth place at RoboCup 2025. Our recent success has attracted more students to the team. So it’s kind of a loop – you win more, you attract more students, and you can work more on the challenges proposed by RoboCup.

SPQR team.

Domenico: I want to add that also, from a research point of view, we have won three best paper awards in the last five years, and we have been proposing some new trends towards, for example, the use of LLMs for coding (as a robot’s behaviour generator under the supervision of a human coach). So we are trying to keep the open research field active in our team. We want to win the matches but we also want to solve the research problems that are bound together with the competition.

One of the important contributions of our paper is towards the use of our algorithms outside RoboCup. For example, we are trying to apply the ball detector in precision farming. We want to use the same approach to detect rounded fruits. This is something that is really important for us: to exit the context of RoboCup and to use RoboCup tools for new approaches in other fields. So if we lose a match, it’s not a big deal for us. We want our students, our team members, to be open minded towards the use of RoboCup as a starting point for understanding teamwork and for understanding how to deal with strict deadlines. This is something that RoboCup can give us. We try to have a team that is ready for every type of challenge, not only within RoboCup, but also other types of AI applications. Winning is not everything for us. We’d rather use our own code and not win than win using code developed by others. This is not optimal for achieving first place, but we want to teach our students to be prepared for the research that is outside of RoboCup.

You said that you’ve previously won two other best paper awards. What did those papers cover?

Domenico: So the last two best papers were kind of visionary papers. In one paper, we wanted to give an insight into how to use the spectators to help the robots score. For example, if you cheer louder, the robots tend to kick the ball. So this is something that is not actually used in the competition now, but is something more towards the 2050 challenge. So we want to imagine how it will be 10 years from now.

The other paper was called “play everywhere”, so you can, for example, play with different types of ball, you can play outside, you can even play without a specific goal, you can play using Coca-Cola cans as goalposts. So the robot has to have a general approach that is not related to the specific field used in RoboCup. This is in contrast to other teams that are very specific. We have a different approach and this is something that makes it harder for us to win the competition. However, we don’t want to win the competition, we want to achieve this goal of having, in 2050, this match between the RoboCup winners and the FIFA World Cup winners.

I’m interested in what you said about transferring the method for ball detection to farming and other applications. Could you say more about that research?

Vincenzo Suriani: Our lab has been involved in several different projects relating to farming applications. The Flourish project ran from 2015 to 2018. More recently, the CANOPIES project has focussed on precision agriculture for permanent crops, where farmworkers can efficiently work together with teams of robots to perform agronomic interventions, like harvesting or pruning.

We have another project that is about detecting and harvesting grapes. There is a huge effort in bringing knowledge back from RoboCup to other projects, and vice versa.

Domenico: Our vision now is to focus on the new generation of humanoid robots. We participated in a new event, the World Humanoid Robot Games, held in Beijing in August 2025, because we want to use the platform of RoboCup for other kinds of applications. The idea is to have a single platform with software that is derived from RoboCup code that can be used for other applications. If you have a humanoid robot that needs to move, you can reuse the same code from RoboCup because you can use the same stabilization, the same vision core, the same framework (more or less), and you can just change some modules and you can have a completely different type of application with the same robot with more or less the same code. We want to go towards this idea of reusing code and having RoboCup as a test bed. It is a very tough test bed, but you can use the results in other fields and in other applications.

Looking specifically at RoboCup, what are your future plans for the team? There are some big changes planned for the RoboCup Leagues, so could you also say how this might affect your plans?

Domenico: We have a very strong team and some of the team members will do a PhD in the coming years. One of our targets was to keep the students inside the university and the research world, and we were successful in this, because now they are very passionate about the RoboCup competition and about AI in general.

In terms of the changes, there will be a new league within RoboCup that is a merger of the standard platform league (SPL) and the humanoid kid-size league. The humanoid adult-size league will remain, so we need to decide whether to join the new merged league, or move to adult-sized robots. At the moment we don’t have too many details, but what we know is that we will go towards a new era of robots. We acquired robots from Booster and we are now acquiring another G1 robot from Unitree. So we are trying to have a complete family of new robots. And then I think we will go towards the league that is chosen by the other teams in the SPL. But for now we are trying to organize an event in October in Rome with two other teams to exchange ideas and to understand where we want to go. There will also be a workshop to discuss the research side.

Vincenzo: We are also in discussion about the best size of robot for the competition. We are going to have two different positions, because robots are becoming cheaper and there are teams that are pushing to move more quickly to a bigger platform. On the other hand, there are teams that want to stick with a smaller platform in order to do research on multi-agent systems. We have seen a lot of applications for a single robot but not many applications with a set of robots that are cooperating. And this has been historically one of the core parts of research we did in RoboCup, and also outside of RoboCup.

There are plenty of points of view on which robot size to use, because there are several factors, and we don’t know how fast the world will change in two or three years. We are trying to shape the rules and the conditions to play for next year, but, because of how quickly things are changing, we don’t know what the best decision will be. And also the research we are going to do will be affected by the decision we make on this.

There will be some changes to other leagues in the near future too; the small-size and middle-size leagues will probably close in two years, and the simulation league also. A lot will happen in the next five years, probably more than during the last 10-15 years. This is a critical year because the decisions are based on what we can see, what we can spot in the future, but we don’t have all the information we need, so it will be challenging.

For example, the SPL has a big, probably the biggest, community among the RoboCup leagues. We have a lot of teams that are grouping by interest and so there are teams that are sticking to working on this specific problem with a specific platform and teams that are trying to move to another platform and another problem. So even inside the same community we are going to have more than one point of view and hopes for the future. At a certain point we will try to figure out what is the best for all of them.

Daniele: I just want to add that in order to achieve the 2050 challenge, in my opinion, it is necessary to have just one league encompassing everything. So up to this point, different leagues have been focusing on different research problems. There were leagues focusing only on strategy, others focusing only on the hardware, our league focusing mainly on the coordination and dynamic handling of the gameplay. But at the end of the day, in order to compete with humans, there must be only one league bringing all these single aspects together. From my point of view, it totally makes sense to keep merging leagues together.

About the authors

Daniele Affinita is a PhD student in Machine Learning at EPFL, specializing in the intersection of Machine Learning and Robotics. He has over four years of experience competing in RoboCup with the SPQR team. In 2024, he worked at Sony on domain adaptation techniques. He holds a Bachelor’s degree in Computer Engineering and a Master’s degree in Artificial Intelligence and Robotics from Sapienza University of Rome.

Vincenzo Suriani earned his Ph.D. in Computer Engineering in 2024 from Sapienza University of Rome, with a specialization in artificial intelligence, robotic vision, and multi-agent coordination. Since 2016, he has served as Software Development Leader of the Sapienza Soccer Robot Team, contributing to major robotic competitions and international initiatives such as EUROBENCH, SciRoc, and Tech4YOU. He is currently a Research Fellow at the University of Basilicata, where he focuses on developing intelligent environments for software testing automation. His research, recognized with award-winning papers at the RoboCup International Symposium (2021, 2023, 2025), centers on robotic semantic mapping, object recognition, and human–robot interaction.

Domenico Daniele Bloisi is an associate professor of Artificial Intelligence at the International University of Rome UNINT. Previously, he was associate professor at the University of Basilicata, assistant professor at the University of Verona, and assistant professor at Sapienza University of Rome. He received his PhD, master’s and bachelor’s degrees in Computer Engineering from Sapienza University of Rome in 2010, 2006 and 2004, respectively. He is the author of more than 80 peer-reviewed papers published in international journals and conferences in the field of artificial intelligence and robotics, with a focus on image analysis, multi-robot coordination, visual perception and information fusion. Dr. Bloisi conducts research in the field of melanoma and oral carcinoma prevention through automatic medical image analysis in collaboration with specialized medical teams in Italy. In addition, Dr. Bloisi is WP3 leader of the EU H2020 SOLARIS project, unit leader for the PRIN PNRR RETINA project, and unit leader for the PRIN 2022 AIDA project. Since 2015, he has been the team manager of the SPQR robot soccer team, which participates in the RoboCup world competitions.

Can Lin is a master’s student in Data Science at Sapienza University of Rome. He holds a bachelor’s degree in Computer Science and Artificial Intelligence from the same university. He joined the SPQR team in September 2024, focusing on tasks related to computer vision.

Interview with Roberto Figueiredo: the RoboCup experience

Roberto Figueiredo is a master’s student at the University of Aveiro. He is a member of the Bold Hearts RoboCup team which competes in the Humanoid KidSize soccer league. He is currently the local representative for the Junior Rescue Simulation. We spoke to Roberto about his RoboCup journey, from the junior to the major leagues, and his experience of RoboCup events.

When was your first RoboCup event and which competition did you take part in?

I started in 2016 in the Junior leagues with my high school and I took part in the rescue simulation competition (although I originally joined the on-stage competition). This first event actually happened in Portugal, and it was similar to a workshop. We qualified to go to the world cup in rescue simulation, in Leipzig, Germany, and we ended up in second place. That was really nice, and it was my first contact with RoboCup, and with robotics generally. I’d been working with electronics in the past, but simulation gave me a bit of an introduction to the more theoretical aspects of robotics, and to AI in general. Rescue simulation makes you think of ways to make the robots independent and not manually controlled by humans.

Roberto’s first RoboCup in 2016, in Leipzig, pictured with the Singapore team celebrating after the finals.

Could you tell us about the subsequent RoboCup events that you took part in?

In 2017 we qualified to go to Nagoya, Japan, which was not just an amazing RoboCup, but an amazing trip. That’s another good thing about robotics, you get to meet a lot of new people in new countries. We did quite well in this competition as well, I think we reached 5th place.

After that we went to European RoboCup Junior in Italy. The following year was my last RoboCup as a junior, which was in Sydney. That was also an interesting event and I got to chat a bit more with the majors and understand how their teams worked. By this point, I had gained more experience, and I felt ready to get involved with a major league RoboCup team.

There is a big gap between the junior and major leagues. When I joined my team (the Bold Hearts), most of the team were PhDs and I was just a second year bachelor’s student so it was quite hard to pick up all the knowledge. However, if you are persistent enough and you are interested in, and passionate about, robotics you’ll get the hang of it and you’ll learn by trial and error.

EuroRoboCup 2022 in Portugal. Roberto (kneeling in photo) was part of the organising committee.

When was your first competition with the team in the major league?

My first competition was actually last year, in Thailand. We didn’t perform as we would like to, however, there is much more to RoboCup than just the competition – it is now more of a scientific and knowledge-sharing event, it’s unique. Just this year, in Bordeaux, we had a problem with our robots. Every time we disconnected the ethernet cable, the robot just stopped playing, and we couldn’t figure out what was happening. I asked another team that was using the same software – they had figured out the problem before and they told us how to solve it. I don’t think you’ll see that in other competitions. Every team has a joint objective which is making science progress, making friendships, and making other teams better by sharing their knowledge. That’s really unique.

How did you join the Bold Hearts team?

I decided to do my master’s in the UK (at the University of Hertfordshire), to experience a different country and a different style of education. When I joined, I knew there was a team so I was already looking forward to joining. After a couple of years of work, we finally got to go to a competition as a team. It’s been an amazing time and a huge learning experience.

What is your role on the team?

In our team, everyone does a bit of everything. We still have a lot of problems to solve – on both the hardware and software side. All of us currently are computer scientists so it’s a bit more of a struggle to work on the hardware side. So, I do a bit of everything, both AI and non-AI related problems. For example, I’ve done some 3d modelling for the robots, and I’m currently working on the balancing problem. We all work together on the problems which is amazing because you get to see a bit of everything and learn from everyone. Robotics is a very multidisciplinary field. You get to learn about all kinds of topics: mechanical engineering, electrical engineering, machine learning, coding in general.

The Bold Hearts’ qualification video for this year’s RoboCup competition

Could you tell us about this year’s competition (which took place in Bordeaux)?

This year we were a lot more prepared than last year, when we’d just come back from COVID, and all of our experienced members had recently left the team, due to finishing their PhDs and starting work. Creating a successful robot team is a huge integration problem. There are so many pieces that need to go together and work perfectly for the robots to function, and if one fails it looks like your system isn’t doing anything. We got walking working perfectly this year, we had vision working well too, and we had a stable decision tree, and we were able to listen to the controller (which is like a referee and passes on information about fouls, game start and stops etc.). However, we had some bugs in the decision tree that made everything fall apart and we spent a lot of time trying to debug it. This happens to a lot of teams. However, you can still appreciate the work and progress of what they have done.

RoboCup 2023 in Bordeaux. Roberto (left) with Bold Hearts teammates.

What are the immediate plans for the team?

We are now thinking about joining the simulation competition, which is part of our league. It takes place in the winter season and we’re planning on joining to work on our software. The transition between simulation and hardware is quite hard. You need a good simulation base to be able to transfer directly the knowledge to the robot. We’re working on having a very good simulation so we can transfer, at least more easily, the knowledge learnt in simulation to the robots.

RoboCup is moving more towards AI and learning, which we can see in the 3D simulation. The robots learn a lot of the motion through reinforcement learning, for example. In the physical leagues it’s not as easy, because we have to transfer that to the real world, where there is play in the joints, there’s backlash, there’s play in the 3D-printed parts – there are a lot of variables that are not taken into account in simulations.

How has being part of RoboCup inspired your studies and research?

Every time I go to RoboCup I come out thinking about what I’m going to do next. I couldn’t be more inspired. It’s a really intense field but I love it. It makes you want to work really hard and it makes you passionate about science. I did my bachelor’s project related to RoboCup, I joined a master’s course on robotics, I keep asking my Professors if they want to start a team back in Portugal. I’m going to do my master’s thesis on robotics, on humanoids. I think humanoids are a very complex and interesting challenge. There is no one single solution.

About Roberto


Roberto Figueiredo is a Portuguese, AI-focused computer scientist with a bachelor’s degree from the University of Hertfordshire. He is currently pursuing a master’s in Robotics and Intelligent Systems at the University of Aveiro, and is passionate about advancing his expertise in robotics. He has long been enthusiastic about robots and AI, having participated in RoboCup since 2016, starting in the Rescue Simulation league. He has since become local representative for the Rescue League in Portugal and joined a major-league team, Bold Hearts, in the Kid Size league, one of the most challenging in RoboCup Humanoid Soccer.

What’s coming up at #RoboCup2023?

This year, RoboCup will be held in Bordeaux, from 4-10 July. The event will see around 2500 participants from 45 different countries take part in competitions, training sessions, and a symposium. You can see the schedule for the week here.

The leagues and their competitions

The league competitions will take place on 6-9 July. You can find out more about the different leagues at these links:

Symposium

The RoboCup symposium will take place on 10 July. The programme can be found here.

There will be three keynote talks:

  • Cynthia Breazeal, Social Robots: Reflections and Predictions of Our Future Relationship with Personal Robots
  • Ben Moran and Guy Lever, Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning
  • Laurence Devillers, Socio-affective robots: ethical issues

Find out more at the event website.

#IJCAI invited talk: engineering social and collaborative agents with Ana Paiva

Anton Grabolle / Better Images of AI / Human-AI collaboration / Licenced by CC-BY 4.0

The 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022) took place from 23-29 July, in Vienna. In this post, we summarise the presentation by Ana Paiva, University of Lisbon and INESC-ID. The title of her talk was “Engineering sociality and collaboration in AI systems”.

Robots are widely used in industrial settings, but what happens when they enter our everyday world, and, specifically, social situations? Ana believes that social robots, chatbots and social agents have the potential to change the way we interact with technology. She envisages a hybrid society where humans and AI systems work in tandem. However, for this to be realised we need to carefully consider how such robots will interact with us socially and collaboratively. In essence, our world is social, so when machines enter they need to have some capabilities to interact with this social world.

Ana took us through the theory of what it means to be social. There are three aspects to this:

  1. Social understanding: the capacity to perceive others, exhibit theory of mind and respond appropriately.
  2. Interpersonal competencies: the capability to communicate socially, establish relationships and adapt to others.
  3. Social responsibility: the capability to take actions towards the social environment, follow norms and adopt morally appropriate actions.

Ana talkingScreenshot from Ana’s talk.

Ana wants to go from this notion of social intelligence to what is called artificial social intelligence, which can be defined as: “the capability to perceive and understand social signals, manage and participate in social interactions, act appropriately in social settings, establish social relations, adapt to others, and exhibit social responsibility.”

As an engineer, she likes to build things, and, on seeing the definition above, wonders how she can move from that definition to a model that will allow her to build social machines. This means looking at social perception, social modelling and decision making, and social acting. A lot of Ana’s work revolves around the design, study and development of this kind of architecture.

Ana gave us a flavour of some of the projects that she and her groups have carried out with regards to trying to engineer sociality and collaboration in robots and other agents.

One of these projects was called “Teach me how to write”, and it centres on using robots to improve the handwriting abilities of children. In this project the team wanted to create a robot that kids could teach to write. Through teaching the robot it was hypothesised that they would, in turn, improve their own skills.

The first step was to create and train a robot that could learn how to write. They used learning from demonstration to train a robotic arm to draw characters. The team realised that if they wanted to teach the kids to write, the robot had to learn and improve, and it had to make mistakes in order to be able to improve. They studied the taxonomy of handwriting mistakes that are made by children, so that they could put those mistakes into the system, and so that the robot could learn from the kids how to fix the mistakes.

You can see the system architecture in the figure below, and it includes the handwriting task element, and social behaviours. To add these social behaviours they used a toolkit developed in Ana’s lab, called FAtiMA. This toolkit can be integrated into a framework and is an affective agent architecture for creating autonomous characters that can evoke empathic responses.

Screenshot from Ana’s talk. System architecture.

In terms of actually using and evaluating the effectiveness of the robot, they couldn’t actually put the robot arm in the classroom as it was too big, unwieldy and dangerous. Therefore, they had to use a Nao robot, which moved its arms like it was writing, but it didn’t actually write.

Taking part in the study were 24 Portuguese-speaking children, and they participated in four sessions over the course of a few weeks. They assigned the robot two contrasting competencies: “learning” (where the robot improved over the course of the sessions) and “non-learning” (where the robot’s abilities remained constant). They measured the kids’ writing ability and improvement, and they used questionnaires to find out what the children thought about the friendliness of the robot, and their own teaching abilities.

They found that the children who worked with the learning robot significantly improved their own abilities. They also found that the robot’s poor writing abilities did not affect the children’s fondness for it.

You can find out more about this project, and others, on Ana’s website.

RoboCup2022 underway – where to find the livestream action


RoboCup 2022 kicked off yesterday, and there have already been lots of competitions within the various leagues. Many of these are livestreamed to YouTube, and the recordings are available for anyone to watch.

Below are the links to the livestream (and recorded) channels for the leagues that have them.

In addition to these channels, there are also some stand-alone recordings.

Here are some highlights from the humanoid league drop-in tournament:

This video features a match between the HTWK-Robots and rUNSWift.

Find out more about RoboCup 2022 here.

Radhika Nagpal at #NeurIPS2021: the collective intelligence of army ants


The 35th conference on Neural Information Processing Systems (NeurIPS2021) featured eight invited talks. In this post, we give a flavour of the final presentation.

The collective intelligence of army ants, and the robots they inspire

Radhika Nagpal

Radhika’s research focusses on collective intelligence, with the overarching goal being to understand how large groups of individuals, with local interaction rules, can cooperate to achieve globally complex behaviour. These are fascinating systems. Each individual is miniscule compared to the massive phenomena that they create, and, with a limited view of the actions of the rest of the swarm, they achieve striking coordination.

Looking at collective intelligence from an algorithmic point-of-view, the phenomenon emerges from many individuals interacting using simple rules. When run by these large, decentralised groups, these simple rules result in highly intelligent behaviour.

The subject of Radhika’s talk was army ants, a species which spectacularly demonstrates collective intelligence. Without any leader, millions of ants work together to self-assemble nests and build bridge structures using their own bodies.

One particular aspect of study concerned the self-assembly of such bridges. Radhika’s research team, which comprised three roboticists and two biologists, found that the bridges the ants created adapt to traffic flow and terrain. The ants also disassembled the bridge when the flow of ants had stopped and it wasn’t needed any more.

The team proposed the following simple hypothesis to explain this behaviour using local rules: if an ant is walking along, and experiences congestion (i.e. another ant steps on it), then it becomes stationary and turns into a bridge, allowing other ants to walk over it. Then, if no ants are walking on it any more, it can get up and leave.
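
The hypothesised local rule is simple enough to write down directly; the sketch below is an illustrative rendering of it (the state names and sensing primitives are assumptions), not code from the research.

```python
# Illustrative encoding of the hypothesised local rule for a single ant (or robot).
# The `ant` object and its sensing methods are hypothetical stand-ins.
def update_state(ant):
    if ant.state == "walking" and ant.is_stepped_on():
        ant.state = "bridge"       # congestion: freeze and become part of the bridge
    elif ant.state == "bridge" and not ant.has_traffic_on_top():
        ant.state = "walking"      # no traffic any more: get up and rejoin the column
```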

These observations, and this hypothesis, led the team to consider two research questions:

  • Could they build a robot swarm with soft robots that can self-assemble amorphous structures, just like the ant bridges?
  • Could they formulate rules which allowed these robots to self-assemble temporary and adaptive bridge structures?

There were two motivations for these questions. Firstly, the goal of moving closer to realising robot swarms that can solve problems in a particular environment. Secondly, the use of a synthetic system to better understand the collective intelligence of army ants.

Screenshot from Radhika’s talk

Radhika showed a demonstration of the soft robot designed by her group. It has two feet and a soft body, and moves by flipping – one foot remains attached, while the other detaches from the surface and flips to attach in a different place. This allows movement in any orientation. Upon detaching, a foot searches through space to find somewhere to attach. By using grippers on the feet that can hook onto textured surfaces, and having a stretchable Velcro skin, the robots can climb over each other, like the ants. The robot pulses, and uses a vibration sensor, to detect whether it is in contact with another robot. A video demonstration of two robots interacting showed that they have successfully created a system that can recreate the simple hypothesis outlined above.

In order to investigate the high-level properties of army ant bridges, which would require a vast number of robots, the team created a simulation. Modelling the ants to have the same characteristics as their physical robots, they were able to replicate the high level properties of army ant bridges with their hypothesized rules.


You can read the round-ups of the other NeurIPS invited talks at these links:
#NeurIPS2021 invited talks round-up: part one – Duolingo, the banality of scale and estimating the mean
#NeurIPS2021 invited talks round-up: part two – benign overfitting, optimal transport, and human and machine intelligence

Maria Gini wins the 2022 ACM/SIGAI Autonomous Agents Research Award


Congratulations to Professor Maria Gini on winning the ACM/SIGAI Autonomous Agents Research Award for 2022! This prestigious prize recognises years of research and leadership in the field of robotics and multi-agent systems.

Maria Gini is Professor of Computer Science and Engineering at the University of Minnesota, and has been at the forefront of the field of robotics and multi-agent systems for many years, consistently bringing AI into robotics.

Her work includes the development of:

  • novel algorithms to connect the logical and geometric aspects of robot motion and learning,
  • novel robot programming languages to bridge the gap between high-level programming languages and programming by guidance,
  • pioneering novel economic-based multi-agent task planning and execution algorithms.

Her work has spanned both the design of novel algorithms and practical applications. These applications have been utilized in settings as varied as warehouses and hospitals, with uses such as surveillance, exploration, and search and rescue.

Maria has been an active member and leader of the agents community since its inception. She has been a consistent mentor and role model, deeply committed to bringing diversity to the fields of AI, robotics, and computing. She is also the former President of the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS).

Maria will be giving an invited talk at AAMAS 2022. More details on this will be available soon on the conference website.

Interview with Tao Chen, Jie Xu and Pulkit Agrawal: CoRL 2021 best paper award winners

Congratulations to Tao Chen, Jie Xu and Pulkit Agrawal who have won the CoRL 2021 best paper award!

Their work, A system for general in-hand object re-orientation, was highly praised by the judging committee who commented that “the sheer scope and variation across objects tested with this method, and the range of different policy architectures and approaches tested makes this paper extremely thorough in its analysis of this reorientation task”.

Below, the authors tell us more about their work, the methodology, and what they are planning next.

What is the topic of the research in your paper?

We present a system for reorienting novel objects using an anthropomorphic robotic hand with any configuration, with the hand facing both upwards and downwards. We demonstrate the capability of reorienting over 2000 geometrically different objects in both cases. The learned controller can also reorient novel unseen objects.

Could you tell us about the implications of your research and why it is an interesting area for study?

Our learned skill (in-hand object reorientation) can enable fast pick-and-place of objects in desired orientations and locations. For example, in logistics and manufacturing, it is a common demand to pack objects into slots for kitting. Currently, this is usually achieved via a two-stage process involving re-grasping. Our system will be able to achieve it in one step, which can substantially improve the packing speed and boost the manufacturing efficiency.

Another application is enabling robots to operate a wider variety of tools. The most common end-effector in industrial robots is a parallel-jaw gripper, partially due to its simplicity in control. However, such an end-effector is physically unable to handle many tools we see in our daily life. For example, even using pliers is difficult for such a gripper as it cannot dexterously move one handle back and forth. Our system will allow a multi-fingered hand to dexterously manipulate such tools, which opens up a new area for robotics applications.

Could you explain your methodology?

We use a model-free reinforcement learning algorithm to train the controller for reorienting objects. In-hand object reorientation is a challenging contact-rich task. It requires a tremendous amount of training. To speed up the learning process, we first train the policy with privileged state information such as object velocities. Using the privileged state information drastically improves the learning speed. Other than this, we also found that providing a good initialization on the hand and object pose is critical for training the controller to reorient objects when the hand faces downward. In addition, we develop a technique to facilitate the training by building a curriculum on gravitational acceleration. We call this technique “gravity curriculum”.
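
As an illustration of the general idea of a curriculum on gravitational acceleration (not the authors' exact schedule), training can start with weak gravity and ramp it towards the true value as the policy's success rate improves; the thresholds and increments below are invented for the sketch.

```python
# Illustrative gravity curriculum: gravity is strengthened towards its true value
# as training succeeds. Thresholds and step size are assumptions for the sketch.
def next_gravity(current_g, success_rate, g_final=-9.81, step=-1.0):
    if success_rate > 0.8 and current_g > g_final:
        return max(g_final, current_g + step)  # make gravity more negative, capped at g_final
    return current_g

g = -1.0  # start with weak gravity so the downward-facing hand can learn to hold objects
for epoch_success in [0.5, 0.85, 0.9, 0.6, 0.95]:
    g = next_gravity(g, epoch_success)
    # sim.set_gravity(0.0, 0.0, g)  # hypothetical call applying g to the simulator each epoch
print(g)
```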

With these techniques, we are able to train a controller that can reorient many objects even with a downward-facing hand. However, a practical concern of the learned controller is that it makes use of privileged state information, which can be nontrivial to get in the real world. For example, it is hard to measure the object’s velocity in the real world. To ensure that we can deploy a controller reliably in the real world, we use teacher-student training. We use the controller trained with the privileged state information as the teacher. Then we train a second controller (student) that does not rely on any privileged state information and hence has the potential to be deployed reliably in the real world. This student controller is trained to imitate the teacher controller using imitation learning. The training of the student controller becomes a supervised learning problem and is therefore sample-efficient. At deployment time, we only need the student controller.
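
A minimal sketch of the teacher-student step described above, under assumed observation sizes and network shapes: the student sees only observations available on the real robot and is trained to imitate the actions of the privileged teacher.

```python
# Illustrative teacher-student distillation. Observation sizes and networks are
# assumptions for the sketch; the paper's controllers are more elaborate.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 20))  # sees privileged state
student = nn.Sequential(nn.Linear(96, 256), nn.ReLU(), nn.Linear(256, 20))   # sees only deployable obs

opt = torch.optim.Adam(student.parameters(), lr=3e-4)
for _ in range(100):
    privileged_obs = torch.randn(64, 128)       # stand-in batch; includes e.g. object velocities
    deployable_obs = privileged_obs[:, :96]     # subset assumed measurable on the real robot
    with torch.no_grad():
        target_action = teacher(privileged_obs)  # teacher provides the imitation target
    loss = nn.functional.mse_loss(student(deployable_obs), target_action)
    opt.zero_grad(); loss.backward(); opt.step()
```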

What were your main findings?

We developed a general system that can be used to train controllers that can reorient objects with either the robotic hand facing upward or downward. The same system can also be used to train controllers that use external support such as a supporting surface for object re-orientation. Such controllers learned in our system are robust and can also reorient unseen novel objects. We also identified several techniques that are important for training a controller to reorient objects with a downward-facing hand.

A priori one might believe that it is important for the robot to know about object shape in order to manipulate new shapes. Surprisingly, we find that the robot can manipulate new objects without knowing their shape. It suggests that robust control strategies mitigate the need for complex perceptual processing. In other words, we might need much simpler perceptual processing strategies than previously thought for complex manipulation tasks.

What further work are you planning in this area?

Our immediate next step is to achieve such manipulation skills on a real robotic hand. To achieve this, we will need to tackle many challenges. We will investigate overcoming the sim-to-real gap such that the simulation results can be transferred to the real world. We also plan to design new robotic hand hardware through collaboration such that the entire robotic system can be dexterous and low-cost.


About the authors

Tao Chen is a Ph.D. student in the Improbable AI Lab at MIT CSAIL, advised by Professor Pulkit Agrawal. His research interests revolve around the intersection of robot learning, manipulation, locomotion, and navigation. More recently, he has been focusing on dexterous manipulation. His research papers have been published in top AI and robotics conferences. He received his master’s degree, advised by Professor Abhinav Gupta, from the Robotics Institute at CMU, and his bachelor’s degree from Shanghai Jiao Tong University.

Jie Xu is a Ph.D. student at MIT CSAIL, advised by Professor Wojciech Matusik in the Computational Design and Fabrication Group (CDFG). He obtained a bachelor’s degree from the Department of Computer Science and Technology at Tsinghua University with honours in 2016. During his undergraduate period, he worked with Professor Shi-Min Hu in the Tsinghua Graphics & Geometric Computing Group. His research mainly focuses on the intersection of Robotics, Simulation, and Machine Learning. Specifically, he is interested in the following topics: robotics control, reinforcement learning, differentiable physics-based simulation, robotics control and design co-optimization, and sim-to-real.

Dr Pulkit Agrawal is the Steven and Renee Finn Chair Professor in the Department of Electrical Engineering and Computer Science at MIT. He earned his Ph.D. from UC Berkeley and co-founded SafelyYou Inc. His research interests span robotics, deep learning, computer vision and reinforcement learning. Pulkit completed his bachelor’s at IIT Kanpur and was awarded the Director’s Gold Medal. He is a recipient of the Sony Faculty Research Award, Salesforce Research Award, Amazon Machine Learning Research Award, Signatures Fellow Award, Fulbright Science and Technology Award, Goldman Sachs Global Leadership Award, OPJEMS, and Sridhar Memorial Prize, among others.


Find out more

  • Read the paper on arXiv.
  • The videos of the learned policies are available here, as is a video of the authors’ presentation at CoRL.
  • Read more about the winning and shortlisted papers for the CoRL awards here.