The ERL Smart Cities Robotics Challenge 2019 takes place from 17 to 21 September in Milton Keynes, United Kingdom. Funded by the European Commission under the SciRoc Horizon 2020 project, this new challenge of the European Robotics League (ERL) focuses on the role of robots in smart cities. The competition includes five different episodes, or scenarios, under three categories: human-robot interaction & mobility, emergency, and manipulation. On the second day of the competition, teams kept working on improving their scores to secure a place in the finals.
The SciRoc episodes under this category require robots to achieve manipulation tasks, applying some of the task benchmarks (TBMs) of the ERL Professional and the ERL Consumer Service Robots leagues. The episodes are open to robots able to navigate and equipped with arms and/or end-effectors for the manipulation of objects. The episodes in which teams can participate are: Shopping pick and pack (E07) and Through the door (E10).
Episode (E07): Shopping pick and pack
This episode, sponsored by OCADO, addresses the problem of delivering customised orders in a grocery shop. The customer uses an app on a tablet to select a limited number of items and places an order. The system processes the order and sends it to the robot, which collects the items from the shop shelves and puts them into a delivery area.
The main functionality tested in this episode is mobile manipulation of an autonomous robot, although it also requires object perception.
In recent years, mobile manipulation has become a problem of interest among researchers due to the variety and complexity of the challenges and robot capabilities involved.
“Different objects need different types of grasping. There is no universal type of grasp, so robots must know which grasp to use in each case. For this episode we chose a range of packaged products, from deformable objects such as a pack of seeds to reflective objects like a jar of honey or a metallic tube of tomato puree. On the shelves there are rows of the same product, which makes it more challenging for the robot to grab an object without damaging it or the ones surrounding it. It is also important that the robot is capable of understanding if it failed to achieve the task, requiring it to try again,” comments Roberto Capobianco, lecturer at the Sapienza University of Rome and lead member of the technical committee of Episode E07.
The competition arena reproduces a grocery shop. It has shelves with products and a delivery area. The role of the robot in this context is to support customers and provide them with the deliveries they ordered. It can also assist the shop staff by connecting to the data hub and automatically updating the list of products, taking care of stock, or checking whether the shelves need to be refilled.
“OCADO was interested in this episode from the very beginning and contributed actively to the rulebook. They helped us design the layout of the arena to make it realistic and at the same time closer to their idea of how a smart grocery shop could be in the (near) future” explains Capobianco.
Due to the complexity of the manipulation tasks, only two teams participate in this episode: CATIE Robotics and b-it-bots.
“Yesterday we successfully tested the navigation and order-taking tasks. Today we are focusing on the second part of the episode, which involves detecting the different objects, grabbing them and putting them in the red boxes. We expect the robot to be able to complete the full sequence tomorrow,” says Clement Pinet, software engineer at CATIE and team member of CATIE Robotics.
Episode (E10): Through the door
This episode aims at evaluating the capability of an autonomous robot to interact with a hinged door. Hinged doors are common in human environments and are probably among the pieces of engineering most closely matched to human capabilities and limitations. This means that they are easy to use for a typical person but can be difficult or impossible to operate for those who fall outside that standard: for instance, people with disabilities, small children, animals, and mobile machines such as robots.
“In a long-term view, robots will live together with people in their houses and work in a human environment. It is important that robots can manage things that are common for us, and one of those things is doors. If you have a robot at home to assist you, it should be able to open the door to go from one room to another without your help. This episode is interesting for society for this reason, and for researchers because it is a complex manipulation task. The robot has to manipulate an object that has multiple mobile elements that need to be moved at the same time. When we open a door, we move the handle and at the same time we move the door,” explains Giulio Fontana, research engineer at Politecnico di Milano, and lead member of the technical committee of Episode E10.
The main functionality tested in this episode is interaction with physical objects, although navigation and mapping are also required to complete the tasks.
The robots must identify the door, approach it, open it and go through it. In each run the configuration of the handle, the resisting torque (e.g. simulating a small rock or a big box behind the door) and the colour of the surface may change, pushing teams to find a more general approach to the problem.
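The identify-approach-open-pass sequence, with retries on self-detected failure, can be sketched as a simple state machine. The states and transition logic below are illustrative assumptions, not the official episode software; a real entry would drive perception and arm control at each step.

```python
# Hedged sketch of the "Through the door" task sequence as a state machine.
# States and transitions are illustrative, not the official episode API.
from enum import Enum, auto

class DoorState(Enum):
    FIND_DOOR = auto()
    APPROACH = auto()
    GRASP_HANDLE = auto()
    OPEN = auto()
    PASS_THROUGH = auto()
    DONE = auto()

def next_state(state: DoorState, success: bool) -> DoorState:
    """Advance on success; retry the same step on failure, since the
    episode rewards robots that detect their own failures and try again."""
    if not success:
        return state  # retry the current step
    order = list(DoorState)
    return order[min(order.index(state) + 1, len(order) - 1)]
```

A per-run change in handle configuration or door torque would then only affect how each state is executed, not the overall sequence.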
Minh Nguyen, master's student at Hochschule Bonn-Rhein-Sieg and team member of b-it-bots, comments: “The complexity is in solving this problem in a general way. There are many sensors involved in the task of opening a door: vision, tactile, force-torque, etc., and humans have background knowledge of how to open or operate doors. We have been interacting with different types of doors all our lives. Getting information from the interaction between the robot and the door will help us better integrate all the different aspects necessary for opening it.”
The “Through the door” episode is funded by the EUROBENCH Horizon 2020 project, which aims to develop and consolidate a benchmarking methodology for human-centred robots such as prostheses, exoskeletons and humanoids. The researchers and engineers behind the Modular Active Door for RObot Benchmarking (MADROB) have developed a task benchmark (TBM) that can also be used for benchmarking at European Robotics League competitions.
“For the SciRoc challenge we use only the motion control part. The sensor part is tailored to the requirements of EUROBENCH and is not used in this competition” says Fontana.
Martino Migliavacca, CEO of Nova Labs, a spin-off of the Artificial Intelligence and Robotics laboratory of Politecnico di Milano and partner in MADROB, explains how the sensors on both sides of the door work: “The sensors measure the distance to the floor, so we can tell whether the robot has gone through the door. In this image you can see the signal of a sensor located above the door handle; the drop in the signal means that a robot or a person has gone through the door.”
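The signal drop Migliavacca describes can be turned into a passage detector with a simple threshold check. The values below are illustrative assumptions; the real MADROB benchmark processes its sensor data differently.

```python
# Hedged sketch: flag a door passage when a distance-to-floor sensor reading
# drops well below its baseline, meaning something interrupted the beam.
# Thresholds are illustrative, not the MADROB benchmark's actual parameters.
def passage_detected(samples, floor_distance=2.0, drop_threshold=0.5):
    """Return True if any sample falls well below the distance to the floor,
    i.e. a robot or person passed under the sensor."""
    return any(s < floor_distance - drop_threshold for s in samples)
```

For example, a reading of 0.8 m against a 2.0 m baseline would register as a passage, while normal noise around the baseline would not.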
The two teams competing in this episode use commercial robot platforms. Leeds Autonomous Service Robots participates with a PAL Robotics TIAGo robot, while b-it-bots participates with the current RoboCup@Home domestic standard platform, the Toyota HSR.
Don’t miss tomorrow’s Day Three recap on emergency drones in urban environments.
The ERL Smart Cities Robotics Challenge (SciRoc Challenge) includes five different episodes around the topic of smart shopping. Ten teams from five different countries have travelled to Milton Keynes, UK, to participate in this unique biennial event that brings together the three European Robotics League (ERL) competitions: consumer, professional and emergency.
The SciRoc challenge adds the new concept of episodes to the ERL benchmarking methodology. An episode targets one or two functionalities (tested during a specific ERL Functionality Benchmark, FBM) within an operational context, for instance delivering coffee orders or taking an elevator. To complete an episode, the robot might also be required to integrate other functionalities commonly used in a social environment, such as navigation or speech recognition.
Teams participate in one or more episodes during the week, and the best two will qualify for the finals on Saturday.
Human-Robot interaction and Mobility
The SciRoc challenge is organised in three categories, one of which is Human-Robot Interaction (HRI) and mobility. The episodes in this category involve robots able to show social behaviours, such as verbally interacting with (human) customers or navigating while respecting proxemics, in line with the ERL Consumer Robots league. Any robot, wheeled or legged, with navigation and verbal-communication capabilities can participate in the two episodes available in this category: Deliver coffee shop orders (E03) and Take the elevator (E04).
Episode (E03): Deliver coffee shop orders
In this episode, sponsored by COSTA Coffee, the robot assists the staff of a coffee shop by taking care of the customers. The robot must recognise the status of all tables (i.e. needs serving, already served, needs cleaning, and table ready), take orders, report the number of free tables, and deliver objects such as food and beverages to and from the customers’ tables.
“The question I had in mind when designing the episode of Deliver coffee shop orders was: how could a robot best assist humans in a shop? We are not talking about taking their jobs, but about helping them. For example, a catering robot can assist human servers at peak hours by handling the manual work at a fast pace and reducing the customer response time,” explains Meysam Basiri, researcher at Instituto Superior Tecnico (IST) and member of the technical committee of episode E03.
The main functionalities evaluated in this episode are people perception and object perception. However, the episode also requires other functionalities such as navigation and speech recognition.
The robot has 15 minutes to execute three phases sequentially and without interruption: 1) recognising the status of all the tables and reporting it, 2) serving an order, and 3) greeting a new customer and guiding them to a table.
As HRI is not the main focus of the episode, the selection of menu orders has been left open for the teams to decide. They can simply ask customers to communicate item numbers through speech, use QR codes, let customers select items on a display, etc.
Robots will also report their position in the shop, the table statuses and the orders to the smart data hub, so that the referees and the person at the counter have access to the information in real time.
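The kind of status message a robot might publish to the data hub can be sketched as a small JSON payload. The field names below are assumptions for illustration, not the official SciRoc data-hub schema.

```python
# Illustrative sketch of a robot's status report to the smart data hub.
# Field names and structure are assumptions, not the official SciRoc schema.
import json

def make_status_report(robot_id, x, y, table_states, open_orders):
    """Bundle the robot's position, table statuses and pending orders so
    referees and counter staff can follow the episode in real time."""
    return json.dumps({
        "robot": robot_id,
        "position": {"x": x, "y": y},
        "tables": table_states,   # e.g. {"t1": "needs serving"}
        "orders": open_orders,    # e.g. [{"table": "t1", "items": [2, 5]}]
    })
```

In practice such a payload would be posted to the hub periodically, and the referees' dashboard would render the latest report per robot.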
The four teams competing in this episode are SocRob, Gentlebots, eNTiTy and Leeds Autonomous Service Robots. All the teams participate with the PAL Robotics TIAGo platform, except SocRob, which participates with the Monarch platform from IDmind.
Carlos Azevedo, PhD student and team member of SocRob, comments: “This robot was developed to interact with children with cancer in a hospital. The group of researchers of the MONARCH project asked the kids to draw a robot and used the drawings to design the robotics platform. Therefore, it is not very tall and has that kind of cute face. People seem to empathise easily with the robot, which makes human-robot interaction easier.” He adds that, specifically for the coffee shop episode, the team has developed an artificial intelligence (AI) algorithm to hold conversations with the customers: “We thought it would be more interesting to start a human-like conversation when the robot doesn’t, for example, understand a menu order: ‘Sorry, did you ask for A or for B?’”
Episode (E04): Take the elevator
In this HRI & mobility episode, the robot must take the elevator together with regular customers of the shopping mall to reach a service located on another floor. The robot must enter and exit the elevator at the right floor in the presence of people, and interact with the smart data hub to find out which floor it needs to go to. For safety reasons, and to show the public how the interaction inside the elevator unfolds, the episode takes place in a realistic mock elevator.
The main functionality tested in this episode is navigation respecting proxemics; it also requires other functionalities such as spoken dialogue beyond simple commands and people detection.
Luca Iocchi, Assistant Professor at Sapienza University of Rome, and lead member of Episode E04 technical committee explains that in this episode there is a focus also on the social aspect. The robot needs to negotiate the space with humans inside the elevator, thus it may need to use human social protocols. “When we take the elevator with other people, we do our best to find a comfortable space in that confined environment. That negotiation of space requires social communication. In this context, a robot needs to use these social protocols to achieve its goal. If there is missing information, ask the human. One important, and novel, element of this competition is that the customers are volunteers with no technical knowledge, and they are also involved in the evaluation of the robots”.
This episode has a Human-Robot Interaction questionnaire as part of the performance evaluation and scoring. The questionnaire has been developed by Lun Wang, PhD student at Sapienza University of Rome and member of the technical committee of episode E04, with the external collaboration of Mary Ellen Foster, Senior Lecturer in Human-Robot Interaction at the University of Glasgow, UK. “We developed this questionnaire to evaluate users’ perceptions in a real context. We investigated previous work done on this type of questionnaire and filmed an experiment in a context similar to that of the episode. We asked users to select from a list of behaviours which ones they saw. From their answers we designed what is now the HRI questionnaire we are using in episode E04,” explains Lun Wang.
The efforts to generate this questionnaire and the episode rulebook are now part of the scientific paper “Developing a Questionnaire to Evaluate Customers’ Perception in the Smart City Robotic Challenge”, which will be presented at the 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2019).
The award for the best social robot has unleashed the teams’ creativity in developing the human-robot interaction. Referees have already seen volunteers laughing at some of the robots’ comments.
“Humans are helpful by nature. They want to take part in activities when they see a robot that reminds them of the future. If the robot is more human-like and behaves more like a human, it will be better accepted by customers. In that context we can also predict some actions or behaviours of the customers. We use the Pepper robot because it has a soft voice and a broad range of gestures, thus is less likely to scare people. If a robot relies on being expressive, it can rely on people to help it.” says Daniel Delgado Bellamy, research associate at the Bristol Robotics Laboratory and team member of HEARTS.
On this first day, the five participating teams (HEARTS, Gentlebots, Leeds Autonomous Service Robots, eNTiTy and Robotics Lab UC3) performed without major problems.
Team Gentlebots completed the episode on the first day with a TIAGo robot on loan from SciRoc platinum sponsor PAL Robotics. “We have been working with TIAGo since June this year. Our research team focuses on cognitive architectures oriented to social robots. We chose the platform without the manipulator because we were interested in social navigation and dialogue. TIAGo is definitely an easy and powerful platform to work with,” says Francisco Martin Rico, Associate Professor at Universidad Rey Juan Carlos and team leader of Gentlebots.
Watch this space for Day Two: robot manipulation in human environments!