All posts by NCCR Robotics


NCCR Robotics: A documentary

This short film documents some of the most innovative projects that emerged from the work of NCCR Robotics, the Swiss-wide consortium coordinated from 2010 to 2022 by EPFL professor Dario Floreano and ETHZ professor Robert Riener, which brought together other major research institutions across Switzerland.

Shot over the course of six months in Lausanne, Geneva, Zurich, Wangen an der Aare, Leysin and Lugano, the documentary is a unique look at the state of the art of medical, educational and rescue robotics, and at the specific contributions that Swiss researchers have made to the field over the last decade. In addition to showing the robots in action, the film features extended interviews with top experts including Stéphanie Lacour, Silvestro Micera, Davide Scaramuzza, Robert Riener, Pierre Dillenbourg, Margarita Chli and Dario Floreano.

Produced by NCCR Robotics and Viven.

12 years of NCCR Robotics

After 12 years of activity, NCCR Robotics officially ended on 30 November 2022.

We can proudly say that NCCR Robotics has had a truly transformational effect on the national robotics research landscape, creating novel synergies, strengthening key areas, and adding a unique signature that made Switzerland prominent and attractive at the international level.


Our highlights include:

  • Achieving several breakthroughs in wearable, rescue and educational robotics
  • Creating new master’s and doctoral programmes that will train generations of future robotics engineers
  • Graduating more than 200 PhD students and 100 postdocs, with more than 1’000 peer-reviewed publications
  • Spinning out several projects into companies, many of which have become international leaders and generated more than 400 jobs
  • Improving awareness of gender balance in robotics and substantially increasing the percentage of women in robotics in Switzerland
  • Kick-starting large outreach programs, such as Cybathlon, Swiss Drone Days, and Swiss Robotics Days, which will continue to increase public awareness of robotics for good

It is not the end of the story though: our partner institutions – EPFL, ETH Zurich, the University of Zurich, the University of Bern, the University of Basel, Università della Svizzera Italiana, EMPA – will continue to collaborate through the Innovation Booster Robotics, a new national program aimed at developing technology transfer activities and maintaining the network.

Research

The research programme of NCCR Robotics was organized around three Grand Challenges for future intelligent robots that can improve the quality of life: Wearable Robotics, Rescue Robotics, and Educational Robotics.

In the Wearable Robotics Grand Challenge, NCCR Robotics studied and developed a large range of novel prosthetic and orthotic robots, implantable sensors, and artificial intelligence algorithms to restore the capabilities of persons with disabilities and neurological disorders.

For example, researchers developed implantable and assistive technologies that allowed patients with completely paralyzed legs to walk again, thanks to a combination of assistive robots (such as Rysen), implantable microdevices that read brain signals and stimulate spinal cord nerves, and artificial intelligence that translates neural signals into gait patterns.

They also developed prosthetic hands with soft sensors and implantable neural stimulators that enable people to once again feel the haptic qualities of objects. Along the same line, they studied and developed prototypes of an extra arm, and the artificial intelligence to control it, that could allow humans to operate the additional artificial limb alongside their natural arms in situations that normally require more than one person.

Researchers also developed the MyoSuit, a textile soft exoskeleton that allows wheelchair users to stand up, take a few steps, and sit back down without external help.

In the Rescue Robotics Grand Challenge, researchers developed and deployed legged and flying robots with self-learning capabilities for use in disaster mitigation as well as in civil and industrial inspection.

Among the most notable results are:

  • ANYmal, a quadruped robot that won first prize in the DARPA Subterranean Challenge by exploring underground tunnels and identifying a number of objects;
  • K-rock, an amphibious robot inspired by salamanders and crocodiles that can swim, walk, and crouch through narrow passages;
  • a collision-resilient drone that has become the robot most widely used worldwide by rescue teams, governments, and companies for the inspection of confined spaces such as bridges, boilers, and ship tankers;
  • a whole family of foldable drones that can change shape to squeeze through narrow passages, protect nearby persons from their propellers, carry cargo of various sizes and weights, and twist their arms to get close to surfaces and perform repairs;
  • an avian-inspired drone with artificial feathers that approximates the flight agility of birds of prey.

In addition, researchers developed powerful learning algorithms that enable legged robots to walk up mountain trails and grassy terrain by adapting their gait, and flying robots that learn to fly and avoid fast-moving objects using bio-inspired vision systems, and that even learned to race through a circuit, beating world-champion human pilots. Researchers also proposed new methods to let inexperienced users and rescue officers interact with, and easily control, drones as if they were an extension of their own body.

In the Educational Robotics Grand Challenge, NCCR researchers created Thymio, a mobile robot for teaching programming and elements of robotics, of which more than 80’000 units have been deployed in classrooms across Switzerland, and Cellulo, a smaller modular robot that allows richer forms of interaction with pupils, as well as a broad range of learning activities (physics, mathematics, geography, games) and training methods for teaching teachers how to integrate robots into their lectures.

Researchers also teamed with Canton Vaud on a large-scale project to introduce robotics and computer science into all primary-school classes and have already trained more than one thousand teachers.

Outreach

Communication, knowledge (and technology) transfer to society and the economy

Over 12 years, NCCR Robotics researchers published approximately 500 articles in peer-reviewed journals and 500 articles in peer-reviewed conference proceedings, and filed approximately 50 patents (one third of which have already been granted). They also developed a tech-transfer support programme to help young researchers translate research results into commercially viable products. As a result, 16 spin-offs were supported, of which 14 have been incorporated and are still active. Some of these start-ups have become full scale-up companies with products sold all over the world; together they have raised more than five times the total funding of the NCCR over its 12 years, and generated several hundred new high-tech jobs in Switzerland.

Several initiatives were aimed at the public to communicate the importance of robotics for the quality of life.

  • For example, during the first phase NCCR Robotics organized an annual Robotics Festival at EPFL that at its peak attracted 17’000 visitors in one day.
  • In the second phase, Cybathlon was launched: a world-first Olympic-style competition for athletes with disabilities supported by assistive devices, later taken over by ETH Zurich, which will ensure its continuation.
  • Additionally, NCCR Robotics launched the Swiss Drone Days at EPFL, which combine drone exhibitions, drone races, and public presentations, and were later taken over by EPFL and most recently by the University of Zurich.
  • NCCR Robotics also organized the annual Swiss Robotics Day, bringing together researchers and industry representatives in a full day of high-profile technology presentations from top Swiss and international speakers, demonstrations of research prototypes and robotics products, a carousel of pitch presentations by young spin-offs, and several panel discussions and networking events.

Promotion of young scientists and of academic careers of women

NCCR Robotics helped develop a new master’s programme and a new PhD programme in robotics at EPFL, created exchange programmes and fellowships with ETH Zurich and top international universities with robotics programmes, and issued several awards for excellence in study, research, technology transfer, and societal impact.

NCCR Robotics has also been very active in promoting equal opportunities. Activities aimed at improving gender balance included dedicated exchange and travel grants, awards supporting career development, master’s study fellowships, outreach campaigns, promotional movies, and surveys for continuous assessment of the effectiveness of these actions. As a result, the percentage of women in the EPFL robotics master’s programme almost doubled in four years, and the number of women postdoctoral researchers and assistant professors doubled too. Although much remains to be done in Switzerland, the initial results of these actions are promising, and awareness of the importance of equal opportunity has become pervasive throughout the NCCR Robotics community and in all its research, outreach, and educational activities.

Beyond NCCR Robotics

In order to sustain the long-term impact of NCCR Robotics, EPFL launched a new Center for Intelligent Systems, where robotics is a major research pillar, and ETH Zurich created a Center for Robotics that includes large research facilities, annual summer schools, and activities to foster collaboration with industry.

Furthermore, EPFL built on NCCR Robotics’ educational technologies and training programmes to create the LEARN Center, which will continue training teachers in the use of robots and digital technologies in schools. Similarly, ETH Zurich built on the research and competence developed in the Wearable Robotics Grand Challenge to create the Competence Centre for Rehabilitation Engineering and Science, with the goal of restoring and maintaining independence, productivity, and quality of life for people with physical disabilities and of contributing towards an inclusive society.

As the project approached its conclusion, NCCR Robotics members applied for additional funding for the National Thematic Network Innovation Booster Robotics, which was launched in 2022 and will continue supporting networking activities and technology transfer in medical and mobile robotics for the next four years. Finally, a Swiss Robotics Association comprising stakeholders from academia and industry will be created to manage the Innovation Booster programme and offer a communication and collaboration platform for the transformed and enlarged robotics community that NCCR Robotics has helped to create.

Robots come out of the research lab

This year’s Swiss Robotics Day – an annual event run by the EPFL-led National Centre of Competence in Research (NCCR) Robotics – will be held at the Beaulieu convention center in Lausanne. For the first time, this annual event will take place over two days: the first day, on 4 November, will be reserved for industry professionals, while the second, on 5 November, will be open to the public.

Visitors at this year’s Swiss Robotics Day are in for a glimpse of some exciting new technology: a robotic exoskeleton that enables paralyzed patients to ski, a device the width of a strand of hair that can be guided through a human vein, a four-legged robot that can walk over obstacles, an artificial skin that can diagnose early-stage Parkinson’s, a swarm of flying drones, and more.

The event, now in its seventh year, was created by NCCR Robotics in 2015. It has expanded into a leading conference for the Swiss robotics industry, bringing together university researchers, businesses and citizens from across the country. For Swiss robotics experts, the event provides a chance to meet with peers, share ideas, explore new business opportunities and look for promising new hires. That’s what they’ll do on Friday, 4 November – the day reserved for industry professionals.

On Saturday, 5 November, the doors will open to the general public. Visitors of all ages can discover the latest inventions coming out of Swiss R&D labs and fabricated by local companies – including some startups. The event will feature talks and panel discussions on topics such as ethics in robotics, space robotics, robotics in art and how artificial intelligence can be used to promote sustainable development – all issues that will shape the future of the industry. PhD students will provide a snapshot of where robotics research stands today, while school-age children can sign up for robot-building workshops. Teachers can take part in workshops given by the Roteco robot teaching community and see how robotics technology can support learning in the classroom.

In the convention center’s 5,000 m² exhibit hall, some 70 booths will be set up with all sorts of robot demonstrations, complete with an area for flying drones. Technology developed as part of the Cybathlon international competition will be on display; this competition was introduced by NCCR Robotics in 2016 to encourage research on assistance systems for people with disabilities. Silke Pan will give a dance performance with a robotic exoskeleton, choreographed by Antoine Le Moal of the Béjart ballet company. Talks will be given in French and English. Entrance is free of charge, but registration is required.

Laying the foundation for future success

The 2022 Swiss Robotics Day will mark the end of NCCR Robotics, capping 12 years of cutting-edge research. The center was funded by the Swiss National Science Foundation and has sponsored R&D at over 30 labs in seven Swiss institutions: EPFL, ETH Zurich, the University of Zurich, IDSIA-SUPSI-USI in Lugano, the University of Bern, EMPA and the University of Basel. NCCR Robotics has given rise to 16 spin-offs in high-impact fields like wearable robots, drones, search-and-rescue systems and education. Together the spin-offs have raised over CHF 100 million in funding, and some of them, like Flyability and ANYbotics, have grown into established businesses creating hundreds of high-tech jobs. The center has also rolled out several educational and community initiatives to further the teaching of robotics in Switzerland.

After the center closes, some of its activities – especially those related to technology transfer – will be carried out by the Innovation Booster Robotics program sponsored by Innosuisse and housed at EPFL. This program, initially funded for three years, is designed to promote robotics in universities and the business world.

A day for industry professionals only

The first day of the event, 4 November, is intended for robotics-industry businesses, investors, researchers, students and journalists. It will kick off with a talk by Robin Murphy, a world-renowned expert in rescue robotics and a professor at Texas A&M University; she will be followed by Auke Ijspeert from EPFL’s Biorobotics Laboratory, Elena García Armada from the Center for Automation and Robotics in Spain, Raffaello D’Andrea (a pioneer in robotics-based inventory management) from ETH Zurich, Thierry Golliard from Swiss Post and Adrien Briod, the co-founder of Flyability.

In the afternoon, a panel discussion will explore how robots and artificial intelligence are changing the workplace. Experts will include Dario Floreano from NCCR Robotics and EPFL, Rafael Lalive from the University of Lausanne, Alisa Rupenyan-Vasileva from ETH Zurich, Agnès Petit Markowski from Mobbot and Pierre Dillenbourg from EPFL. Event participants will also have a chance to network that afternoon. The day will conclude with an awards ceremony to designate Switzerland’s best Master’s thesis on robotics. The booths and robot demonstrations will take place on both days of the event.

A virtual glimpse of NCCR Robotics research

NCCR Robotics develops a new generation of robots that can work side by side with humans: fighting disability, facing emergencies and transforming education. Check out the videos below to see them in more detail.

Swiss Robotics Day showcases innovations and collaborations between academia and industry

As the next edition of the Swiss Robotics Day is being prepared in Lausanne, let’s revisit the November 2021 edition, where the vitality and richness of Switzerland’s robotics scene were on full display at the StageOne Event and Convention Hall in Zurich. It was the first edition of NCCR Robotics’ flagship event since the start of the pandemic, and it surpassed the scale of previous editions, drawing in almost 500 people. You can see the photo gallery here.

Welcome notes from ETH President Joël Mesot and NCCR Robotics Director Dario Floreano opened a dense conference programme chaired by NCCR Robotics co-Director Robert Riener, which included scientific presentations from Marco Hutter (ETH Zurich), Stéphanie Lacour and Herb Shea (both from EPFL), as well as the industry perspective from ABB’s Marina Bill, Simon Johnson from the Drone Industry Association and Hocoma co-founder Gery Colombo. A final roundtable, including Robert Riener, Hocoma’s Serena Maggioni, Liliana Paredes from Rehaklinik and Georg Rauter from the University of Basel, focused on the potential and the challenges of innovation in healthcare robotics.

Over 50 exhibitors – including scientific laboratories as well as start-ups and large companies – filled the 3,300 square-meter venue, demonstrating technologies ranging from mobile robots to wearable exoskeletons, from safe delivery drones to educational robots and much more. Sixteen young companies presented their innovations in a start-up carousel. Dozens of professional meetings took place throughout the day, allowing a diverse audience of entrepreneurs, funders, academics and policy makers to network and explore possible collaborations. A crowd of young researchers participated in a mentoring session where Marina Bill (ABB), Auke Ijspeert (EPFL) and Iselin Frøybu (Emovo Care) provided advice on academic and industrial careers. Sixteen students participated in the Cybathlon @school competition, experimenting with robotic technologies for disability, and the day closed with the official announcement of CYBATHLON 2024.

During the event, the next chapter for Swiss robotics was also announced: the launch of the NTN Innovation Booster on robotics, which will run from 2022 to 2025 and will be led by EPFL’s Aude Billard. Funded by Innosuisse, the NTN will act as a platform for new ideas and partnerships, supporting innovation through “idea generator bubbles” and specific funding calls.

The 2021 Swiss Robotics Day marked the beginning of NCCR Robotics’ final year. The project, launched in 2010, is on track to meet all its scientific goals in the three areas of wearable, rescue and educational robotics, while continuing to focus on supporting spin-offs, advancing robotics education and improving equality of opportunity for all robotics researchers. The conclusion of NCCR Robotics will be marked by the next edition of the Swiss Robotics Day, a larger, two-day public event that will take place in Lausanne on 4 and 5 November 2022.

Wearable robotics

The goal of the NCCR Grand Challenge on Wearable Robotics is to develop a novel generation of wearable robotic systems that are more comfortable for patients and more extensively usable in a clinical environment. These new technological solutions will help in the recovery of movement and grasping after cerebrovascular accidents (strokes) and spinal cord lesions. They can be used to enhance physiotherapy by improving training, thus encouraging the brain to repair its networks (neurorehabilitation). And they can be used as assistive devices (e.g. prosthetic limbs and exoskeletons) to support paralysed people in daily life situations.

While current wearable robots are making huge advances in the lab, there is some way to go before they become part of everyday life for people with disabilities. To be functional, robots must work with the user without causing damage or irritation (in the case of externally worn devices) or being rejected by the host (in the case of implants), they must have their own energy source that does not need to be constantly plugged in or recharged, and they need to be affordable.

Rescue robotics

After a natural disaster such as an earthquake or flood, it is often very dangerous for teams of rescue workers to go into affected areas to look for victims and survivors.

The idea behind robots for rescue activities is to create robust robots that can travel into areas too dangerous for humans and rescue dogs. Robots can be used to assess the situation and to locate people who may be trapped and to relay the location back to the rescue teams, so that all efforts can be concentrated on areas where victims are known to be. Robots are also being developed to carry medical supplies and food, thereby focusing resources where they are most needed.

The main research issues in mobile robotics for search-and-rescue missions are durability and usability: how to design robots that are easily transported, function efficiently in all weather conditions, have long-lasting power, can navigate autonomously, and carry sensors effective enough to pick out victims.

Educational robotics

In the 1970s and 1980s, robots were typically introduced in schools as a tool for teaching robotics or other Science, Technology, Engineering and Mathematics (STEM) subjects. However, this narrow focus held back their adoption for wider educational purposes. This early failure of adoption in classrooms happened because robots were unreliable, expensive and limited in application.

Nowadays, with robots being cheaper and more easily deployable, applications in education have become easier. In the past fifteen years, there have been an increasing number of extracurricular robotics activities showing the popularity of robotics in an informal educational context. However, robots are still underused in schools for formal education. Although there is no agreement over the exact reasons for this situation, it seems clear, from different studies, that teachers play a key role in the introduction of technology in schools.

During the first two phases of NCCR Robotics, two products were developed: the Thymio robot, a mobile robot increasingly used to teach robotics and programming, and Cellulo, a small, inexpensive and robust robot that kids can move with their hands and use in groups.

Current research focuses on two aspects. The first one is inventing new forms of interactions between learners and tangible swarms based on the Cellulo robot, and studying the learning outcomes enabled by these interactions.

The second aspect is investigating teacher adoption of robotics from two points of view: platform usability and teacher training. Research will show how to train teachers and exploit Thymio and Cellulo in their daily activities and how to minimize their orchestration load. Activities relating to computational thinking skills are the main target, with school topics outside the STEM domains also included.

New implant offers promise for the paralyzed

Michel Roccati stands up and walks in Lausanne. © EPFL / Alain Herzog 2021

The images made headlines around the world in late 2018. David Mzee, who had been left paralyzed by a partial spinal cord injury suffered in a sports accident, got up from his wheelchair and began to walk with the help of a walker. This was the first proof that Courtine and Bloch’s system – which uses electrical stimulation to reactivate spinal neurons – could work effectively in patients.

Fast forward three years, and a new milestone has just been reached. The research team led by both Courtine, a professor at EPFL and member of NCCR Robotics, and Bloch, a professor and neurosurgeon at CHUV, has enhanced their system with more sophisticated implants controlled by artificial-intelligence software. These implants can stimulate the region of the spinal cord that activates the trunk and leg muscles. Thanks to this new technology, three patients with complete spinal cord injury were able to walk again outside the lab. “Our stimulation algorithms are still based on imitating nature,” says Courtine. “And our new, soft implanted leads are designed to be placed underneath the vertebrae, directly on the spinal cord. They can modulate the neurons regulating specific muscle groups. By controlling these implants, we can activate the spinal cord like the brain would do naturally to have the patient stand, walk, swim or ride a bike, for example.”

Patient with complete spinal cord injury (left) and incomplete spinal cord injury (right) walking in Lausanne, Switzerland. ©NeuroRestore – Jimmy Ravier

The new system is described in an article appearing in Nature Medicine that was also co-authored by Silvestro Micera, who leads the NCCR Robotics Wearable Robotics Grand Challenge. “Our breakthrough here is the longer, wider implanted leads with electrodes arranged in a way that corresponds exactly to the spinal nerve roots,” says Bloch. “That gives us precise control over the neurons regulating specific muscles.” Ultimately, it allows for greater selectivity and accuracy in controlling the motor sequences for a given activity.

Extensive training is of course necessary for patients to get comfortable using the device, but the pace and scope of rehabilitation are remarkable. “All three patients were able to stand, walk, pedal, swim and control their torso movements in just one day, after their implants were activated!” says Courtine. “That’s thanks to the specific stimulation programs we wrote for each type of activity. Patients can select the desired activity on the tablet, and the corresponding protocols are relayed to the pacemaker in the abdomen.”

Read the full story on the EPFL website.

Flying high-speed drones into the unknown with AI

When it comes to exploring complex and unknown environments such as forests, buildings or caves, drones are hard to beat. They are fast, agile and small, and they can carry sensors and payloads virtually everywhere. However, autonomous drones can hardly find their way through an unknown environment without a map. For the moment, expert human pilots are needed to unlock the full potential of drones.

“To master autonomous agile flight, you need to understand the environment in a split second to fly the drone along collision-free paths,” says Davide Scaramuzza, who leads the Robotics and Perception Group at the University of Zurich and the NCCR Robotics Rescue Robotics Grand Challenge. “This is very difficult both for humans and for machines. Expert human pilots can reach this level after years of perseverance and training. But machines still struggle.”

The AI algorithm learns to fly in the real world from a simulated expert

In a new study, Scaramuzza and his team trained an autonomous quadrotor to fly through previously unseen environments such as forests, buildings, ruins and trains, at speeds of up to 40 km/h, without crashing into trees, walls or other obstacles. All this was achieved relying only on the quadrotor’s on-board cameras and computation.

The drone’s neural network learned to fly by watching a sort of “simulated expert” – an algorithm that flew a computer-generated drone through a simulated environment full of complex obstacles. At all times, the algorithm had complete information on the state of the quadrotor and readings from its sensors, and could rely on enough time and computational power to always find the best trajectory.

Such a “simulated expert” cannot be used outside of simulation, but its data were used to teach the neural network to predict the best trajectory from sensor data alone. This is a considerable advantage over existing systems, which first use sensor data to build a map of the environment and then plan trajectories within it, two steps that take time and make high-speed flight impossible.
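The student-teacher scheme described above can be sketched as behaviour cloning. The toy example below is a deliberately simplified, NumPy-only illustration of the core idea, not the published system (which uses a deep network and a trajectory-optimising expert); the linear "expert", the noise level and all names are assumptions made for the sketch. A privileged expert that sees the full state labels a dataset in simulation, and a student policy is fitted to reproduce those labels from noisy sensor readings alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_policy(state):
    # Privileged "simulated expert": maps the full state to an ideal command.
    # A fixed linear map stands in for the trajectory optimiser used in simulation.
    W_true = np.array([[0.5, -1.0, 0.25]])
    return state @ W_true.T

# Collect (sensor observation, expert action) pairs in "simulation".
states = rng.normal(size=(2000, 3))
observations = states + 0.05 * rng.normal(size=states.shape)  # noisy sensors
actions = expert_policy(states)

# Train the student to imitate the expert from sensors alone
# (linear least squares here; a neural network in the real system).
W_student, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# At deployment the student needs only sensor data: no map, no full state.
test_obs = rng.normal(size=(100, 3))
pred = test_obs @ W_student
imitation_error = float(np.mean((pred - expert_policy(test_obs)) ** 2))
```

Because the student is trained to map sensors directly to trajectories, the mapping and planning steps of classical pipelines disappear at flight time, which is what enables high-speed navigation.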

No exact replica of the real world needed

After being trained in simulation, the system was tested in the real world, where it was able to fly in a variety of environments without collisions at speeds of up to 40 km/h. “While humans require years to train, the AI, leveraging high-performance simulators, can reach comparable navigation abilities much faster, basically overnight,” says Antonio Loquercio, a PhD student and co-author of the paper. “Interestingly, these simulators do not need to be an exact replica of the real world. With the right approach, even simplistic simulators are sufficient,” adds Elia Kaufmann, another PhD student and co-author.

The applications are not limited to quadrotors. The researchers explain that the same approach could be useful for improving the performance of autonomous cars, or could even open the door to a new way of training AI systems for operations in domains where collecting data is difficult or impossible, for example on other planets.

According to the researchers, the next steps will be to make the drone improve from experience, as well as to develop faster sensors that can provide more information about the environment in a smaller amount of time – thus allowing drones to fly safely even at speeds above 40 km/h.

Literature

An open-source version of the paper can be found here.

Media contacts

Prof. Dr. Davide Scaramuzza – Robotics and Perception Group
Department of Informatics
University of Zurich
Phone +41 44 635 24 09
E-mail: sdavide@ifi.uzh.ch

Antonio Loquercio – Robotics and Perception Group
Department of Informatics
University of Zurich
Phone +41 44 635 43 73
E-mail: loquercio@ifi.uzh.ch

Elia Kaufmann – Robotics and Perception Group
Department of Informatics
University of Zurich
Phone +41 44 635 43 73
E-mail: ekaufmann@ifi.uzh.ch

Media Relations University of Zurich

Phone +41 44 634 44 67
E-mail: mediarelations@kommunikation.uzh.ch

Swimming robot gives fresh insight into locomotion and neuroscience

Scientists at the Biorobotics Laboratory (BioRob) in EPFL’s School of Engineering are developing innovative robots in order to study locomotion in animals and, ultimately, gain a better understanding of the neuroscience behind the generation of movement. One such robot is AgnathaX, a swimming robot employed in an international study with researchers from EPFL as well as Tohoku University in Japan, Institut Mines-Télécom Atlantique in Nantes, France, and Université de Sherbrooke in Canada. The study has just been published in Science Robotics.

A long, undulating swimming robot

“Our goal with this robot was to examine how the nervous system processes sensory information so as to produce a given kind of movement,” says Prof. Auke Ijspeert, the head of BioRob and a member of the Rescue Robotics Grand Challenge at NCCR Robotics. “This mechanism is hard to study in living organisms because the different components of the central and peripheral nervous systems are highly interconnected within the spinal cord. That makes it hard to understand their dynamics and the influence they have on each other.”

AgnathaX is a long, undulating swimming robot designed to mimic a lamprey, which is a primitive eel-like fish. It contains a series of motors that actuate the robot’s ten segments, which replicate the muscles along a lamprey’s body. The robot also has force sensors distributed laterally along its segments that work like the pressure-sensitive cells on a lamprey’s skin and detect the force of the water against the animal.

The research team ran mathematical models with their robot to simulate the different components of the nervous system and better understand its intricate dynamics. “We had AgnathaX swim in a pool equipped with a motion tracking system so that we could measure the robot’s movements,” says Laura Paez, a PhD student at BioRob. “As it swam, we selectively activated and deactivated the central and peripheral inputs and outputs of the nervous system at each segment, so that we could test our hypotheses about the neuroscience involved.”
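The selective activation experiment can be pictured with a toy model. The sketch below is a hypothetical chain of coupled phase oscillators with an optional local force-feedback term, a common abstraction for spinal pattern generators, not the actual AgnathaX controller; all parameter values are made up for illustration.

```python
import numpy as np

N = 10            # body segments, as in the robot
FREQ = 1.0        # intrinsic oscillation frequency [Hz] (illustrative)
W_CENTRAL = 4.0   # coupling strength between neighbouring segments (illustrative)
W_FEEDBACK = 2.0  # gain on the local force-sensor term (illustrative)

def step(phases, forces, dt, central=True, peripheral=True):
    """Advance the segment phases by one Euler step."""
    dphi = 2 * np.pi * FREQ * np.ones(N)
    if central:
        # nearest-neighbour coupling propagates a travelling wave along the body
        for i in range(N):
            for j in (i - 1, i + 1):
                if 0 <= j < N:
                    dphi[i] += W_CENTRAL * np.sin(phases[j] - phases[i])
    if peripheral:
        # local hydrodynamic force entrains each oscillator independently
        dphi += W_FEEDBACK * forces * np.cos(phases)
    return phases + dt * dphi

phases = np.linspace(0, np.pi, N)       # initial travelling wave
forces = 0.1 * np.sin(phases)           # stand-in force-sensor readings
phases = step(phases, forces, dt=0.01)  # one 10 ms update
```

Passing `central=False` or `peripheral=False` mimics the selective deactivation described above: with both pathways off, each segment just oscillates at its intrinsic frequency, with no coordination.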

Two systems working in tandem

The scientists found that both the central and peripheral nervous systems contribute to the generation of robust locomotion. The benefit of having the two systems work in tandem is that it provides increased resilience against neural disruptions, such as failures in the communication between body segments or muted sensing mechanisms. “In other words, by drawing on a combination of central and peripheral components, the robot could resist a larger number of neural disruptions and keep swimming at high speeds, as opposed to robots with only one kind of component,” says Kamilo Melo, a co-author of the study. “We also found that the force sensors in the skin of the robot, along with the physical interactions of the robot’s body and the water, provide useful signals for generating and synchronizing the rhythmic muscle activity necessary for locomotion.” As a result, when the scientists cut communication between the different segments of the robot to simulate a spinal cord lesion, the signals from the force sensors measuring the water pushing against the robot’s body were enough to maintain its undulating motion.

These findings can be used to design more effective swimming robots for search and rescue missions and environmental monitoring. For instance, the controllers and force sensors developed by the scientists can help such robots navigate through flow perturbations and better withstand damage to their technical components. The study also has ramifications in the field of neuroscience. It confirms that peripheral mechanisms provide an important function that may be overshadowed by the better-known central mechanisms. “These peripheral mechanisms could play an important role in the recovery of motor function after spinal cord injury, because, in principle, no connections between different parts of the spinal cord are needed to maintain a traveling wave along the body,” says Robin Thandiackal, a co-author of the study. “That could explain why some vertebrates are able to retain their locomotor capabilities after a spinal cord lesion.”


New algorithm flies drones faster than human racing pilots

To be useful, drones need to be quick. Because of their limited battery life they must complete whatever task they have – searching for survivors on a disaster site, inspecting a building, delivering cargo – in the shortest possible time. And they may have to do it by going through a series of waypoints like windows, rooms, or specific locations to inspect, adopting the best trajectory and the right acceleration or deceleration at each segment.

Algorithm outperforms professional pilots

The best human drone pilots are very good at doing this and have so far always outperformed autonomous systems in drone racing. Now, a research group at the University of Zurich (UZH) has created an algorithm that can find the quickest trajectory to guide a quadrotor – a drone with four propellers – through a series of waypoints on a circuit. “Our drone beat the fastest lap of two world-class human pilots on an experimental race track”, says Davide Scaramuzza, who heads the Robotics and Perception Group at UZH and the Rescue Robotics Grand Challenge of NCCR Robotics, which funded the research.

“The novelty of the algorithm is that it is the first to generate time-optimal trajectories that fully consider the drones’ limitations”, says Scaramuzza. Previous works relied on simplifications of either the quadrotor system or the description of the flight path, and thus they were sub-optimal. “The key idea is, rather than assigning sections of the flight path to specific waypoints, that our algorithm just tells the drone to pass through all waypoints, but not how or when to do that”, adds Philipp Foehn, PhD student and first author of the paper in Science Robotics.
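Why leaving the waypoint state free matters can be illustrated with a toy 1-D example. The sketch below uses a double integrator with a made-up acceleration bound, not the paper's quadrotor model: forcing the drone to a fixed state at the waypoint (here, a full stop) is slower than merely requiring it to pass through.

```python
import math

A_MAX = 1.0  # toy acceleration bound in m/s^2 (illustrative)

def segment_time(d, v_in, v_out, a=A_MAX):
    """Minimum time to cover distance d with entry speed v_in and exit speed
    v_out, using a bang-bang accelerate-then-brake profile."""
    vp = math.sqrt((2 * a * d + v_in**2 + v_out**2) / 2)  # peak speed
    if vp < max(v_in, v_out):        # required exit/entry speed unreachable
        return math.inf
    return (vp - v_in) / a + (vp - v_out) / a

d1, d2 = 10.0, 30.0  # distances: start -> waypoint -> goal (made up)

# Naive plan: fix the waypoint state (come to a full stop there).
t_naive = segment_time(d1, 0, 0) + segment_time(d2, 0, 0)

# Time-optimal idea: only require *passing* the waypoint; search its speed.
t_best = min(segment_time(d1, 0, v) + segment_time(d2, v, 0)
             for v in [0.1 * k for k in range(0, 80)])

print(f"stop-at-waypoint: {t_naive:.2f} s, free passage speed: {t_best:.2f} s")
```

Even in this crude 1-D sketch, letting the optimizer choose the waypoint crossing speed cuts several seconds off the total time, which is the intuition behind not prescribing how or when each waypoint is reached.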

External cameras provide position information in real-time

The researchers had the algorithm and two human pilots fly the same quadrotor through a race circuit. They employed external cameras to precisely capture the motion of the drones and – in the case of the autonomous drone – to give real-time information to the algorithm on where the drone was at any moment. To ensure a fair comparison, the human pilots were given the opportunity to train on the circuit before the race. But the algorithm won: all its laps were faster than the human ones, and the performance was more consistent. This is not surprising, because once the algorithm has found the best trajectory it can reproduce it faithfully many times, unlike human pilots.

Before commercial applications, the algorithm will need to become less computationally demanding, as it currently takes up to an hour for a computer to calculate the time-optimal trajectory for the drone. Also, at the moment, the drone relies on external cameras to compute its position at any moment. In future work, the scientists want to use onboard cameras. But the demonstration that an autonomous drone can in principle fly faster than human pilots is promising. “This algorithm can have huge applications in package delivery with drones, inspection, search and rescue, and more”, says Scaramuzza.

Literature

Philipp Foehn, Angel Romero, Davide Scaramuzza. “Time-Optimal Planning for Quadrotor Waypoint Flight”. Science Robotics. July 21, 2021. DOI: 10.1126/scirobotics.abh1221

How to keep drones flying when a motor fails

Drone with event camera

Robotics researchers at the University of Zurich show how onboard cameras can be used to keep damaged quadcopters in the air and flying stably – even without GPS.

As anxious passengers are often reassured, commercial aircraft can easily continue to fly even if one of the engines stops working. But for drones with four propellers – also known as quadcopters – the failure of one motor is a bigger problem. With only three rotors working, the drone loses stability and inevitably crashes unless an emergency control strategy sets in.

Researchers at the University of Zurich and the Delft University of Technology have now found a solution to this problem: They show that information from onboard cameras can be used to stabilize the drone and keep it flying autonomously after one rotor suddenly gives out.

Spinning like a ballerina

“When one rotor fails, the drone begins to spin on itself like a ballerina,” explains Davide Scaramuzza, head of the Robotics and Perception Group at UZH and of the Rescue Robotics Grand Challenge at NCCR Robotics, which funded the research. “This high-speed rotational motion causes standard controllers to fail unless the drone has access to very accurate position measurements.” In other words, once it starts spinning, the drone is no longer able to estimate its position in space and eventually crashes.

One way to solve this problem is to provide the drone with a reference position through GPS. But there are many places where GPS signals are unavailable. In their study, the researchers solved this issue for the first time without relying on GPS, instead using visual information from different types of onboard cameras.

Event cameras work well in low light

The researchers equipped their quadcopters with two types of cameras: standard ones, which record images several times per second at a fixed rate, and event cameras, which are based on independent pixels that are only activated when they detect a change in the light that reaches them.

The research team developed algorithms that combine information from the two sensors and use it to track the quadrotor’s position relative to its surroundings. This enables the onboard computer to control the drone as it flies – and spins – with only three rotors. The researchers found that both types of cameras perform well in normal light conditions. “When illumination decreases, however, standard cameras begin to experience motion blur that ultimately disorients the drone and crashes it, whereas event cameras also work well in very low light,” says first author Sihao Sun, a postdoc in Scaramuzza’s lab.

Increased safety to avoid accidents

The problem addressed by this study is a relevant one, because quadcopters are becoming widespread and rotor failure may cause accidents. The researchers believe that this work can improve quadrotor flight safety in all areas where GPS signal is weak or absent.

A raptor-inspired drone with morphing wing and tail

By Nicola Nosengo

NCCR Robotics researchers at EPFL have developed a drone with a feathered wing and tail that give it unprecedented flight agility.

The northern goshawk is a fast, powerful raptor that flies effortlessly through forests. This bird was the design inspiration for the next-generation drone developed by scientists of the Laboratory of Intelligent Systems of EPFL led by Dario Floreano. They carefully studied the shape of the bird’s wings and tail and its flight behavior, and used that information to develop a drone with similar characteristics.

“Goshawks move their wings and tails in tandem to carry out the desired motion, whether it is rapid changes of direction when hunting in forests, fast flight when chasing prey in open terrain, or efficient gliding to save energy,” says Enrico Ajanic, the first author and a PhD student in Floreano’s lab. Floreano adds: “Our design extracts principles of avian agile flight to create a drone that can approximate the flight performance of raptors, but it also tests the biological hypothesis that a morphing tail plays an important role in achieving faster turns, decelerations, and even slow flight.”

A drone that moves its wings and tail

The engineers had already designed a bird-inspired drone with a morphing wing back in 2016. In a step forward, their new model can adjust the shape of its wing and tail thanks to its artificial feathers. “It was fairly complicated to design and build these mechanisms, but we were able to improve the wing so that it behaves more like that of a goshawk,” says Ajanic. “Now that the drone includes a feathered tail that morphs in synergy with the wing, it delivers unparalleled agility.” The drone changes the shape of its wing and tail to change direction faster, fly slower without falling to the ground, and reduce air resistance when flying fast. It uses a propeller for forward thrust instead of flapping wings because this is more efficient and makes the new wing and tail system applicable to other winged drones and airplanes.

The advantage of winged drones over quadrotor designs is that they have a longer flight time for the same weight. However, quadrotors tend to have greater dexterity, as they can hover in place and make sharp turns. “The drone we just developed is somewhere in the middle. It can fly for a long time yet is almost as agile as quadrotors,” says Floreano. This combination of features is especially useful for flying in forests or in cities between buildings, as may be necessary during rescue operations. The project is part of the Rescue Robotics Grand Challenge of NCCR Robotics.

Opportunities for artificial intelligence

Flying this new type of drone isn’t easy, due to the large number of wing and tail configurations possible. To take full advantage of the drone’s flight capabilities, Floreano’s team plans to incorporate artificial intelligence into the drone’s flight system so that it can fly semi-automatically. The team’s research has been published in Science Robotics.

Drones learn acrobatics by themselves


Researchers from NCCR Robotics at the University of Zurich and Intel developed an algorithm that pushes autonomous drones to their physical limit.

Since the dawn of flight, acrobatics has been a way for pilots to prove their bravery and worth. It is also a way to push the envelope of what can be done with an aircraft, learning lessons that are useful to all pilots and engineers. The same is true for unmanned flight. Professional drone pilots perform acrobatic maneuvers in dedicated competitions, pushing drones to their physical limits and perfecting their control and efficiency.

Now a collaboration between researchers from the University of Zurich (part of the NCCR Robotics consortium) and Intel has developed a quadcopter that can learn to fly acrobatics autonomously, paving the way to drones that can fully exploit their agility and speed, and cover more distance within their battery life. Though no drone mission will probably ever require a power loop or a Matty flip – the typical acrobatic maneuvers – a drone that can perform them autonomously is likely to be more efficient at all times.

A step forward towards integrating drones in our everyday life

Researchers of the University of Zurich and Intel developed a novel algorithm that pushes autonomous drones with only on-board sensing and computation close to their physical limits. To prove the efficiency of the developed algorithm, the researchers made an autonomous quadrotor fly acrobatic maneuvers such as the Power Loop, the Barrel Roll, and the Matty Flip, during which the drone incurs accelerations of up to 3g. “Several applications of drones, such as search-and-rescue or delivery, will strongly benefit from faster drones, which can cover large distances in limited time. With this algorithm we have taken a step forward towards integrating autonomously navigating drones into our everyday life”, says Davide Scaramuzza, Professor and Director of the Robotics and Perception Group at the University of Zurich, and head of the Rescue Robotics Grand Challenge for NCCR Robotics.

Simulation for training, real-world for testing
The navigation algorithm that allows drones to fly acrobatic maneuvers is an artificial neural network that directly converts observations from the on-board camera and inertial sensors into control commands. This neural network is trained exclusively in simulation. Learning agile maneuvers entirely in simulation has several advantages: (i) maneuvers can be simply specified by reference trajectories in simulation and do not require expensive demonstrations by a human pilot, (ii) training is safe and does not pose any physical risk to the quadrotor, and (iii) the approach can scale to a large number of diverse maneuvers, including ones that can only be performed by the very best human pilots.

The algorithm transfers its knowledge to reality by using appropriate abstractions of the visual and inertial inputs (i.e., feature tracks and integrated inertial measurements), which decreases the gap between the simulated and physical world. Indeed, without physically-accurate modeling of the world or any fine-tuning on real-world data, the trained neural network can be deployed on a real quadrotor to perform acrobatic maneuvers.
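As a rough illustration of this kind of abstraction (names, shapes and values below are hypothetical, not the authors' code), raw images can be reduced to sparse, normalized feature tracks, i.e. the positions and displacements of a handful of tracked keypoints, which look statistically similar in simulation and in the real world:

```python
import numpy as np

def feature_tracks(kp_prev, kp_curr, width, height):
    """Normalized keypoint positions and displacements between two frames.

    kp_prev, kp_curr: (K, 2) arrays of pixel coordinates of K tracked
    keypoints in the previous and current frame.
    """
    scale = np.array([width, height], dtype=float)
    prev = kp_prev / scale                 # map pixel coords into [0, 1]
    curr = kp_curr / scale
    return np.hstack([curr, curr - prev])  # per keypoint: (x, y, dx, dy)

# Two keypoints tracked between consecutive frames (made-up coordinates)
kp_prev = np.array([[100.0, 120.0], [300.0, 200.0]])
kp_curr = np.array([[104.0, 118.0], [310.0, 205.0]])
obs = feature_tracks(kp_prev, kp_curr, width=640, height=480)
# obs has one row per keypoint, dimensionless, independent of camera resolution
```

A policy fed such low-dimensional, appearance-free observations never sees the textures and lighting that differ between the simulator and the real camera, which is the intuition behind the reduced sim-to-real gap.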

Towards fully autonomous drones
Within a few hours of training in simulation, the algorithm learns to fly acrobatic maneuvers with an accuracy comparable to that of professional human pilots. Nevertheless, the research team warns that there is still a significant gap between what human pilots and autonomous drones can do. “The best human pilots still have an edge over autonomous drones given their ability to quickly interpret and adapt to unexpected situations and changes in the environment,” says Prof. Scaramuzza.

Paper: E. Kaufmann*, A. Loquercio*, R. Ranftl, M. Müller, V. Koltun, D. Scaramuzza “Deep Drone Acrobatics”, Robotics: Science and Systems (RSS), 2020
Paper
Video
Code

This drone can play dodgeball – and win

By Nicola Nosengo

Drones can do many things, but avoiding obstacles is not their strongest suit yet – especially when they move quickly. Although many flying robots are equipped with cameras that can detect obstacles, it typically takes from 20 to 40 milliseconds for the drone to process the image and react. It may seem quick, but it is not enough to avoid a bird or another drone, or even a static obstacle when the drone itself is flying at high speed. This can be a problem when drones are used in unpredictable environments, or when there are many of them flying in the same area.

Reaction of a few milliseconds
In order to solve this problem, researchers at the University of Zurich have equipped a quadcopter (a drone with four propellers) with special cameras and algorithms that reduced its reaction time down to a few milliseconds – enough to avoid a ball thrown at it from a short distance. The results, published in Science Robotics, can make drones more effective in situations such as the aftermath of a natural disaster. The work was funded by the Swiss National Science Foundation through the National Center of Competence in Research (NCCR) Robotics.

“For search and rescue applications, such as after an earthquake, time is very critical, so we need drones that can navigate as fast as possible in order to accomplish more within their limited battery life”, explains Davide Scaramuzza, who leads the Robotics and Perception Group at the University of Zurich as well as the NCCR Robotics Search and Rescue Grand Challenge. “However, by navigating fast, drones are also more exposed to the risk of colliding with obstacles, even more so if these are moving. We realised that a novel type of camera, called an event camera, is a perfect fit for this purpose”.

Event cameras have smart pixels
Traditional video cameras, such as the ones found in every smartphone, work by regularly taking snapshots of the whole scene. This is done by exposing the pixels of the image all at the same time. This way, though, a moving object can only be detected after all the pixels have been analysed by the on-board computer. Event cameras, on the other hand, have smart pixels that work independently of each other. The pixels that detect no changes remain silent, while the ones that see a change in light intensity immediately send out the information. This means that only a tiny fraction of all the pixels in the image needs to be processed by the onboard computer, greatly speeding up the computation.
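This pixel behaviour can be sketched with a toy model (the contrast threshold, array size and values below are illustrative, not a real sensor's parameters): each pixel fires an event only when its log intensity changes by more than a threshold, so a static scene produces no data at all.

```python
import numpy as np

C = 0.2  # contrast threshold in log-intensity units (illustrative)

def events(frame_prev, frame_curr, threshold=C):
    """Return (row, col, polarity) for pixels whose log intensity changed
    by more than the threshold; all other pixels stay silent."""
    dlog = np.log(frame_curr + 1e-6) - np.log(frame_prev + 1e-6)
    rows, cols = np.nonzero(np.abs(dlog) > threshold)
    polarity = np.sign(dlog[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

prev = np.full((4, 4), 100.0)  # uniform 4x4 toy scene
curr = prev.copy()
curr[1, 2] = 150.0             # one pixel brightens
evs = events(prev, curr)       # only that pixel fires; the other 15 are silent
```

Here a single event is produced for the whole frame pair, which is why downstream processing can be orders of magnitude lighter than analysing every pixel of a conventional image.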

Event cameras are a recent innovation, and existing object-detection algorithms for drones do not work well with them. So the researchers had to invent their own algorithms, which collect all the events recorded by the camera over a very short time, then subtract the effect of the drone’s own movement, which typically accounts for most of the changes in what the camera sees.

Only 3.5 milliseconds to detect incoming objects
Scaramuzza and his team first tested the cameras and algorithms alone. They threw objects of various shapes and sizes towards the camera, and measured how efficient the algorithm was in detecting them. The success rate varied between 81 and 97 per cent, depending on the size of the object and the distance of the throw, and the system only took 3.5 milliseconds to detect incoming objects.

Then the most serious test began: putting cameras on an actual drone, flying it both indoors and outdoors and throwing objects directly at it. The drone was able to avoid the objects – including a ball thrown from a 3-meter distance and travelling at 10 meters per second – more than 90 per cent of the time. When the drone “knew” the size of the object in advance, one camera was enough. When, instead, it had to face objects of varying size, two cameras were used to give it stereoscopic vision.

According to Scaramuzza, these results show that event cameras can increase the speed at which drones can navigate by up to ten times, thus expanding their possible applications. “One day drones will be used for a large variety of applications, such as delivery of goods, transportation of people, aerial filmography and, of course, search and rescue,” he says. “But enabling robots to perceive and make decisions faster can be a game changer also for other domains where reliably detecting incoming obstacles plays a crucial role, such as automotive, goods delivery, transportation, mining, and remote inspection with robots”.

Nearly as reliable as human pilots
In the future, the team aims to test this system on an even more agile quadrotor. “Our ultimate goal is to one day make autonomous drones navigate as well as human drone pilots. Currently, in all search and rescue applications where drones are involved, the human is actually in control. If we could have autonomous drones navigate as reliably as human pilots, we would then be able to use them for missions that fall beyond the line of sight or beyond the reach of the remote control”, says Davide Falanga, the PhD student who is the primary author of the article.

An origami robot for touching virtual reality objects

A group of EPFL researchers have developed a foldable device that can fit in a pocket and can transmit touch stimuli when used in a human-machine interface.

When browsing an e-commerce site on your smartphone, or a music streaming service on your laptop, you can see pictures and hear sound snippets of what you are going to buy. But sometimes it would be great to touch it too – for example to feel the texture of a garment, or the stiffness of a material. The problem is that there are no miniaturized devices that can render touch sensations the way screens and loudspeakers render sight and sound, and that can easily be coupled to a computer or a mobile device.

Researchers in Professor Jamie Paik’s lab at EPFL have made a step towards creating just that: a foldable device that can fit in a pocket and can transmit touch stimuli when used in a human-machine interface. Called Foldaway, this miniature robot is based on origami robotics technology, which makes it easy to miniaturize and manufacture. Because it starts off as a flat structure, it can be printed with a technique similar to the one employed for electronic circuits, and can be easily stored and transported. At the time of deployment, the flat structure folds along a pre-defined pattern of joints to take the desired 3D, button-like shape. The device includes three actuators that generate movements, forces and vibrations in various directions; a moving origami mechanism on the tip that transmits sensations to the user’s finger; sensors that track the movements of the finger; and electronics to control the whole system. This way the device can render different touch sensations that reproduce the physical interaction with objects or forms.

The Foldaway device, which is described in an article in the December issue of Nature Machine Intelligence and featured on the journal’s cover, comes in two versions, called Delta and Pushbutton. “The first one is more suitable for applications that require large movements of the user’s finger as input”, says Stefano Mintchev, a member of Jamie Paik’s lab and co-author of the paper. “The second one is smaller, pushing portability even further without sacrificing the force sensations transmitted to the user’s finger”.

Education, virtual reality and drone control
The researchers have tested their devices in three situations. In an educational context, they have shown that a portable interface, measuring less than 10 cm in length and width and 3 cm in height, can be used to navigate an atlas of human anatomy. The Foldaway device gives the user different sensations as the finger passes over various organs: the different stiffness of soft lungs and hard bones at the rib cage; the up-and-down movement of the heartbeat; sharp variations of stiffness on the trachea.

As a virtual reality joystick, the Foldaway can give the user the sensation of grasping virtual objects and perceiving their stiffness, modulating the force generated when the interface is pressed.
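Stiffness rendering of this kind can be sketched as a simple spring law with a per-region stiffness. The region names and stiffness values below are made up for illustration and are not taken from the paper:

```python
# Hypothetical per-region stiffness values in N/m (illustrative only)
STIFFNESS = {"lung": 50.0, "rib": 400.0, "trachea": 150.0}

def render_force(region, displacement_m):
    """Spring-law feedback force pushed back at the finger: F = k * x,
    where k depends on what the finger is currently 'touching'."""
    return STIFFNESS[region] * displacement_m

f_lung = render_force("lung", 0.002)  # 2 mm press on soft tissue
f_rib = render_force("rib", 0.002)    # same press on bone feels much stiffer
```

The same button thus feels soft or hard depending only on the commanded stiffness, which is how one actuator can render lungs, ribs and trachea in the anatomy example above.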


As a control interface for drones, the device can help solve the sensory mismatch created when users control the drone with their hands but can perceive the response of the drone only through visual feedback. Two Pushbuttons can be combined, and their rotation can be mapped into commands for altitude, lateral and forward/backward movements. The interface also provides force feedback to the user’s fingertips in order to increase the pilot’s awareness of the drone’s behaviour and of the effects of wind or other environmental factors.

The Foldaway device was developed by the Reconfigurable Robotics Lab at EPFL and is currently being commercialised by a spin-off (called FOLDAWAY Haptics) supported by NCCR Robotics’ Spin Fund grant.

“Now that computing devices and robots are more ubiquitous than ever, the quest for better human-machine interactions is growing rapidly”, adds Marco Salerno, a member of Jamie Paik’s lab and co-author of the paper. “The miniaturization offered by origami robots can finally allow the integration of rich touch feedback into everyday interfaces, from smartphones to joysticks, or the development of completely new ones such as interactive morphing surfaces”.

Literature
Mintchev, S., Salerno, M., Cherpillod, A. et al., “A portable three-degrees-of-freedom force feedback origami robot for human–robot interactions“, Nature Machine Intelligence 1, 584–593 (2019) doi:10.1038/s42256-019-0125-1

A miniature stretchable pump for the next generation of soft robots

By Laure-Anne Pessina and Nicola Nosengo
Scientists at EPFL have developed a tiny pump that could play a big role in the development of autonomous soft robots, lightweight exoskeletons and smart clothing. Flexible, silent and weighing only one gram, it is poised to replace the rigid, noisy and bulky pumps currently used. The scientists’ work has just been published in Nature.

Soft robots have a distinct advantage over their rigid forebears: they can adapt to complex environments, handle fragile objects and interact safely with humans. Made from silicone, rubber or other stretchable polymers, they are ideal for use in rehabilitation exoskeletons – such as the ones being developed in the NCCR Robotics “Wearable Robotics” research line – and robotic clothing. Soft bio-inspired robots could one day be deployed to explore remote or dangerous environments.

Most soft robots are actuated by rigid, noisy pumps that push fluids into the machines’ moving parts. Because they are connected to these bulky pumps by tubes, these robots have limited autonomy and are cumbersome to wear at best.

Cutting soft robots’ tether
Researchers in EPFL’s Soft Transducers Laboratory (LMTS) and Laboratory of Intelligent Systems (LIS – led by NCCR Robotics Director Dario Floreano), in collaboration with researchers at the Shibaura Institute of Technology in Tokyo, Japan, have developed the first entirely soft pump – even the electrodes are flexible. Weighing just one gram, the pump is completely silent and consumes very little power, which it gets from a 2 cm by 2 cm circuit that includes a rechargeable battery. “If we want to actuate larger robots, we connect several pumps together,” says Herbert Shea, the director of the LMTS.
This innovative pump could rid soft robots of their tethers. “We consider this a paradigm shift in the field of soft robotics,” adds Shea.

Soft pumps can also be used to circulate liquids in thin flexible tubes embedded in smart clothing, leading to garments that can actively cool or heat different regions of the body. That would meet the needs of surgeons, athletes and pilots, for example.

How does it work?
The soft and stretchable pump is based on the physical mechanism used today to circulate the cooling liquid in systems like supercomputers. The pump has a tube-shaped channel, 1mm in diameter, inside of which rows of electrodes are printed. The pump is filled with a dielectric liquid. When a voltage is applied, electrons jump from the electrodes to the liquid, giving some of the molecules an electrical charge. These molecules are subsequently attracted to other electrodes, pulling along the rest of the fluid through the tube with them. “We can speed up the flow by adjusting the electric field, yet it remains completely silent,” says Vito Cacucciolo, a post-doc at the LMTS and the lead author of the study.

Developing artificial muscles in Japan
The researchers have successfully implanted their pump in a type of robotic finger widely used in soft robotics labs. They are now collaborating with Koichi Suzumori’s laboratory in Japan, which is developing fluid-driven artificial muscles and flexible exoskeletons.

The EPFL team has also fitted a fabric glove with tubes and shown that it is possible to heat or cool regions of the glove as desired using the pump. “It works a little like your home heating and cooling system”, says Cacucciolo. This application has already sparked interest from a number of companies.

Literature
V. Cacucciolo, J. Shintake, Y. Kuwajima, S. Maeda, D. Floreano, H. Shea, “Stretchable pumps for soft machines”, Nature, vol. 572, no 7769, Aug. 2019.
doi: 10.1038/s41586-019-1479-6

The future of rescue robotics

By Nicola Nosengo

Current research is aligned with the needs of rescue workers, but robustness and ease of use remain significant barriers to adoption, NCCR Robotics researchers find after reviewing the field and consulting with field operators.

Robots for search and rescue are developing at an impressive pace, but they must become more robust and easier to use in order to be widely adopted, and researchers in the field must devote more effort to these aspects in the future. This is one of the main findings by a group of NCCR Robotics researchers who focus on search-and-rescue applications. After reviewing the recent developments in technology and interviewing rescue workers, they have found that the work by the robotics research community is well aligned with the needs of those who work in the field. Consequently, although current adoption of state-of-the-art robotics in disaster response is still limited, it is expected to grow quickly in the future. However, more work is needed from the research community to overcome some key barriers to adoption.
The analysis is the result of a group effort from researchers who participate in the Rescue Robotics Grand Challenge, one of the main research units of NCCR Robotics, and has been published in the Journal of Field Robotics.

With this paper, the researchers wanted to take stock of the current state-of-the-art of research on rescue robotics, and in particular of the advancements published between 2014 and 2018, a period that had not yet been covered by previous scientific reviews.

“Although previous surveys were only a few years old, the rapid pace of development in robotics research and robotic deployment in search and rescue means that the state-of-the-art is already very different from these earlier assessments”, says Jeff Delmerico, first author of the paper and formerly an NCCR Robotics member in Davide Scaramuzza’s Robotics and Perception Group at the University of Zurich. “More importantly, rather than just documenting the current state of the research, or the history of robot deployments after real-world disasters, we were trying to analyze what is missing from the output of the research community in order to target real-world needs and provide the maximum benefit to actual rescuers”.

The paper offers a comprehensive review of research on legged, wheeled, flying and amphibious robots, as well as of perception, control systems and human-robot interfaces. Among many recent advancements, it highlights how learning algorithms and modular designs have been applied to legged robots to make them capable of adapting to different missions and of being resilient to damage; how wheeled and tracked robots are being tested in new applications such as telepresence for interacting with victims or for remote firefighting; how a number of strategies are being investigated for making drones more easily transportable and able to change locomotion mode when necessary. It details advancements in using cameras for localization and mapping in areas not covered by GPS signals, and new strategies for human-robot interaction that allow users to pilot a drone by pointing a finger or with the movements of the torso.
In order to confirm whether these research directions are aligned with the demands coming from the field, the study authors have interviewed seven rescue experts from key agencies in the USA, Italy, Switzerland, Japan and the Netherlands. The interviews have revealed that the key factors guiding adoption decisions are robustness and ease of use.

“Disaster response workers are reluctant to adopt new technologies unless they can really depend on them in critical situations, and even then, these tools need to add new capabilities or outperform humans at the same task,” Delmerico explains. “So the bar is very high for technology before it will be deployed in a real disaster. Our goal with this article was to understand the gap between where our research is now and where it needs to be to reach that bar. This is critical to understand in order to work towards the technologies that rescue workers actually need, and to ensure that new developments from our NCCR labs, and the robotics research community in general, can move quickly out of the lab and into the hands of rescue professionals.”

Robotic research platforms have features that are absent from commercial platforms and highly appreciated by rescue workers, such as the ability to generate 3D maps of a disaster scene. Academic efforts to develop new human-robot interfaces that reduce the operator’s attention load are also consistent with the needs of stakeholders. Another key finding is that field operators see robotic systems as tools to support and enhance their performance rather than as autonomous systems to replace them. Finally, an important aspect of existing research work is the emphasis on human-robot teams, which meets the desire of stakeholders to maintain a human in the loop during deployments in situations where priorities may change quickly.

On the critical side, though, concerns about robustness keep rescuers from adopting technologies that are “hot” among researchers but not yet considered reliable enough, such as artificial intelligence. Similarly, the development of integrated, centrally organized robot teams is interesting for researchers, but not so much for SAR personnel, who prefer individual systems that can more easily be deployed independently of each other.

NCCR Robotics researchers note that efforts to develop systems robust and capable enough for real-world rescue scenarios have so far been insufficient. “While it is unrealistic to expect robotic systems with a high technology readiness level to come directly from the academic domain without involvement from other organizations,” they write, “more emphasis on robustness during the research phase may accelerate the process of reaching a high level for use in deployment.” Ease of use, endurance, and the capability to collect and quickly transmit data to rescuers are also important barriers to adoption that the research community must address in the future.

Literature
J. Delmerico, S. Mintchev, A. Giusti, B. Gromov, K. Melo, T. Horvat, C. Cadena, M. Hutter, A. Ijspeert, D. Floreano, L. M. Gambardella, R. Siegwart, D. Scaramuzza, “The current state and future outlook of rescue robotics”, Journal of Field Robotics, DOI: 10.1002/rob.21887
