A Guide to Lasers for Robots (Part 2)
What Are Autonomous Robots, and Why Should We Care?
Cheetah III robot preps for a role as a first responder
Arizona bans Uber self-driving cars
The governor of Arizona has told Uber to “get an Uber” and stop testing in the state, with no indication of how it might be allowed back.
Unlike the early positive statements from Tempe police, this letter is harsh and to the point. It’s even more bad news for Uber, and the bad news is not over. Uber has not released any log data that makes them look better; the longer they take to do that, the more it seems that the data don’t tell a good story for them.
In other news, both Mobileye and Velodyne have issued releases saying their systems would have seen the pedestrian. Waymo has said the same, and I believe that all of them are correct. Waymo has a big press event scheduled for this week in New York, rumoured to announce some new shuttle operations there. I wonder how much consideration they gave to delaying it, because in spite of their superior performance, a lot of the questions they will get at the press conference won’t be about their new project.
There are more signs that Uber’s self-driving project may receive the “death penalty,” or at the very least a very long and major setback, in a field where Uber thought “second place is first loser,” to quote Anthony Levandowski.
Sherlock Drones – automated investigators tackle toxic crime scenes
by Anthony King
Crimes that involve chemical, biological, radiological or nuclear (CBRN) materials pose a deadly threat not just to the target of the attack but to innocent bystanders and police investigators. These crimes often involve unusual circumstances or terrorist-related incidents, such as an assassination attempt or poisons sent through the mail.
In the recent notorious case of poisoning in the UK city of Salisbury in March 2018, a number of first responders and innocent bystanders were treated in hospital after two victims of chemical poisoning were found unconscious on a park bench. One policeman who attended the scene became critically ill after apparent exposure to a suspected chemical weapon, said to be a nerve agent called Novichok. Police said a total of 21 people required medical care after the incident.
Past examples of rare but toxic materials at crime scenes include the 2001 anthrax letter attacks in the US and the Tokyo subway sarin gas attack in 1995. Following the radioactive poisoning of the former Russian spy Alexander Litvinenko in London, UK, in 2006, investigators detected traces of the toxic radioactive material polonium in many locations around the city.
Despite these dangers, crime scene investigators must begin their forensic investigations immediately. European scientists are developing robot and remote-sensing technology to provide safe ways to assess crime or disaster scenes and begin gathering forensic evidence.
Harm’s way
‘We will send robots into harm’s way instead of humans,’ explained Professor Michael Madden at the National University of Ireland Galway, who coordinates a research project called ROCSAFE. ‘The goal is to improve the safety of crime scene investigators.’
The ROCSAFE project, which ends in 2019, will deploy remote-controlled aerial and ground-based drones equipped with sensors to assess the scene of a CBRN event without exposing investigators to risk. This will help to determine the nature of the threat and gather forensics.
In the first phase of a response, a swarm of drones with cameras will fly into an area to allow investigators to view it remotely. Rugged sensors on the drones will check for potential CBRN hazards. In the second phase, ground-based robots will roll in to collect evidence, such as fingerprint or DNA samples.
The ROCSAFE aerial drone could assess crime or disaster scenes such as that of a derailed train carrying radioactive material. It will deploy sensors for immediate detection of radiation or toxic agents and collect air samples to test later in a laboratory. Meanwhile, a miniature lab-on-a-chip on the drone will screen collected samples for the presence of viruses or bacteria, for instance.
An incident command centre usually receives a huge volume of information in a short space of time – including real-time video and images from the scene. Commanders need to quickly process a lot of confusing information in an extreme situation, and so ROCSAFE is also developing smart software to lend a hand.
Rare events
‘These are rare events. This is nobody’s everyday job,’ said Prof. Madden. ‘We want to use artificial intelligence and probabilistic reasoning to reduce cognitive load and draw attention to things that might be of interest.’
As an example, image analysis software might flag an area with damaged vegetation, suggest a possible chemical spill and suggest that samples be taken. Information such as this could be presented to the commander on a screen in the form of a clickable map, in a way that makes their job easier.
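To illustrate the kind of probabilistic reasoning Prof. Madden describes, here is a minimal sketch (my illustration, not ROCSAFE’s actual software) of how Bayes’ rule could turn an image-analysis detection of damaged vegetation into a spill probability worth flagging on the commander’s map. All the numbers are invented for the example.

```python
# Illustrative sketch only -- not ROCSAFE code; all numbers are invented.

def posterior(prior, p_detect_given_event, p_detect_given_no_event):
    """Bayes' rule: P(event | detection) from a prior and detector rates."""
    p_detection = (p_detect_given_event * prior
                   + p_detect_given_no_event * (1.0 - prior))
    return p_detect_given_event * prior / p_detection

# Hypothetical values: a 2% prior that a given map cell hides a spill, a
# detector that flags damaged vegetation over 90% of real spills and over
# 5% of healthy ground (false positives).
p_spill = posterior(prior=0.02,
                    p_detect_given_event=0.90,
                    p_detect_given_no_event=0.05)
print(f"P(spill | vegetation flagged) = {p_spill:.2f}")  # about 0.27
```

Scores like this could decide which cells of the clickable map get highlighted and which suggested sampling actions are surfaced first, which is exactly the cognitive-load reduction the project aims for.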
Sometimes, the vital evidence itself could be contaminated. ‘There may be some physical evidence we need to collect – a gun or partly exploded material or a liquid sample,’ said Prof. Madden. ‘The robot will pick up, tag and bag the evidence, all in a way that will stand up in court.’ The researchers are constructing a prototype six-wheeled robot for this task that is about 1.5 metres long and can handle rough terrain.
Helping forensic teams to deal with hazardous evidence is a new forensic toolbox called GIFT CBRN. The GIFT researchers have devised standard operating procedures on how to handle, package and analyse toxins such as ricin, which is deadly even in minute amounts.
When the anthrax powder attacks took place in the US in 2001, the initial response of the security services was slow, partly because of the unprecedented situation. GIFT scientists have drawn up ‘how to’ guides for investigators, so they can act quickly in response to an incident.
‘We want to find the bad guys quickly so we can stop them and arrest those involved,’ said Ed van Zalen at the Netherlands Forensic Institute, who coordinated the GIFT project.
Nerve agents
As well as containment, GIFT devised sensing technology such as a battery-powered boxed device that can be brought to a crime scene to identify nerve agents like sarin within an hour or two. This uses electrophoresis, a chemical technique that identifies charged molecules by applying an electric field and analysing their movements. Usually, samples must be collected and returned to the lab for identification, which takes far longer.
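As a rough sketch of the physics behind that technique (standard electrophoresis theory, not a description of the GIFT device’s internals): a molecule’s drift speed depends on its charge-to-size ratio, so different species separate measurably.

```latex
% Textbook electrophoresis relation (not the GIFT device's model):
% a molecule of charge q and effective radius r, in a medium of
% viscosity \eta, drifts under an applied field E at terminal velocity
\[
  v = \mu E, \qquad \mu \approx \frac{q}{6\pi\eta r},
\]
% so species with different charge-to-size ratios migrate at different
% speeds, and their migration patterns identify them.
```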
Additionally, they developed a camera to detect radioactive material that emits potentially damaging radiation called alpha particles. This form of radiation is extremely difficult to detect; even a Geiger counter – used to detect most radiation – cannot pick it up. Polonium, the substance used to murder Litvinenko, emits alpha particles that cause radiation poisoning, but because it was a novel attack, the initial failure to detect it slowed the police investigation.
‘That detector would have been very helpful at the time of the Litvinenko case,’ said van Zalen.
The research in this article is funded by the EU.
Almost everything that went wrong in the Uber fatality is both terrible and expected
Today I’m going to examine how you attain safety in a robocar, and outline a contradiction in the things that went wrong for Uber and their victim. Each thing that went wrong is both important and worthy of discussion, but at the same time unimportant. Almost everything that went wrong is something we want to prevent going wrong, but it is also something we must expect to go wrong sometimes, and plan for.
In particular, I want to consider how things operate in spite of the fact that people will jaywalk, illegal or not, car systems will suffer failures and safety drivers will sometimes not be looking.
What’s new
First, an update on developments.
Uber has said it is cooperating fully, but we certainly haven’t heard anything more from them, or from the police. That’s because:
- Police have indicated that the accident has been referred for criminal investigation, and the NTSB is also present.
- The family (only a stepdaughter is identified) have retained counsel, and are demanding charges and considering legal action.
A new story in the New York Times is more damning for Uber. There we learn:
- Uber’s performance has been substandard in Arizona. They were needing an intervention every 13 miles of driving on average; other top companies like Waymo go many thousands of miles between interventions.
- Uber just recently switched to having one safety driver instead of two, though it was still using two in the more difficult test situations. Almost all companies use two safety drivers, though Waymo has operations with just one, and quite famously, zero.
- Uber has safety drivers use a tablet to log interventions and other data, and there are reports of safety drivers doing this while in self-drive mode. Wow.
- Uber’s demos of this car have shown the typical “what the car sees” view to both passengers and the software operator. It shows a good 360-degree view as you would expect from this sort of sensor suite, including good renderings of pedestrians on the sidewalk. Look at this YouTube example from two years ago.
- Given that when operating normally, Uber’s system has the expected pedestrian detection, it is probably not the case that Uber’s car just doesn’t handle this fairly basic situation. We need to learn what specifically went wrong.
- If Uber did have a general pedestrian detection failure at night, there should have been near misses long before this impact. Near misses are viewed as catastrophic by most teams, and get immediate attention, to avoid situations like this fatality.
- If equipment such as LIDARs or cameras or radars were to fail, this would generally be obvious to the software system, which should normally cause an immediate alert asking the safety driver to take over. In addition, most designs have some redundancies, so they can still function at some lower level with a failed sensor while they wait for the safety driver — or even attempt to get off the road.
- The NYT report indicates that new CEO Dara Khosrowshahi had been considering cancelling the whole project, and a big demo for him was pending. There was pressure on to make the demo look good.
In time, either in this investigation or via lawsuits, we should see:
- Logs of raw sensor data from the LIDAR, radar and camera arrays, along with tools to help laypeople interpret this data.
- If those logs clearly show the victim well in advance of impact, why did the vehicle not react? What sort of failure was it?
- Why was the safety driver staring at something down and to the right for so long? Why was there only one safety driver on this road at night at 40mph?
That the victim was jaywalking is both important and unimportant
The law seems to be clear that the Uber vehicle had the right of way. The victim was tragically unwise to cross there without looking. The vehicle code may find no fault with Uber. In addition, as I will detail later, crosswalk rules exist for a reason, and both human drivers and robocars will treat crosswalks differently from non-crosswalks.
Even so, people will jaywalk, and robocars need to be able to handle that. Nobody can handle somebody leaping quickly off the sidewalk into your lane, but a person crossing 3.5 lanes of open road is something even the most basic cars should be able to handle, and all cars should be able to perceive and stop for a pedestrian standing in their lane on a straight non-freeway road. (More on this in a future article.)
The law says this as well. While the car has right of way, the law still puts a duty on the driver to do what they reasonably can to avoid hitting a jaywalker in the middle of the road.
That Uber’s system failed to detect the pedestrian is both important and unimportant
We are of course very concerned as to why the system failed. In particular, this sort of detect-and-stop is a very basic level of operation, expected of even the most simple early prototypes, and certainly of a vehicle from a well-funded team that’s logged a million miles.
At the same time, cars must be expected to have failures, even failures as bad as this. In the early days of robocars, even at the best teams, major system failures happened. I’ve been in cars that suddenly tried to drive off the road. It happens, and you have to plan for it. The main fallback is the safety driver, though now that the industry is slightly more mature, it is also possible to use simpler automated systems (like ADAS “forward collision warning” and “lanekeeping” tools) to also guard against major failures.
We’re going to be very hard on Uber, and with justification, for having such a basic failure. “Spot a pedestrian in front of you and stop” has been moving into the “solved problem” category, particularly if you have a high-end LIDAR. But we should not forget there are lots of other things that can, and do, go wrong that are far from solved, and we must expect them to happen. These are prototypes. They are on the public roads because we know no other way to make them better, to find and solve these problems.
That the safety driver wasn’t looking is both important and unimportant
She clearly was not doing her job. The accident would have been avoided if she had been vigilant. But we must understand that safety drivers will sometimes look away, and miss things, and make mistakes.
That’s true for all of us when we drive, with our own life and others at stake. Many of us do crazy things like send texts, but even the most diligent are sometimes not paying enough attention for short periods. We adjust controls, we look at passengers, we look behind us and (as we should) check blindspots. Yet the single largest cause of accidents is “not paying attention.” What that really means is that two things went wrong at once — something bad happened while we were looking somewhere else. For us the probability of an accident is highly related to the product of those two probabilities.
The same is true for robocars with safety drivers. The cars will make mistakes. Sometimes the driver will not catch it. When both happen, an accident is possible. If the total probability of that is within the acceptable range (which is to say, the range for good human drivers) then testing is not putting the public at any extraordinary risk.
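As a back-of-the-envelope sketch of that product-of-probabilities argument, here is the shape of the arithmetic. Every number below is invented for illustration; only the 13-mile intervention spacing comes from the reporting above.

```python
# Back-of-the-envelope only: the rates below are assumptions made up to
# show the structure of the argument, not to describe any real program.

miles_per_failure = 13         # reported Uber intervention spacing
p_failure_dangerous = 0.05     # assumed: fraction of failures that risk a crash
p_driver_not_looking = 0.10    # assumed: chance the driver misses that moment

# An accident needs a dangerous failure AND an inattentive driver at once,
# so the two probabilities multiply.
events_per_mile = (1 / miles_per_failure) * p_failure_dangerous * p_driver_not_looking
print(f"~1 dangerous uncaught event every {1 / events_per_mile:,.0f} miles")
```

With these made-up numbers, that works out to roughly one uncaught dangerous event every 2,600 miles, far worse than good human drivers, who go on the order of hundreds of thousands of miles between crashes. The point is that the factors multiply: improving either the car’s reliability or the driver’s vigilance moves the total risk.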
This means a team should properly have a sense of the capabilities of its car. If it needs interventions very frequently, as Uber reportedly did, it needs highly reliable safety driving. In most cases, the answer is to have two safety drivers: two sets of eyes potentially able to spot problems. Or really about 1.3 sets of eyes, because the second operator on most teams, including Uber, is mostly looking at a screen and only sometimes at the road. Still, that is better than just one pair.
At the same time, since the goal is to get to zero safety drivers, it is not inherently wrong to just have one. There has to be a point where a project graduates to needing only one. Uber’s fault is, possibly, graduating far, far too soon.
On top of all this, safety drivers, if the company is not careful, are probably more likely to fatigue and look away from the road than ordinary drivers in their own cars. After all, looking away is actually safer in a self-driving test car than it is in your own car. Tesla autopilot owners are also notoriously bad at this. Perversely, the lower the intervention rate, the more likely it is people will get tempted. Companies have to combat this.
If you’re a developer trying out some brand new and untrusted software, you safety drive with great care. You keep your hands near the wheel. Your feet near the pedals. Your eyes on the lookout. You don’t do it for very long, and you are “rewarded” by having to do an intervention often enough that you never tire. To consider the extreme view of that, think about driving with adaptive cruise control. You still have to steer, so there’s no way you take your eyes off the road, even though your feet can probably relax.
Once your system gets to a high level (like Tesla’s autopilot in simple situations or Waymo’s car) you need to find other ways to maintain that vigilance. Some options include gaze-tracking systems that make sure eyes are on the road. I have also suggested that systems routinely simulate a failure by drifting out of their lane when it is safe to do so, correcting before it gets dangerous if for some reason the safety driver does not intervene. A safety driver who is grabbing the wheel 3 times an hour and being scored on it is much less likely to miss the one time a week they actually have to grab it for real.
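A minimal sketch of that simulated-failure drill, as I imagine it (my illustration, not any company’s implementation; all callbacks and thresholds are hypothetical):

```python
import random

def vigilance_drill(safe_to_drift, driver_corrected, auto_correct, alert,
                    drill_probability=0.01):
    """Occasionally inject a gentle, recoverable lane drift to keep the
    safety driver engaged.  Called once per control cycle."""
    if not safe_to_drift() or random.random() > drill_probability:
        return None
    # Begin a slow drift and give the driver a bounded window to respond.
    if driver_corrected(timeout_s=2.0):
        return "pass"          # log a scored catch
    auto_correct()             # never let the drill become dangerous
    alert("Missed vigilance drill -- consider a break")
    return "fail"

# Demo with stand-in callbacks:
result = vigilance_drill(
    safe_to_drift=lambda: True,
    driver_corrected=lambda timeout_s: random.random() < 0.9,
    auto_correct=lambda: None,
    alert=print,
    drill_probability=1.0,     # force a drill for the demo
)
print("drill result:", result)
```

Scoring drivers on these drills gives the frequent, measurable feedback loop the paragraph above argues for.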
That it didn’t brake at all — that’s just not acceptable
While we don’t have final confirmation, reports suggest the vehicle did not slow at all. Even if study of the accident reveals a valid reason for not detecting the victim 1.4 seconds out (as needed to fully stop), there are just too many different technologies that are all, independently, able to detect her at a shorter distance, which should have at least triggered some braking and reduced severity.
The key word is independently. As explained above, failures happen. A proper system is designed to still do the best it can in the event of failures of independent components. Failure of the entire system should be extremely unlikely, because the entire system should not be a monolith. Even if the main perception system of the car fails for some reason (as may have happened here), that should result in alarm bells going off to alert the safety driver, and it should also result in independent safety systems kicking in to fire those alarms or even hit the brakes. The Volvo comes with such a system, but that system was presumably disabled. Where possible, a system like that should be enabled, but used only to beep warnings at the safety driver. There should be a “reptile brain” at the low level of the car which, in the event of complete failure of all high-level systems, knows enough to look at raw radar, LIDAR or camera data and sound alarms or trigger braking if the main system can’t.
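To make the “reptile brain” idea concrete, here is a hedged sketch of an independent low-level watchdog. It is an illustration of the design pattern only, not Uber’s or Volvo’s system; the thresholds and interfaces are invented.

```python
import time

class LowLevelWatchdog:
    """Independent fallback: if the main driving stack stops publishing
    heartbeats, or raw range data shows an obstacle closing fast, sound
    alarms and command braking without waiting for the main system."""

    def __init__(self, heartbeat_timeout_s=0.5, min_time_to_collision_s=1.5):
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self.min_ttc_s = min_time_to_collision_s
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        """Called by the main system every cycle while it is healthy."""
        self.last_heartbeat = time.monotonic()

    def check(self, range_m, closing_speed_mps, sound_alarm, apply_brakes):
        stale = time.monotonic() - self.last_heartbeat > self.heartbeat_timeout_s
        ttc = range_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")
        if stale:
            sound_alarm("Main system heartbeat lost -- driver take over")
        if ttc < self.min_ttc_s:
            sound_alarm("Obstacle closing fast")
            apply_brakes()

# Demo: an obstacle 20 m ahead closing at ~40 mph trips the fallback.
wd = LowLevelWatchdog()
wd.check(range_m=20.0, closing_speed_mps=17.0,
         sound_alarm=print, apply_brakes=lambda: print("BRAKING"))
```

The essential design property is that this component shares nothing with the main perception stack, so a single software fault cannot silence both.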
All the classes of individual failures that happened to Uber could happen to a more sophisticated team in some fashion. In extreme bad luck they could even happen all at once. The system should be designed to make it very unlikely that they all happen at once, so that the probability of that is less than the probability of a human having a crash.
More to come
So much to write here, so in the future look for thoughts on:
- How humans and robocars will act differently when approaching a crosswalk and not approaching one, and why
- More about how many safety drivers you have
- What are reasonable “good practices” that any robocar should have, and what are exceptions to them
- How do we deal with the fact that we probably have to overdrive our sensor range on high speed highways, as humans usually do?
- More about interventions and how often they happen and why they happen
- Does this mean the government should step in to stop bad actors like Uber? Or will the existing law (vehicle code, criminal law and tort law) punish them so severely — possibly with a “death penalty” for their project — that we can feel it’s working?
Quiet inroads in robotics: the Vecna story
Robotics is undergoing fundamental change in three core areas: collaboration, autonomous mobility and increasing intelligence.
Autonomous mobility technology is entering the industrial vehicle marketplace of AGVs, forklifts and tugs with new products, better navigation technologies and lower costs.
Forecasters Grand View Research and IDTechEx suggest that autonomous forklifts and tugs will emerge as the standard from 2022/2023 onwards, ultimately growing to represent 70% of annual mobile material handling equipment by 2037. The key to this transformation is unmanned mobile autonomy. These new mobile autonomous robots can achieve higher productivity and cost efficiencies because the technology greatly reduces driver labor costs, increases safety, and lowers insurance rates and spoilage.
The Vecna Story
Cambridge, MA-based Vecna Technologies, founded in 1998 by a group of MIT scientists on a $5,000 shoe-string investment from the founders, has self-funded itself into a profitable manufacturer, researcher and software firm serving the healthcare, logistics and remote-presence marketplaces. It has amassed more than a hundred issued and pending patents and employs more than 200 people.
Earlier this year Vecna Technologies spun off 60 employees and the robotics business to found and operate Vecna Robotics, which works with a large number of partners and contractors. The new entity’s primary applications are mixed fleets of mobile robotic solutions for:
- Goods to person
- Receiving to warehouse
- Production cell to cell
- Point to point gofering
- Zone picking transport
- Tote and case picking transport
Vecna already has a broad range of products serving these applications: from the tuggers used at FedEx to the RC20, the lowest cost-per-performance mobile robot on the market, with several models in between. Thousands of Vecna robots are deployed worldwide: (1) in major manufacturing facilities doing line-side replenishment; (2) in major shipping companies moving non-conveyables and automating indoor and outdoor tuggers and lifts; and (3) in major 3PLs and retailers doing order-fulfillment transport, both for store replenishment and for e-commerce.
A recent NY Times story exemplifies how these new Vecna Robotics autonomous mobile robots are impacting the world of material handling. In this case, Vecna robots are used by FedEx to handle large items that don’t fit on conveyor belts.
“When a truck filled with packages arrives, workers load the bulky items onto trailers hitched to a robot. Once these trailers are full, they press a button that sends the vehicle on its way. Equipped with laser-based sensors, cameras and other navigation tools, the robots stop when people or other vehicles get in the way. In some cases, they even figure out a new way to go.”
Vecna robots have vision systems that allow them to navigate safely around humans so that they can share common paths. They also have Autonomy Kit, a general-purpose robot brain that can turn any piece of equipment into a safe and efficient mobile robot. Everything from large earth-moving and construction equipment to forklifts, tuggers, floor cleaners, and even small order-fulfillment and each-picking systems can easily be automated to operate in collaborative, human-filled environments. Further, all Vecna systems are directed by a smart centralized controller for optimization, traffic control and service. Because Vecna Robotics is finding so much demand (and success) in this sector, it is considering bringing in outside money to fund a more rapid expansion into the marketplace.
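To give a flavour of what a centralized fleet controller does at its simplest, here is a toy nearest-idle-robot dispatcher. This is purely my illustration of the concept, not Vecna’s software or API; real controllers also handle traffic rules, charging and re-optimization.

```python
# Toy dispatcher: assign each transport task to the nearest idle robot.
# Illustrative only -- not Vecna's actual controller.

def dispatch(tasks, robots):
    """tasks: list of (task_id, x, y); robots: dict robot_id -> (x, y).
    Returns {task_id: robot_id}, greedily matching the nearest idle robot."""
    idle = dict(robots)
    assignments = {}
    for task_id, tx, ty in tasks:
        if not idle:
            break  # remaining tasks wait for the next planning cycle
        nearest = min(idle, key=lambda r: (idle[r][0] - tx) ** 2
                                          + (idle[r][1] - ty) ** 2)
        assignments[task_id] = nearest
        del idle[nearest]       # that robot is now busy
    return assignments

print(dispatch(tasks=[("tote-1", 0, 5), ("pallet-2", 9, 9)],
               robots={"rc20-a": (1, 4), "tugger-b": (8, 8)}))
```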
Meanwhile, Vecna Technologies, sans the robotics group, remains a leader in healthcare information technology providing patient portals, payment solutions, kiosks, mobile apps, telepresence and medical logistics, and “will continue to innovate and accelerate cutting edge solutions to our customers in the commercial and government healthcare markets,” says Vecna CTO Daniel Theobald.
Marketplace full of competitors, many from China
As competitors sense the growing demand from distribution and fulfillment center executives in need of solutions to pick, pack and ship more parcels quickly, there are many startups and companies inventing or modifying their products to solve those problems and take advantage of the demand.
There is also increasing demand from factory managers who need flexibility to move goods within their facilities that cannot be handled economically by human workers or fixed conveyor systems.
Both markets are growing exponentially, and many players are competing in the field. Further, the market is also fueled by approved investment priorities in capital purchases that were put off during and after the financial crisis of 2008-09, as shown in VDC Research’s survey of manufacturing executives about their capital purchasing plans for 2018-2020.
Vecna responded to those demands years ago when it began developing and expanding its line of robots and accompanying software. The refocusing that went into spinning off Vecna Robotics will help enable Vecna to continue to be a big, innovative and progressive player in the mobile robotics market.
Robotic collaboration in timber construction
NCCR researchers are using a new method for digital timber construction in a real project for the first time. The load-bearing timber modules, which are prefabricated by robots, will be assembled on the top two floors at the DFAB HOUSE construction site.
Digitalisation has found its way into timber construction, with entire elements already being fabricated by computer-aided systems. The raw material is cut to size by the machines, but in most cases it still has to be manually assembled to create a plane frame. In the past, this fabrication process came with many geometric restrictions.
Under the auspices of the National Centre of Competence in Research (NCCR) Digital Fabrication, researchers from ETH Zurich’s Chair of Architecture and Digital Fabrication have developed a new, digital timber construction method that expands the range of possibilities for traditional timber frame construction by enabling the efficient construction and assembly of geometrically complex timber modules. Spatial Timber Assemblies evolved from a close collaboration with Erne AG Holzbau and will be used for the first time in the DFAB HOUSE project at the Empa and Eawag NEST building in Dübendorf.
With robotic precision
The robot first takes a timber beam and guides it while it is sawed to size. After an automatic tool change, a second robot drills the required holes for connecting the beams. In the final step, the two robots work together and position the beams in the precise spatial arrangement based on the computer layout. To prevent collisions when positioning the individual timber beams, the researchers have developed an algorithm that constantly recalculates the path of motion for the robots according to the current state of construction. Workers then manually bolt the beams together.
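The constant path recalculation described above can be sketched in simplified form (my simplification of the idea, not the ETH algorithm; the planner and executor callbacks are hypothetical):

```python
def assemble(beams, plan_collision_free_path, execute):
    """Place beams one by one; each placed beam joins the obstacle set,
    so every subsequent path is planned against the current state of
    construction -- the 'constant recalculation' in the article."""
    placed = []
    for beam in beams:
        path = plan_collision_free_path(target=beam, obstacles=placed)
        execute(path)
        placed.append(beam)     # the placed beam becomes an obstacle
    return placed

# Stand-in demo: "paths" are just labels here.
assemble(beams=["beam-1", "beam-2", "beam-3"],
         plan_collision_free_path=lambda target, obstacles:
             f"path to {target} avoiding {len(obstacles)} placed beams",
         execute=print)
```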
Longer lasting, more individual construction
Unlike traditional timber frame construction, Spatial Timber Assemblies can manage without reinforcement plates because the required rigidity and load-bearing result from the geometric structure. Not only does this save material; it also opens up new creative possibilities. A total of six spatial, geometrically unique timber modules will be prefabricated in this way for the first time. Lorries will then transport them to the DFAB HOUSE construction site at the NEST in Dübendorf, where they will be joined to build a two-storey residential unit with more than 100 m² of floor space. The complex geometry of the timber construction will remain visible behind a transparent membrane façade.
Integrated digital architecture
The robots use information from a computer-aided design model to cut and arrange the timber beams. This method was specially developed during the project and uses various input parameters to create a geometry consisting of 487 timber beams in total.
The fact that Spatial Timber Assemblies is being used for digital fabrication and also in design and planning offers a major advantage according to Matthias Kohler, Professor of Architecture and Digital Fabrication at ETH Zurich and the man spearheading the DFAB HOUSE project: “If any change is made to the project overall, the computer model can be constantly adjusted to meet the new requirements. This kind of integrated digital architecture is closing the gap between design, planning and execution.”
More Information in ETH Zurich Press Release
Detailed information about the building process, quotes as well as image and video material can be found in the extended press release by ETH Zurich.
Projects that will shape the future of robotics win the euRobotics Awards 2018
The European Robotics Forum 2018 (ERF2018) in Tampere brought together over 900 attendees from robotics academia and industry. To bridge the two, euRobotics hosted the Georges Giralt PhD Award 2017 & 2018 and the TechTransfer Award 2018, during a Gala Dinner event on 14 March, in Tampere, Finland.
The aim of the euRobotics Technology Transfer Award (now in its 15th year) is to showcase the impact of robotics research and to raise the profile of technology transfer between science and industry. Outstanding innovations in robot technology and automation that result from cooperative efforts between research and industry are eligible for the prize.
The First prize went to Germany’s Roboception for rc_visard – 3D perception & manipulation for robots made easy, with a team composed of Michael Suppa and Heiko Hirschmueller from Roboception GmbH, and Alin Albu-Schaeffer from the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR).
“This award is a recognition of our institute’s continued efforts of supporting the go-to-market of technologies developed at our institute or – as in this case – derived thereof,” said Prof. Albu-Schaeffer. Dr. Suppa added that “since spinning Roboception off the DLR in 2015, a significant amount of thought, hard work and – first and foremost – unfailing commitment of our team have gone into bringing this technology from a research state to a market-ready product. And we are very proud to see these efforts recognized by the jury, and rewarded with this prestigious award.” The rc_visard is already in operational use in a number of customer projects across a variety of robotic domains. Prof. Albu-Schaeffer, an enthused rc_visard user at his DLR Institute himself, is convinced that “this product is one that will shape the future of robotics, thanks to its unique versatility.”
Watch an interview about rc_visard filmed at Hannover Messe
The Second prize also went to Germany, for the project Mobile Agricultural Robot Swarms (MARS), created by a team made up of Timo Blender and Christian Schlegel from Hochschule Ulm, and Benno Pichlmaier from AGCO GmbH. The MARS experiment aims at the development of small and streamlined mobile agricultural robot units to fuel a paradigm shift in farming practices.
The Third prize went to Smart Robots from Italy, a team made up of Paolo Rocco and Andrea Maria Zanchettin from Politecnico di Milano, and Roberto Rossi from Smart Robots s.r.l., Italy. Smart Robots provides advanced perception and intelligent capabilities to robots, enabling new disruptive forms of collaboration with human beings.
The Finalist was the In situ Fabricator (IF), an autonomous construction robot from Switzerland, with a team made up of Kathrin Dörfler (Gramazio Kohler Research, ETH Zurich, NCCR Digital Fabrication) and Markus Giftthaler and Timothy Sandy (Agile & Dexterous Robotics Lab, ETH Zurich, NCCR Digital Fabrication).
The members of the jury of the TechTransfer Award were: Susanne Bieller (Eunited Robotics), Rainer Bischoff (KUKA), Georg von Wichert (Siemens), Herman Bruyninckx (KU Leuven) and Martin Haegele (Fraunhofer IPA). The euRobotics TechTransfer Award 2018 is funded by the EU’s Horizon 2020 Programme.
The Georges Giralt PhD Award showcased the best PhD theses defended during 2017 and 2018 in European universities from all areas of robotics
The 2017 edition saw 41 submissions from 11 countries. The jury composed of 26 academics from 16 European countries awarded:
Winner: Johannes Englsberger, Technical University of Munich, Germany – Combining reduced dynamics models and whole-body control for agile humanoid locomotion
Finalists:
- Christian Forster, University of Zurich, Switzerland – Visual Inertial Odometry and Active Dense Reconstruction for Mobile Robots
- Stefan Groothuis, University of Twente, the Netherlands – On the Modeling, Design, and Control of Compliant Robotic Manipulators
- Meng Guo, KTH Royal Institute of Technology, Sweden – Hybrid Control of Multi-robot Systems under Complex Temporal Tasks
- Nanda van der Stap, University of Twente, the Netherlands – Image-based endoscope navigation and clinical applications
The 2018 edition saw 27 submissions from 11 countries. The jury composed of 26 academics from 16 European countries awarded:
Winners:
- Frank Bonnet, École polytechnique fédérale de Lausanne (EPFL), Switzerland – Shoaling with fish: using miniature robotic agents to close the interaction loop with groups of zebrafish Danio rerio
- Daniel Leidner, University of Bremen, Germany – Cognitive Reasoning for Compliant Robot Manipulation
Finalists:
- Adrià Colomé Figueras, Universitat Politècnica de Catalunya, Spain – Bimanual Robot Skills: MP Encoding and Dimensionality Reduction
- Đula Nađ, University of Zagreb, Croatia – Guidance and control of autonomous underwater agents with acoustically aided navigation
- Angel Santamaria-Navarro, Universitat Politècnica de Catalunya, Spain – Visual Guidance of Unmanned Aerial Manipulators
The ceremony also saw the handing out of the European Robotics League (ERL) Awards: ERL Industrial Robots, ERL Service Robots and ERL Emergency Robots. Points are awarded at local and major tournaments rather than at a single central event. The league has three main objectives: strengthening the European robotics industry, pushing innovative autonomous systems for emergency response, and addressing the societal challenges of ageing populations in Europe. The European Robotics League is funded by the EU’s Horizon 2020 Programme.
Read the ERL Awards press release: European Robotics League winners revealed in Tampere, Finland (Season 2017/18)