
Security for multirobot systems

Researchers including MIT professor Daniela Rus (left) and research scientist Stephanie Gil (right) have developed a technique for preventing malicious hackers from commandeering robot teams’ communication networks. To verify the theoretical predictions, the researchers implemented their system using a battery of distributed Wi-Fi transmitters and an autonomous helicopter. Image: M. Scott Brauer.

Distributed planning, communication, and control algorithms for autonomous robots make up a major area of research in computer science. But in the literature on multirobot systems, security has gotten relatively short shrift.

In the latest issue of the journal Autonomous Robots, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and their colleagues present a new technique for preventing malicious hackers from commandeering robot teams’ communication networks. The technique could provide an added layer of security in systems that encrypt communications, or an alternative in circumstances in which encryption is impractical.

“The robotics community has focused on making multirobot systems autonomous and increasingly more capable by developing the science of autonomy. In some sense we have not done enough about systems-level issues like cybersecurity and privacy,” says Daniela Rus, an Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper.

“But when we deploy multirobot systems in real applications, we expose them to all the issues that current computer systems are exposed to,” she adds. “If you take over a computer system, you can make it release private data — and you can do a lot of other bad things. A cybersecurity attack on a robot has all the perils of attacks on computer systems, plus the robot could be controlled to take potentially damaging action in the physical world. So in some sense there is even more urgency that we think about this problem.”

Identity theft

Most planning algorithms in multirobot systems rely on some kind of voting procedure to determine a course of action. Each robot makes a recommendation based on its own limited, local observations, and the recommendations are aggregated to yield a final decision.

A natural way for a hacker to infiltrate a multirobot system would be to impersonate a large number of robots on the network and cast enough spurious votes to tip the collective decision, a technique called “spoofing.” The researchers’ new system analyzes the distinctive ways in which robots’ wireless transmissions interact with the environment, to assign each of them its own radio “fingerprint.” If the system identifies multiple votes as coming from the same transmitter, it can discount them as probably fraudulent.

“There are two ways to think of it,” says Stephanie Gil, a research scientist in Rus’ Distributed Robotics Lab and a co-author on the new paper. “In some cases cryptography is too difficult to implement in a decentralized form. Perhaps you just don’t have that central key authority that you can secure, and you have agents continually entering or exiting the network, so that a key-passing scheme becomes much more challenging to implement. In that case, we can still provide protection.

“And in case you can implement a cryptographic scheme, then if one of the agents with the key gets compromised, we can still provide protection by mitigating and even quantifying the maximum amount of damage that can be done by the adversary.”

Hold your ground

In their paper, the researchers consider a problem known as “coverage,” in which robots position themselves to distribute some service across a geographic area — communication links, monitoring, or the like. In this case, each robot’s “vote” is simply its report of its position, which the other robots use to determine their own.
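The coverage loop can be illustrated with a standard Lloyd-style controller, a common choice for this problem (the paper's exact algorithm may differ): each robot repeatedly moves to the centroid of the part of the region nearest to it, computed from the positions all robots report.

```python
import numpy as np

def coverage_step(positions, region_points):
    """One Lloyd-style coverage iteration: each robot moves to the
    centroid of the region points nearest to it (its Voronoi cell)."""
    positions = np.asarray(positions, dtype=float)
    # Distance from every region point to every robot: (points, robots) matrix
    dists = np.linalg.norm(
        region_points[:, None, :] - positions[None, :, :], axis=2)
    owner = dists.argmin(axis=1)           # nearest robot for each point
    new_positions = positions.copy()
    for i in range(len(positions)):
        cell = region_points[owner == i]
        if len(cell):                      # stay put if the cell is empty
            new_positions[i] = cell.mean(axis=0)
    return new_positions

# Spread 3 robots over the unit square, discretized as a 50x50 grid
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
robots = np.array([[0.10, 0.10], [0.20, 0.10], [0.15, 0.90]])
for _ in range(30):
    robots = coverage_step(robots, grid)
```

Each robot's update depends entirely on the reported positions, which is why spoofed position "votes" can drag the whole configuration away from good coverage.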

The paper includes a theoretical analysis that compares the results of a common coverage algorithm under normal circumstances and the results produced when the new system is actively thwarting a spoofing attack. Even when 75 percent of the robots in the system have been infiltrated by such an attack, the robots’ positions are within 3 centimeters of what they should be. To verify the theoretical predictions, the researchers also implemented their system using a battery of distributed Wi-Fi transmitters and an autonomous helicopter.

“This generalizes naturally to other types of algorithms beyond coverage,” Rus says.

The new system grew out of an earlier project involving Rus, Gil, Dina Katabi — who is the other Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT — and Swarun Kumar, who earned master’s and doctoral degrees at MIT before moving to Carnegie Mellon University. That project sought to use Wi-Fi signals to determine transmitters’ locations and to repair ad hoc communication networks. On the new paper, the same quartet of researchers is joined by MIT Lincoln Laboratory’s Mark Mazumder.

Typically, radio-based location determination requires an array of receiving antennas. A radio signal traveling through the air reaches each of the antennas at a slightly different time, a difference that shows up in the phase of the received signals, or the alignment of the crests and troughs of their electromagnetic waves. From this phase information, it’s possible to determine the direction from which the signal arrived.
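For the simplest two-antenna case, the relationship is Δφ = 2πd·sin(θ)/λ, so the arrival angle θ can be recovered from the measured phase difference. A minimal sketch, with an illustrative 2.4 GHz carrier and half-wavelength spacing (real systems use more antenna measurements and must contend with multipath):

```python
import numpy as np

# Illustrative parameters: 2.4 GHz Wi-Fi, half-wavelength antenna spacing
c = 3e8                      # speed of light, m/s
f = 2.4e9                    # carrier frequency, Hz
wavelength = c / f
d = wavelength / 2           # spacing <= lambda/2 avoids phase ambiguity

def angle_of_arrival(phase_diff):
    """Arrival angle (radians from broadside) of a plane wave, given the
    phase difference between two antennas: delta_phi = 2*pi*d*sin(theta)/lambda."""
    return np.arcsin(phase_diff * wavelength / (2 * np.pi * d))

# A wave arriving 30 degrees off broadside produces this phase difference...
theta_true = np.deg2rad(30)
phase = 2 * np.pi * d * np.sin(theta_true) / wavelength
# ...from which the direction can be recovered
theta_est = angle_of_arrival(phase)
```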

Space vs. time

A bank of antennas, however, is too bulky for an autonomous helicopter to ferry around. The MIT researchers found a way to make accurate location measurements using only two antennas, spaced about 8 inches apart. Those antennas must move through space in order to simulate measurements from multiple antennas. That’s a requirement that autonomous robots meet easily. In the experiments reported in the new paper, for instance, the autonomous helicopter hovered in place and rotated around its axis in order to make its measurements.

When a Wi-Fi transmitter broadcasts a signal, some of it travels in a direct path toward the receiver, but much of it bounces off of obstacles in the environment, arriving at the receiver from different directions. For location determination, that’s a problem, but for radio fingerprinting, it’s an advantage: The different energies of signals arriving from different directions give each transmitter a distinctive profile.

There’s still some room for error in the receiver’s measurements, however, so the researchers’ new system doesn’t completely ignore probably fraudulent transmissions. Instead, it discounts them in proportion to its certainty that they have the same source. The new paper’s theoretical analysis shows that, for a range of reasonable assumptions about measurement ambiguities, the system will thwart spoofing attacks without unduly punishing valid transmissions that happen to have similar fingerprints.
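In spirit, the weighting can be sketched as follows. Each claimed client's fingerprint is treated as a feature vector, and each vote's weight is divided by the number of near-identical fingerprints, so N spoofed votes from one transmitter collectively count as roughly one. The cosine-similarity test and the threshold below are illustrative stand-ins for the paper's actual confidence metric:

```python
import numpy as np

def vote_weights(fingerprints, match_threshold=0.99):
    """Down-weight votes whose radio fingerprints look alike.

    Each vote's weight is 1 / (number of fingerprints it closely matches,
    itself included), so a cluster of spoofed votes from one transmitter
    carries about as much total weight as a single vote."""
    F = np.asarray(fingerprints, dtype=float)
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    sim = F @ F.T                                   # cosine similarity
    matches = (sim >= match_threshold).sum(axis=1)  # includes self-match
    return 1.0 / matches

# Three genuine clients plus three spoofed votes sharing one transmitter
angles = np.deg2rad([0, 45, 90, 30, 30, 30])
prints = np.column_stack([np.cos(angles), np.sin(angles)])
weights = vote_weights(prints)
# genuine clients keep full weight; the spoofed trio shares about one vote
```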

“The work has important implications, as many systems of this type are on the horizon — networked autonomous driving cars, Amazon delivery drones, et cetera,” says David Hsu, a professor of computer science at the National University of Singapore. “Security would be a major issue for such systems, even more so than today’s networked computers. This solution is creative and departs completely from traditional defense mechanisms.”


The Drone Center’s Weekly Roundup: 3/13/17

Norway has established a test site at Trondheim Fjord for unmanned and autonomous vessels like these concept container ships of the future. Credit: Kongsberg Seatex

March 6, 2017 – March 12, 2017


Germany reportedly intends to acquire the Northrop Grumman MQ-4C Triton high-altitude surveillance drone, according to a story in Sueddeutsche Zeitung. In 2013, Germany cancelled a similar program to acquire Northrop Grumman’s RQ-4 Global Hawk, a surveillance drone on which the newer Triton is based, due to cost overruns. The Triton is a large, long-endurance system that was originally developed for maritime surveillance by the U.S. Navy. (Reuters)

The U.S. Army released a report outlining its strategy for obtaining and using unmanned ground vehicles. The Robotics and Autonomous Systems strategy outlines short, medium, and long-term goals for the service’s ground robot programs. The Army expects a range of advanced unmanned combat vehicles to be fielded in the 2020 to 2030 timeframe. (IHS Jane’s 360)

The U.S. Air Force announced that there are officially more jobs available for MQ-1 Predator and MQ-9 Reaper pilots than for any manned aircraft pilot position. Following a number of surges in drone operations, the service had previously struggled to recruit and retain drone pilots. The Air Force is on track to have more than 1,000 Predator and Reaper pilots operating its fleet.

Commentary, Analysis, and Art

At Shephard Media, Grant Turnbull writes that armed unmanned ground vehicles are continuing to proliferate.

At Wired, Paul Sarconi looks at how the introduction of cheap, consumer-oriented underwater drones could affect different industries.

At Recode, April Glaser looks at how a key part of the U.S. government’s drone regulations appears to be based on a computer simulation from 1968.

At FlightGlobal, Dominic Perry writes that France’s Dassault is not concerned that the U.K. decision to leave the E.U. will affect a plan to develop a combat drone with BAE Systems.

At Drone360, Kara Murphy profiles six women who are contributing to and influencing the world of drones.

At DroningON, Ash argues that the SelFly selfie drone KickStarter project may go the way of the failed Zano drone.

At the Los Angeles Times, Bryce Alderton looks at how cities in California are addressing the influx of drones with new regulations.

At CBS News, Larry Light looks at how Bill Gates has reignited a debate over taxes on companies that use robots.

In an interview with the Wall Street Journal, Andrew Ng and Neil Jacobstein argue that artificial intelligence will bring about significant changes to commerce and society in the next 10 to 15 years.

In testimony before the House Armed Services Committee’s subcommittee on seapower, panelists urged the U.S. Navy to develop and field unmanned boats and railguns. (USNI News)

The Economist looks at how aluminium batteries could provide underwater drones with increased range and endurance.

At Buzzfeed, Mike Giglio examines the different ways that ISIS uses drones to gain an advantage over Iraqi troops in Mosul.

Richard Sisk looks at how a U.S.-made vehicle-mounted signals “jammer” is helping Iraqi forces prevent ISIS drone attacks in Mosul.

In a Drone Radio Show podcast, Steven Flynn discusses why prioritizing drone operators who comply with federal regulations is important for the drone industry.

At ABC News, Andrew Greene examines how a push by the Australian military to acquire armed drones has reignited a debate over targeted killings.

At Smithsonian Air & Space, Tim Wright profiles the NASA High Altitude Shuttle System, a glider drone that is being used to test communications equipment for future space vehicles.

At NPR Marketplace, Douglas Starr discusses the urgency surrounding the push to develop counter-drone systems.

Know Your Drone

Researchers at Virginia Tech are flying drones into crash-test dummies to evaluate the potential harm that a drone could cause if it hits a human. (Bloomberg)

Meanwhile, researchers at École Polytechnique Fédérale de Lausanne are developing flexible multi-rotor drones that absorb the impact of a collision without breaking. (Gizmodo)

The China Academy of Aerospace Aerodynamics is readying its Caihong solar-powered long-endurance drone for its maiden flight, which is scheduled for mid-year. (Eco-Business)

Meanwhile, the China Aerospace Science and Industry Corporation has announced that it is developing drones with stealth capabilities. (Voice of America)

During an exercise, defense firm Rafael successfully launched a missile from an Israeli Navy unmanned boat. (Times of Israel)

Technology firms Thales and Unifly unveiled the ECOsystem UTM, an air traffic management system for drones. (Unmanned Systems Technology)

Norway’s government has approved a plan to establish a large test site for unmanned maritime vehicles at the Trondheim Fjord.

Automaker Land Rover unveiled a search and rescue SUV equipped with a roof-mounted drone. (TechCrunch)

U.S. chipmaker NVIDIA has launched the Jetson TX2, an artificial intelligence platform that can be used in drones and robots. (Engadget)

Recent satellite images of Russia’s Gromov Flight Research Institute appear to show the country’s new Orion, a medium-altitude long-endurance military drone. (iHLS)

Technology firms Aveillant and DSNA Services are partnering to develop a counter-drone system.

Aerospace firm Airbus has told reporters that it is serious about producing its Pop.Up passenger drone concept vehicle. (Wired)

Drones at Work

The Peruvian National Police are looking to deploy drones for counter-narcotics operations. (Business Insider)

The U.S. Air Force used a multi-rotor drone to conduct a maintenance inspection of a C-17 cargo plane. (U.S. Air Force)

India is reportedly looking to deploy U.S. drones for surveillance operations along the Line of Control on the border with Pakistan. (Times of India)

The Fire Department of New York used its tethered multi-rotor drone for the first time during an apartment fire in the Bronx. (Crain’s New York)

The Michigan State Police Bomb Squad used an unmanned ground vehicle to inspect the interior of two homes that were damaged by a large sinkhole. (WXYZ)

A video posted to YouTube appears to show a woman in Washington State firing a gun at a drone that was flying over her property. (Huffington Post)

Meanwhile, a bill being debated in the Oklahoma State Legislature would remove civil liability for anybody who shoots a drone down over their private property. (Ars Technica)

In a promotional video, the company that makes Oreos used drones to dunk cookies into cups of milk. (YouTube)

The NYC Drone Film Festival will hold its third annual event this week. (YouTube)

An Arizona man who leads an anti-immigration vigilante group is using a drone to patrol the U.S. border with Mexico in search of undocumented crossings. (Voice of America)

A man who attempted to use a drone to smuggle drugs into a Scottish prison has been sentenced to five years in prison. (BBC)

Industry Intel

The Turkish military has taken delivery of six Bayraktar TB-2 military drones, two of which are armed, for air campaigns against ISIL and Kurdish forces. (Defense News)

The U.S. Navy awarded Boeing Insitu a contract for RQ-21A Blackjack and ScanEagle drones. (FBO)

The U.S. Army awarded Riegl a $30,021 contract for LiDAR accessories for the Riegl RiCopter drone. (FBO)

General Atomics Aeronautical Systems awarded Hughes Network Systems a contract for satellite communications for the U.K.’s Predator B drones. (Space News)

Schiebel awarded CarteNav Solutions a contract for its AIMS-ISR software for the S-100 Camcopter unmanned helicopters destined for the Royal Australian Navy. (Press Release)

Defence Research and Development Canada awarded Ontario Drive & Gear a $1 million contract for trials of the Atlas J8 unmanned ground vehicle. (Canadian Manufacturing)

Kratos Defense and Security Solutions reported a $5.6 million contract for aerial target drones for the U.S. government. (Shephard Media)

Deveron UAS will provide Thompsons, a subsidiary of Lansing Trade Group and The Andersons, with drone data for agricultural production through 2018. (Press Release)

Precision Vectors Aerial selected the Silent Falcon UAS for its beyond visual line-of-sight operations in Canada. (Shephard Media)

Rolls-Royce won a grant from Tekes, a Finnish government research funding agency, to continue developing remote and autonomous shipping technologies. (Shephard Media)

Israeli drone manufacturer BlueBird is submitting an updated MicroB UAV system for the Indian army small UAV competition. (FlightGlobal)

A Romanian court has suspended a planned acquisition of Aeronautics Defense Systems Orbiter 4 drones for the Romanian army. (FlightGlobal)

Deere & Co.—a.k.a. John Deere—announced that it will partner with Kespry, a drone startup, to market drones for the construction and forestry industries. (TechCrunch)

For updates, news, and commentary, follow us on Twitter. The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world. You can subscribe to the Roundup here.

Beyond 5G: NSF awards $6.1 million to accelerate advanced wireless research

The National Science Foundation (NSF) announced a $6.1 million, five-year award to accelerate fundamental research on wireless communication and networking technologies through the foundation’s Platforms for Advanced Wireless Research (PAWR) program.

Through the PAWR Project Office (PPO), award recipients US Ignite, Inc. and Northeastern University will collaborate with NSF and industry partners to establish and oversee multiple city-scale testing platforms across the United States. The PPO will manage nearly $100 million in public and private investments over the next seven years.

“NSF is pleased to have the combined expertise from US Ignite, Inc. and Northeastern University leading the project office for our PAWR program,” said Jim Kurose, NSF assistant director for Computer and Information Science and Engineering. “The planned research platforms will provide an unprecedented opportunity to enable research in faster, smarter, more responsive, and more robust wireless communication, and move experimental research beyond the lab — with profound implications for science and society.”

Over the last decade, the use of wireless, internet-connected devices in the United States has nearly doubled. As the momentum of this exponential growth continues, the need for increased capacity to accommodate the corresponding internet traffic also grows. This surge in devices, including smartphones, connected tablets and wearable technology, places an unprecedented burden on conventional 4G LTE and public Wi-Fi networks, which may not be able to keep pace with the growing demand.

NSF established the PAWR program to foster use-inspired, fundamental research and development that will move beyond current 4G LTE and Wi-Fi capabilities and enable future advanced wireless networks. Through experimental research platforms that are at the scale of small cities and communities and designed by the U.S. academic and industry wireless research community, PAWR will explore robust new wireless devices, communication techniques, networks, systems and services that will revolutionize the nation’s wireless systems. These platforms aim to support fundamental research that will enhance broadband connectivity and sustain U.S. leadership and economic competitiveness in the telecommunications sector for many years to come.

“Leading the PAWR Project Office is a key component of US Ignite’s mission to help build the networking foundation for smart communities,” said William Wallace, executive director of US Ignite, Inc., a public-private partnership that aims to support ultra-high-speed, next-generation applications for public benefit. “This effort will help develop the advanced wireless networks needed to enable smart and connected communities to transform city services.”

Establishing the PPO with this initial award is the first step in launching a long-term, public-private partnership to support PAWR. Over the next seven years, PAWR will take shape through two multi-stage phases:

  • Design and Development. The PPO will assume responsibility for soliciting and vetting proposals to identify the platforms for advanced wireless research and work closely with sub-awardee organizations to plan the design, development, deployment and initial operations of each platform.
  • Deployment and Initial Operations. The PPO will establish and manage each platform and document best practices as it progresses through the lifecycle.

“We are delighted that our team of wireless networking researchers has been selected to take the lead of the PAWR Project Office in partnership with US Ignite, Inc.,” said Dr. Nadine Aubry, dean of the college of engineering and university distinguished professor at Northeastern University. “I believe that PAWR, by bringing together academia, industry, government and communities, has the potential to make a transformative impact through advances spanning fundamental research and field platforms in actual cities.”

The PPO will work closely with NSF, industry partners and the wireless research community in all aspects of PAWR planning, implementation and management. Over the next seven years, NSF anticipates investing $50 million in PAWR, combined with approximately $50 million in cash and in-kind contributions from over 25 companies and industry associations. The PPO will disperse these investments to support the selected platforms.

Additional information can be found on the PPO webpage.

This announcement will also be highlighted this week during the panel discussion, “Wireless Network Innovation: Smart City Foundation,” at the South by Southwest conference in Austin, Texas.

Reactions from experts: Robotics and tech to receive funding boost from UK government

Yesterday, the UK government announced their budget plans to invest in robotics, artificial intelligence, driverless cars, and faster broadband. The spending commitments include:

  • £16m to create a 5G hub to trial the forthcoming mobile data technology. In particular, the government wants better mobile network coverage along the country’s roads and railway lines
  • £200m to support local “full-fibre” broadband network projects that are designed to bring in further private sector investment
  • £270m towards disruptive technologies to put the UK “at the forefront”, including cutting-edge artificial intelligence and robotics systems that will operate in extreme and hazardous environments, such as off-shore energy, nuclear energy, space and deep mining; batteries for the next generation of electric vehicles; and biotech
  • £300m to further develop the UK’s research talent, including through creating an additional 1,000 PhD places

The entire budget can be reviewed here.

Several experts in the robotics community agree that this is a step in the right direction; however, more needs to happen if the UK is to remain competitive in the robotics sector:

Prof Paul Newman, Founder, Oxbotica:

“The UK understand the very real positive impact that RAS [robotics & autonomous systems] will have on our society from now, of all time. It continues to see the big picture and today’s announcement by the Chancellor is a clear indication of that. We can have better roads, cleaner cities, healthier oceans and bodies, safer skies, deeper mines, better jobs and more opportunity. That’s what machines are for.”

Dr Graeme Smith, CEO, Oxbotica:

“We are at a real inflection point in the development of autonomous technology. The UK has a number of nascent world class companies in the area of self-driving vehicles, which have a huge potential to change the world, whilst creating jobs and producing exportable UK goods and services. We have a head start and now we need to take advantage of it.” [from FT]

Dominic Keen, Founder of Britbots:

“Some of the great robotics companies of the future are being launched by British entrepreneurs, and the support announced in today’s budget will strengthen their impact and global competitiveness. We’re currently seeing strong appetite from private investors to back locally-grown robotics businesses, and this money will help bring even more interest in this space.”

Dr Rob Buckingham, Director of the UK Atomic Energy Authority’s RACE robotics centre:

“This is welcome news for the many research organisations developing robotics applications. As a leading UK robotics research group specialising in extreme and challenging environments, we welcome the allocation of significant funding in this field as part of the Government’s evolving Industrial Strategy. RACE and the rest of the robotics R&D sector are looking forward to working with industry to fully utilise this funding.”

Dr Sabine Hauert, University of Bristol:

“Robotics and AI is set to be a driving force in increasing productivity, but also in solving societal and environmental challenges. It’s opening new frontiers in off-shore and nuclear energy, space and deep mining. Investment from government will be key in helping the UK stay at the forefront of this field.” [from BBC]

Prof Noel Sharkey, University of Sheffield:

“We lost our best machine learning group to Amazon just recently. The money means there will be more resources for universities, which may help them retain their staff. But it’s not nearly enough for all of the disruptive technologies being developed in the UK. The government says it wants this to be the leading robotics country in the world, but Google and others are spending far more, so it’s ultimately chicken feed by comparison.” [from BBC]

Prof Alan Winfield, UWE Bristol:

“I’m pleased by the additional funding, and, in fact, my group is a partner in a new £4.6M EPSRC grant to develop robots for nuclear decommissioning announced last week.

But having just returned from Tokyo (from AI in Asia: AI for Social Good), I’m well aware that other countries are investing much more heavily than the UK. China was for instance described as an emerging powerhouse of AI. A number of colleagues at that meeting also made the same point as Noel, that universities are haemorrhaging star AI/robotics academics to multi-national companies with very deep pockets.”

Michael Szollosy, Research Fellow at Sheffield Centre for Robotics, Dept of Psychology:

“I, like many others, was pleased to hear more money going into robotics and AI research, but I was disappointed – though completely unsurprised – to see nothing about how to restructure the economy to deal with the consequences of increasing research into and use of robots and AI. Hammond’s blunder on the relationship of productivity to wages – and it can’t be seen as anything other than a blunder – means that he doesn’t even seem to appreciate that there is a problem.

The truth is that increased automation means fewer jobs and lower wages and this needs to be addressed with some concrete measures. There will be benefits to society with increased automation, but we need to start thinking now (and taking action now) to ensure that those benefits aren’t solely economic gain for the already-wealthy. The ‘robot dividend’ needs to be shared across society, as it can have far-reaching consequences beyond economics: improving our quality of life, our standard of living, education, health and accessibility.”

Frank Tobe, Editor at The Robot Report:

“America has the American Manufacturing Initiative which, in 2015, was expanded to establish Fraunhofer-like research facilities around the US (on university campuses) that focus on particular aspects of the science of manufacturing.

Robotics were given $50 million of the $500 million for the initiative and one of the research facilities was to focus on robotics. Under the initiative, efforts from the SBIR, NSF, NASA and DoD/DARPA were to be coordinated in their disbursement of fundings for science in robotics. None of these fundings comes anywhere close to the coordinated funding programs and P-P-Ps found in the EU, Korea and Japan, nor the top-down incentivized directives of China’s 5-year plans. Essentially American robotic funding is (and has been) predominantly entrepreneurial with token support from the government.

In the new Trump Administration, there is no indication of any direction nor continuation (funding) of what little existing programs we have. At a NY Times editorial board sit-down with Trump after his election, he was quoted as saying that “Robotics is becoming very big and we’re going to do that. We’re going to have more factories. We can’t lose 70,000 factories. Just can’t do it. We’re going to start making things.” Thus far there is no followup to those statements nor has Trump hired replacements for the top executives at the Office of Science and Technology Policy, all of which are presently vacant.”


Supporting Women in Robotics on International Women’s Day… and beyond

International Women’s Day is raising discussion about the lack of diversity and role models in STEM, and the potential negative outcomes of bias and stereotyping in robotics and AI. Let’s balance the words with positive actions. Here’s what we can all do to support women in robotics and AI, and thus improve diversity, spur innovation and reduce skills shortages in the field.

Join – a network of women working in robotics (or who aspire to work in robotics). We are a global discussion group supporting local events that bring women together for peer networking. We recognize that lack of support and mentorship in the workplace holds women back, particularly if there is only one woman in an organization/company.

Although the main group is only for women, we are going to start something for male ‘Allies’ or ‘Champions’. So men, you can join women in robotics too! Women need champions and while it would be ideal to have an equal number of women in leadership roles, until then, companies can improve their hiring and retention by having visible and vocal male allies. We all need mentors as our careers progress.

Women also need visibility and high-profile projects for their careers to progress on par. One way of improving that is to showcase the achievements of women in robotics. Read and share all four years’ worth of our annual “25 Women in Robotics you need to know about” lists – that’s more than 100 women already, because we have some groups in there. (There have always been a lot of women on the core team here, so we love showing our support.) Our next edition will come out on October 10, 2017 to celebrate Ada Lovelace Day.

Change starts at the top of an organization. It’s very hard to hire women if you don’t have any women, or if they can’t see pathways for advancement in your organization. However, there are many things you can do to improve your hiring practices. Some are surprisingly simple, yet effective. I’ve collected a list and posted it at Silicon Valley Robotics – How to hire women.

And you can invest in women entrepreneurs. Studies show that you get a higher rate of return, and a higher likelihood of success, from investments in female founders. And yet, proportionately, investment is much lower. You don’t need to be a VC to invest in women either. is matching loans today, and $25 can empower an entrepreneur anywhere in the world. #InvestInHer

And our next Silicon Valley/ San Francisco Women in Robotics event will be on March 22 at SoftBank Robotics – we’d love to see you there – or in support!

Mining and nuclear decommissioning: Robots in dangerous and dirty areas

Cross section of underground tunnel showing miners at work with mining equipment.

Workers have long confronted dangerous and dirty jobs. They’ve had to dig to the bottom of mines, or put themselves in harm’s way to decommission ageing nuclear sites. It’s time to make these jobs safer and more efficient, and robots are just starting to provide the necessary tools.


Mining has become much safer, yet workers continue to die every year in accidents across Europe, highlighting the perils of this genuinely needed industry. Everyday products use minerals extracted from mining, and 30 million jobs in the EU depend on their supply. Robots are a way to modernise an industry that is constantly under pressure with the fall in prices of commodities and the lack of safe access to hard-to-reach resources. Making mining greener is also a key concern.

The vision is for people to move away from the rock face, and onto the surface. In an ideal world where mining 4.0 is the norm, a central control room will run all operations in the mine, which will become a zero-entry zone for workers. Robots will take care of safety-critical tasks such as drilling, extracting, crushing and transporting excavated material. The mine could operate continuously while experts on the surface are in charge of managing, monitoring, optimising and maintaining the systems – essentially making mining a high-tech job.

Smart Mine of the Future report.

A recent report from the IDC echoes this vision saying, “The future of mining is to create the capability to manage the mine as a system – through an integrated web of technologies such as virtualization, robotics, Internet of Things (IoT), sensors, connectivity, and mobility – to command, control, and respond.”

And companies are betting their money on it, “69% of mining companies globally are looking for remote operation and monitoring centres, 56% at new mine methods, 29% at robotics and 27% at unmanned drones.”

Europe is also investing heavily, with several large projects over the past 15 years. The European project I2Mine, which finished last year, focussed on technologies suitable for underground mining at depths greater than 1,500m. With €23 million invested, it was the biggest EU RTD project funded in the mining sector.

Project Manager, Dr Horst Hejny, said: “The overall objective of the project was the development of innovative technologies and methods for sustainable mining at greater depths.”

One result of the project was a set of sensors for material recognition and boundary layer detection and sorting, as well as a new cutting head which allows for continuous operation.

Full, and even remote, automation, however, is still a long way ahead. Like any robotic system, automated mining will have to deal with a plethora of real-world challenges. And navigating underground mines, or manipulating rock, are very far from ideal laboratory settings. As an intermediate step, researchers are looking to set up test sites where they can experiment with the technology outside of the lab and before deployment in safety critical areas. Juha Röning from the University of Oulu in Finland uses Oulu Zone, a race track that could prove helpful to test automated driving for the mining industry. His laboratory has previous experience in this area, having tested an automated dumper robot for excavated material. It’s an obvious application for a country that Juha says “invented mining”. There is more to it than autonomous driving, however, and his laboratory has been thinking about ways to improve the infrastructure around the deployment of mobile robots, including using advanced positioning systems to increase the precision of robot tracking and control.

Another test site, RACE, which stands for Remote Applications in Challenging Environments, was recently opened by the UK Atomic Energy Authority. The facility conducts R&D and commercial activities focused on developing robots and autonomous systems for challenging environments.

On their website, they claim to be challenging ‘challenging environments’ saying: “Challenging Environments exist in numerous sectors, nuclear, petrochemical, space exploration, construction and mining are examples. The technical hurdle is different for different physical environments and includes radiation, extreme temperature, limited access, vacuum and magnetic fields, but solutions will have common features. The commercial imperative is to enable safe and cost efficient operations.”

Rather than develop full turn-key solutions for mines, many European companies have been providing automation solutions for very specific tasks. Swedish company Sandvik, for example, demonstrated a fully automated LHD (Load, Haul, Dump machine) vehicle.

Also based in Sweden, Atlas Copco has an autonomous LHD system of their own called Scooptram.

Polish company KGHM, a leader in copper and silver production, has been deeply involved in many R&D projects across Europe. Its mines in Lubin and Polkowice-Sieroszowice have served as test sites for recent developments. KGHM and the Swedish mining companies Boliden and LKAB joined forces with several major global suppliers and academia to develop a common vision of future mining for 2011 to 2020.

The report discusses how to make deep mining of the future “safer, leaner and greener.” The short answer: “we need an innovative organisation that attracts talented young men and women to meet the grand challenges and opportunities of future mineral supply.” They add however that “by 2030 we will not yet have achieved invisible mining, zero waste, or the fully intelligent, automated mine without any human presence”. More time is needed.

Robotics technology also opens a new frontier in areas that can be mined beyond what is currently human-reachable. The new push is towards mining the deep sea, or space, in a responsible manner. UK-based company Soil Machine Dynamics Ltd recently developed three vehicles that operate at depths of up to 2,500m on seafloor massive sulphide (SMS) deposits for the company Nautilus Minerals. The subsea mining machines weigh up to 310t and have vessel-based power and control systems, pilot consoles, umbilical systems, and launch and recovery systems.

As for the space race, asteroids provide an untapped resource for metallic elements such as iron, nickel, and cobalt. Although space robots have been shown to navigate and drill in space, scaling up to meaningful extraction quantities will be a challenge. And it’s still unclear if the cost and complexity of space mining justify the means.

Nuclear Decommissioning

Like mining, nuclear decommissioning requires zero-entry operations. Across Europe, there are plans to close up to 80 civilian nuclear power reactors in the next ten years.

“The total cost of nuclear decommissioning in the UK alone is currently estimated at £60 billion. Analysis by the National Nuclear Laboratory indicates that 20% of the cost of complex decommissioning will be spent on RAS (Robotics and Autonomous Systems) technology.” – RAS UK Strategy.

Designing robots for the nuclear environment is especially challenging because the robots need to be robust, reliable and safe, while also withstanding a highly radioactive environment.

In 2012, one of the high-hazard plants at Sellafield, UK, used a custom-made remotely operated robot arm to reduce the risk posed by a 60-year-old First Generation Magnox Storage Pond. The arm had to separate and remove redundant pipework in a high-radiation area, then clean and seal a contaminated pond wall. The redundant pipework was isolated with special sealants before its remote removal; the arm then scabbled the pond wall and applied a specialist coating to seal the concrete.

Over 80,000 hours of testing in a separate test facility were needed before the team had confidence the robots would perform flawlessly on such a high-risk task.

The Sellafield site has since added a “Riser” (Remote Intelligence Survey Equipment for Radiation) quadcopter developed by Blue Bear Systems Research Ltd and Createc Ltd. It is equipped with a system that allows it to map the inside of a building and radiation levels.

Small underwater vehicles have also been deployed in the nuclear storage ponds. The robots build on existing technology developed by James Fisher Nuclear for inspection in the offshore oil and gas industry. They were initially sent in to image the environment, but are now used for basic manipulation tasks as well.

In Marcoule, France, Maestro, a tele-operated robot arm, is also being used to decommission a site. The robot can laser-cut 4m x 2m metal structures into smaller pieces. Humans could do this faster, but 30 minutes on the site would be lethal.

And with so many robot arms entering the field, KUKA has also developed a suite of robots specifically for nuclear decommissioning.

Given the high-risk nature of nuclear decommissioning, traditional robotic solutions seem to be favoured for now, as they are tested and understood. However, a new wave of innovative solutions is also making its way to the market.

Swiss startup Rovenso, for example, developed ROVéo, a robot whose unique four-wheel design allows it to climb over obstacles up to two-thirds its height. They aim to produce a larger-scale model equipped with a robotic arm for use in dismantling nuclear plants.

OC Robotics in the UK is also working closely with the nuclear industry to build robots with a longer reach than modern industrial robot arms. Their snake arms can be fitted with lasers or other tools, and can slither through nearly any structure.

Andy Graham, Technical Director at OC Robotics, said “Robots have the potential to improve everyone’s quality of life. Reducing the need for people to enter hazardous workplaces and confined spaces is central to what we do at OC Robotics, whether the application is in manufacturing industries, inspection and maintenance in the oil and gas sector, or decommissioning nuclear power stations. Users are becoming more and more aware of the potential for robots to enable their workers to work more comfortably and safely from outside these spaces”.

“The Lasersnake 2 project, led by OC Robotics and part-funded by the UK government, has developed and is currently testing a snake-arm robot equipped with a powerful laser capable of cutting effortlessly through 60mm-thick steel. The same snake-arm robot can be equipped with a gripper enabling it to lift 25kg at a reach of about 5m, and has also been demonstrated underwater in an environment similar to a nuclear storage pond. With this cutting capability and the ability to snake through small holes and around obstacles, the system enables “keyhole surgery” for nuclear decommissioning, leaving containment structures, shielding and cells intact while dismantling the processing equipment inside them”.

Beyond decommissioning, robots are also being used for new energy infrastructure including at ITER, the next generation fusion research device being built in the south of France that will achieve ‘burning plasma’, one of the required steps for fusion power. Remote handling is critical to ITER, says project partner RACE.

“Cutting and welding large-diameter stainless steel pipes is a fundamental process for remote maintenance of ITER.” RACE has been developing concepts to remotely replace the beam sources of the neutral beam heating system, whose high-energy ion beams are used to heat the plasma to 200 million °C.

From their website, “A monorail crane was designed with high lift in a compact space, with an innovative control system for high radiation environments. The beam line transporter operates along the full length of the beam line, like an industrial production line. It has a load capacity of many tonnes, haptic feedback and is fully remotely operated and remotely recoverable.”

“ITER provides some seriously challenging environments for robotics: high radiation dose; elevated temperatures; limited access; large, compact equipment and some very challenging inspection and maintenance procedures to implement fast and reliably, without failure.”

A cross-sectional view of the ITER tokamak.

These projects are just part of a worldwide effort to advance the safety of nuclear applications. Japan has also been working on its robot fleet, in response to the Fukushima disaster and the cleanup efforts still ahead. Their robots take different shapes and forms depending on their task, and can blast dry ice, inspect vents, cut pipes, and remove debris.

Competitions like the recent DARPA Robotics Challenge, or the European Robotics League (ERL) Emergency Challenge, have been driving the state of the art forward. ERL Emergency is an outdoor multi-domain robotic competition inspired by the 2011 Fukushima accident. The challenge requires teams of land, underwater and flying robots to work together to survey the scene, collect environmental data, and identify critical hazards.

“Robotics competitions are not just for testing a robot outside a laboratory, or engaging with an audience; they are events that get people together, inspire younger generations and facilitate cooperation and exchange of knowledge between multiple research groups. Robotics competitions provide a perfect platform for challenging, developing and showcasing robotics technologies,” said Marta Palau, ERL Emergency Project Manager.

Similar scenarios are also being explored by Juha Röning from the University of Oulu in Finland. He aims to use flying robots to map radiation levels after a nuclear accident thanks to support from the Nordic Nuclear Safety Research Agency. He says “in the future, flying robots could be used to map radiation levels, and then a second team of ground robots could be sent in for the cleanup”.

Overall, robots are helping workers avoid dirty and dangerous areas, while making the job more efficient, and potentially fulfilling. We are only at the initial stages, however, as many of these high-risk tasks require years of testing before new technologies are implemented.

Antenna separation: How close can you go?


When building a new robot, mechanical engineers always ask me: how close can different antennas be to one another? It is not uncommon to try squeezing five or more antennas onto a single robot (GPS, a second GPS for heading, RTK, joystick, e-stop, communications, etc.). So what is the proper response? The real answer is that it depends heavily on the environment. However, below are the rules of thumb for antenna separation that I have learned or that have been passed down to me. One disclaimer: since these are rules of thumb, they may not be 100% correct.

Here is the rule: the horizontal distance between antennas should be greater than 1/4 of the wavelength (the absolute minimum separation), but the antennas should not sit at exact multiples of the wavelength (avoid at least the first 3-4 multiples). If antennas for different frequencies are near each other, use the spacing distance of the lower-frequency antenna or, even better, try to satisfy the rule for both frequencies.

Device         Frequency   Wavelength   1/4 Wavelength
WiFi 802.11    5.8GHz      5.17cm       1.29cm
WiFi 802.11    2.4GHz      12.49cm      3.12cm
GPS*           1.227GHz    24.43cm      6.11cm
Radios         900MHz      33.3cm       8.33cm

Here is a nice wavelength calculator I just found to generate the table above.

* If you are using two GPS antennas to compute heading then this does not apply. These numbers are strictly for RF considerations.
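The table values follow directly from the wavelength formula λ = c/f. Here is a minimal Python sketch that reproduces them (the function names are my own, for illustration):

```python
# Quarter-wavelength spacing helper; frequencies match the table above.
C = 299_792_458  # speed of light, m/s

def wavelength_cm(freq_hz):
    """Wavelength in centimetres for a given frequency in Hz."""
    return C / freq_hz * 100

def min_separation_cm(freq_hz):
    """Rule-of-thumb minimum horizontal antenna spacing: 1/4 wavelength."""
    return wavelength_cm(freq_hz) / 4

for name, f in [("WiFi 5.8GHz", 5.8e9), ("WiFi 2.4GHz", 2.4e9),
                ("GPS 1.227GHz", 1.227e9), ("Radio 900MHz", 900e6)]:
    print(f"{name}: wavelength = {wavelength_cm(f):.2f} cm, "
          f"min spacing = {min_separation_cm(f):.2f} cm")
```

Running this prints the same wavelength and quarter-wavelength numbers as the table.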

So, for example, if you have a GPS antenna and a WiFi 2.4GHz antenna, you would want them separated by at least 6.11cm (more is better, within reason), the quarter wavelength of the lower-frequency GPS signal. And you should avoid putting them at exactly 24.43cm from each other, or at the first few multiples of either wavelength.
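The whole placement rule can be sketched as one small check. This is only an illustration of the rule of thumb above; the function name and the 0.5cm tolerance band around each multiple are my own assumptions:

```python
def spacing_ok(distance_cm, freq_hz, n_multiples=4, tol_cm=0.5):
    """True if the spacing exceeds a quarter wavelength and does not
    sit at (roughly) one of the first few exact multiples of the
    wavelength. The tolerance band is an assumption, not part of the rule."""
    lam = 299_792_458 / freq_hz * 100  # wavelength in cm
    if distance_cm < lam / 4:
        return False  # below the absolute minimum separation
    return all(abs(distance_cm - n * lam) >= tol_cm
               for n in range(1, n_multiples + 1))

print(spacing_ok(10.0, 2.4e9))   # 10cm vs WiFi 2.4GHz -> True
print(spacing_ok(12.49, 2.4e9))  # sits on one full wavelength -> False
```

For antennas at two different frequencies, you would call this once per frequency and keep a spacing that passes both checks.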

This rule seems to work with the low-power antennas that we typically use in most robotics applications. I am not sure how it would hold up with high-power transmitters; for higher-power transmitting antennas, you might want greater separation. The power drops off pretty quickly with distance (inversely proportional to the square of the distance).

I also try to separate receive and transmit antennas (as a precaution) to try and prevent interference problems.

An extension of the above rule is ground planes. Ground planes are conductive reference planes placed below antennas to reflect the waves, creating a predictable wave pattern; they can also help prevent multipath waves (ones bouncing off the ground, water, buildings, etc.). The further an antenna is from the ground (which can itself act as a ground plane), the more likely it is that adding a ground plane becomes necessary. In its simplest form, a ground plane is a piece of metal that extends out from the antenna's base at least 1/4 wavelength in each direction. Fancier ground planes might just be several metal prongs that stick out. A very common ground plane is the metal roof of a vehicle/robot.

Note: Do not confuse ground planes with RF grounds, signal grounds, DC grounds, etc…

Aerial roof markings on London police car. Source: Wikimedia Commons

A good example of building a ground plane is with a GPS antenna. It should be mounted in the center of a metal-roofed robot/car, or on the largest flat metal surface available. This will minimize multipath signals from the ground. If there is no flat metal surface to mount the antenna on, you can create a ground plane by putting a 12.22cm-diameter metal sheet directly below the antenna (about 1/2 the signal's wavelength, which gives 1/4 wavelength per side).
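The sizing works out the same way for any frequency: a quarter wavelength out from the base in every direction means a disc half a wavelength across. A quick sketch (function name is mine, for illustration):

```python
def ground_plane_diameter_cm(freq_hz):
    """Minimal ground plane per the rule above: 1/4 wavelength from the
    antenna base in each direction, i.e. a disc half a wavelength wide."""
    lam_cm = 299_792_458 / freq_hz * 100  # wavelength in cm
    return lam_cm / 2

print(f"{ground_plane_diameter_cm(1.227e9):.2f} cm")  # GPS: ~12.22 cm
```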

Note: Some fancy antennas do not require that you add a ground plane. For example, the Novatel GPS antennas do NOT require you to add a ground plane, as described above.

Other things to watch out for are shadowing between the antennas and sensors, and the Fresnel zone. For more information on the Fresnel zone, and why antenna links are not actually just line of sight, click here.

Beyond Asimov: how to plan for ethical robots

Ethical robots in our future? Source: Meddygarnet/Flickr/CC

As robots become integrated into society more widely, we need to be sure they’ll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Today, more than 70 years after Asimov’s first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov’s Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?

Asimov knew they weren’t perfect

Rowena Morrill/GFDL, CC BY-SA

Asimov’s “I, Robot” stories explore a number of unintended consequences and downright failures of the Three Laws. In these early stories, the Three Laws are treated as forces with varying strengths, which can have unintended equilibrium behaviors, as in the stories “Runaround” and “Catch that Rabbit,” requiring human ingenuity to resolve. In the story “Liar!,” a telepathic robot, motivated by the First Law, tells humans what they want to hear, failing to foresee the greater harm that will result when the truth comes out. The robopsychologist Susan Calvin forces it to confront this dilemma, destroying its positronic brain.

In “Escape!,” Susan Calvin depresses the strength of the First Law enough to allow a super-intelligent robot to design a faster-than-light interstellar transportation method, even though it causes the deaths (but only temporarily!) of human pilots. In “The Evitable Conflict,” the machines that control the world’s economy interpret the First Law as protecting all humanity, not just individual human beings. This foreshadows Asimov’s later introduction of the “Zeroth Law” that can supersede the original three, potentially allowing a robot to harm a human being for humanity’s greater good.

0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.

Robots without ethics

Asimov’s laws are in a particular order, for good reason. Randall Monroe/xkcd CC-by-NC

It is reasonable to fear that, without ethical constraints, robots (or other artificial intelligences) could do great harm, perhaps to the entire human race, even by simply following their human-given instructions.

The 1991 movie “Terminator 2: Judgment Day” begins with a well-known science fiction scenario: an AI system called Skynet starts a nuclear war and almost destroys the human race. Deploying Skynet was a rational decision (it had a “perfect operational record”). Skynet “begins to learn at a geometric rate,” scaring its creators, who try to shut it down. Skynet fights back (as a critical defense system, it was undoubtedly programmed to defend itself). Skynet finds an unexpected solution to its problem (through creative problem solving, unconstrained by common sense or morality).

Catastrophe results from giving too much power to artificial intelligence.

Less apocalyptic real-world examples of out-of-control AI have actually taken place. High-speed automated trading systems have responded to unusual conditions in the stock market, creating a positive feedback cycle resulting in a “flash crash.” Fortunately, only billions of dollars were lost, rather than billions of lives, but the computer systems involved have little or no understanding of the difference.

Toward defining robot ethics

While no simple fixed set of mechanical rules will ensure ethical behavior, we can make some observations about properties that a moral and ethical system should have in order to allow autonomous agents (people, robots or whatever) to live well together. Many of these elements are already expected of human beings.

These properties are inspired by a number of sources, including the Engineering and Physical Sciences Research Council (EPSRC) Principles of Robotics, neuroscience, social psychology, developmental psychology and philosophy.

The EPSRC takes the position that robots are simply tools, for which humans must take responsibility. At the extreme other end of the spectrum is the concern that super-intelligent, super-powerful robots could suddenly emerge and control the destiny of the human race, for better or for worse. The following list defines a middle ground, describing how future intelligent robots should learn, like children do, how to behave according to the standards of our society.

  • If robots (and other AIs) increasingly participate in our society, then they will need to follow moral and ethical rules much as people do. Some rules are embodied in laws against killing, stealing, lying and driving on the wrong side of the street. Others are less formal but nonetheless important, like being helpful and cooperative when the opportunity arises.
  • Some situations require a quick moral judgment and response – for example, a child running into traffic or the opportunity to pocket a dropped wallet. Simple rules can provide automatic real-time response, when there is no time for deliberation and a cost-benefit analysis. (Someday, robots may reach human-level intelligence while operating far faster than human thought, allowing careful deliberation in milliseconds, but that day has not yet arrived, and it may be far in the future.)
  • A quick response may not always be the right one, which may be recognized after feedback from others or careful personal reflection. Therefore, the agent must be able to learn from experience including feedback and deliberation, resulting in new and improved rules.
  • To benefit from feedback from others in society, the robot must be able to explain and justify its decisions about ethical actions, and to understand explanations and critiques from others.
  • Given that an artificial intelligence learns from its mistakes, we must be very cautious about how much power we give it. We humans must ensure that it has experienced a sufficient range of situations and has satisfied us with its responses, earning our trust. The critical mistake humans made with Skynet in “Terminator 2” was handing over control of the nuclear arsenal.
  • Trust, and trustworthiness, must be earned by the robot. Trust is earned slowly, through extensive experience, but can be lost quickly, through a single bad decision.
  • As with a human, any time a robot acts, the selection of that action in that situation sends a signal to the rest of society about how that agent makes decisions, and therefore how trustworthy it is.
  • A robot mind is software, which can be backed up, restored if the original is damaged or destroyed, or duplicated in another body. If robots of a certain kind are exact duplicates of each other, then trust may not need to be earned individually. Trust earned (or lost) by one robot could be shared by other robots of the same kind.
  • Behaving morally and well toward others is not the same as taking moral responsibility. Only competent adult humans can take full responsibility for their actions, but we expect children, animals, corporations, and robots to behave well to the best of their abilities.

Human morality and ethics are learned by children over years, but the nature of morality and ethics itself varies with the society and evolves over decades and centuries. No simple fixed set of moral rules, whether Asimov’s Three Laws or the Ten Commandments, can be adequate guidance for humans or robots in our complex society and world. Through observations like the ones above, we are beginning to understand the complex feedback-driven learning process that leads to morality.

Disclosure statement

Benjamin Kuipers is primarily a professor. He spends a small amount of time as an advisor, for which he receives a small amount of money and stock. He hopes that they (like other readers) will benefit intellectually from this article, but recognizes that they are unlikely to benefit financially. He has received a number of research grants from government and industry, none directly on this topic. He is a member of several professional organizations, including the Association for the Advancement of Artificial Intelligence (AAAI). He has also taken public positions and signed statements opposing the use of lethal force by robots, and describing his own decision not to take military funding for his research.

This article was originally published on The Conversation. Read the original article.

What is a CRO and will your business soon need one?


The times are changing! What seemed like science fiction a few years ago is now on the front page of the news. Augmented reality, wearables, virtual assistants, robots, and other smart devices are beginning to permeate our daily lives. Amazon Prime Air, the e-commerce giant’s new retail drone delivery system, is ready to launch but is waiting on regulatory approval from the U.S. Federal Aviation Administration. If you have not seen the latest video of their system in action then it’s an eye-opener on how far automation has come.

Turning from the skies to the road, robotics is set to revolutionize how we drive. Self-driving features are already available in certain models, like Teslas, with fully automated vehicles expected to be ready for purchase in the next few years. In fact, some forecasts estimate that 10 million of these cars will be on the road by 2020. Who would have thought?

There’s no other way to state it: the business impacts of automation in the next decade will be profound. Market intelligence firm Tractica estimates that the global robotics market will grow from $28.3 billion worldwide in 2015 to $151.7 billion by 2020. What’s especially significant is that this market share will encompass most non-industrial robots, including segments like consumer, enterprise, medical, military, UAVs, and autonomous vehicles.

But even more impactful than CAGR numbers and market projections around robotics are the implications for jobs. People are growing increasingly concerned about the ways automation will change their work and careers, and rightfully so. Gartner Research, one of the world’s leading voices in technology trends, has declared that the smart machine era will be the most disruptive in the history of IT. Yet a 2013 Gartner survey found that sixty percent of CEOs believe the emergence of smart machines capable of taking away millions of middle-class jobs within the next 15 years is a “futuristic fantasy.” Be that as it may, this new era of robotics is going to dramatically change the nature of work – along with the roles and functions of business.

And this is where things get really interesting!

According to Remy Glaisner, CEO and founder of Myria Research, the coming robotics revolution will significantly impact the C-suite of business. As Robotics and Intelligent Operational Systems (RIOS) technologies scale up, companies will require more structured and strategic approaches to managing the implications of this global transformation on their verticals.

Enter the CRO or Chief Robotics Officer. In a whitepaper dedicated to this topic, Glaisner spells out the role and function of a CRO:

The Chief Robotics Officer plans, directs, organizes and manages all activities of the RIOS (Robotics & Intelligent Operational Systems) department to ensure the effective, secure and efficient use of all RIOS solutions and applications. These efforts must be accomplished in partnership with other business units (IT, Finance, Engineering, R&D, Operations, HR, Business Development, et al) and increasingly with senior management and the Board of Directors. CROs must have significant vertical industry knowledge so that they can better consider the evolution of RIOS solutions in existing and future functions and processes.

The anticipated effects of this new enterprise transformation in business and technology will be fascinating, if not a bit staggering, and suggest that we truly are living in unprecedented times.

Back in January of 2007, Bill Gates famously declared robots as the “next big thing” and placed the industry at the same place as the PC market in the late 1970s. Perhaps this prediction was a bit premature at the time since breakthroughs in mobile, cloud, and Big Data were just beginning. But now a decade later, things are much different; technology has reached an inflection point. Everything about the market suggests that it’s finally happening – that robots really are about to go mainstream.

So the question now becomes simple: How will you adapt and adjust to the global disruption caused by robots and automation in the next 5-10 years? What will your company do about this sea change? Now is the time to plan and pivot in order to avoid falling behind this technology curve. As Glaisner predicts, within the next decade “over 60% of manufacturing, logistics & supply chain, healthcare, agro-farming, and oil/gas/mining companies part of the Global 1000 will include a Chief Robotic Officer (CRO) and associated staff as part of their organization.”

The age of robotics is truly upon us. Will you be ready?

An alternative to specific regulations for robocars: A liability doubling

An image of some connected autonomous cars

Can our emotional fear of machines, and the call for premature regulation, be mollified by a temporary increase in liability which takes the place of specific regulations to keep people safe?

So far, most new automotive technologies, especially ones that control driving such as autopilot, forward collision avoidance, lane keeping, anti-lock brakes, stability control and adaptive cruise control, have not been covered by specific regulations. They were developed and released by vendors, sold for years or decades, and when (and if) they got specific regulations, those often took the form of ‘electronic stability control is so useful, we will now require all cars to have it.’ It has worked reasonably well.

Just because there are no specific regulations for these things does not mean they are unregulated. There are rafts of general safety regulations on cars, and the biggest deterrent to the deployment of unsafe technology is the liability system and the huge cost of recalls. As a result, while there are exceptions, most carmakers are safety paranoid to a rather high degree — just because of liability. At the same time, they are free to experiment and develop new technologies. Specific regulations tend to come into play when it becomes clear that automakers are doing something dangerous, and that they won’t stop doing it because of the liability. In part, this is because today it’s easy to assign blame for accidents to drivers, and often harder to assign it to a manufacturing defect, or to a deliberate design decision.

The exceptions, like GM’s famous ignition switch problem, arise because of the huge cost of doing a recall for a defect that will have rare effects. Companies are afraid of having to replace parts in every car they made when they know they will fail — even fatally — just one time in a million. The one person killed or injured does not feel like one in a million, and our system pushes the car maker (and thus all customers) to bear that cost.

I wrote an article on regulating robocar safety in 2015; this post expands on some of those ideas.

Robocars change some of this equation. First of all, in robocar accidents, the car manufacturer (or maker of the driving system) is going to be liable by default. Nobody else really makes sense, and indeed some companies, like Volvo, Mercedes and Google, have already accepted that. Some governments are talking about declaring it, but frankly, it could never be any other way. Making the owner or passenger liable is technically possible, but do you want to ride in an Uber where you have to pay if it crashes for reasons that have nothing to do with you?

Due to this, the fear of liability is even stronger for robocar makers.

Robocar failures will almost all be software issues. As such, once fixed, they can be deployed for free; the logistics of the “recall” will cost nothing. GM would have no reason not to send out a software update once they found a problem; they would be crazy not to. Instead, there is the difficult question of what to do between the time a problem is discovered and a fix has been declared safe to deploy. Shutting down the whole fleet is not a workable answer; it would kill deployment of robocars if, several times a year, all robocars stopped working.

In spite of all this history, and the prospect of it getting even better, a number of people — including government regulators — think they need to start writing robocar safety regulations today, rather than 10-20 years after the cars are on the road, as has been traditional. This desire is well-meaning and understandable, but it’s actually dangerous, because it will significantly slow down the deployment of safety technologies that will save many lives by making the world’s second most dangerous consumer product safer. Regulations and standards generally codify existing practice and conventional wisdom. They are a bad idea with emerging technologies, where developers are coming up with entirely new ways to do things and entirely new ways to be safe. The last thing you want to tell vendors is that they must apply old-world thinking when they can come up with much better ways of thinking.

Sadly, there are groups who love old-world thinking, namely the established players. Big companies start out hating regulation but eventually come to crave it, because it mandates the way they already do things, and they understand how to navigate the law. This stops upstarts from figuring out how to do it better, and established players love that.

The fear of machines is strong, so something else may be needed to satisfy two competing desires: the public’s desire to feel the government is keeping these scary new robots from being unsafe, and developers’ need for unconstrained innovation. I have no desire to satisfy the wish to protect old ways of doing things.


One option would be a temporary rule: for accidents caused by robocar systems, the liability, if the system is at fault, shall be double what it would be if a similar accident were caused by driver error. (Punitive damages for willful negligence would not be governed by this rule.) We know the cost of accidents caused by humans; we all pay for it with our insurance premiums, at an average rate of about 6 cents/mile. This rule would double that cost, pushing vendors to make their systems at least twice as safe as the average human driver in order to match that insurance cost.
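The arithmetic behind that claim can be sketched in a few lines. This is a back-of-the-envelope model, not an actuarial one: the 6 cents/mile figure comes from the article, while the function name and the idea of a single "relative safety" factor are illustrative assumptions.

```python
# Back-of-the-envelope model of the temporary liability-doubling rule.
# The 6 cents/mile average comes from the article; everything else here
# (function name, the single "relative safety" factor) is illustrative.

HUMAN_LIABILITY_PER_MILE = 0.06  # average human accident liability, $/mile

def robocar_liability_per_mile(relative_safety: float,
                               multiplier: float = 2.0) -> float:
    """Expected liability cost per mile for a robocar system.

    relative_safety: how many times safer than the average human driver
                     the system is (2.0 = half the accident rate).
    multiplier:      the temporary liability multiplier (2.0 = doubled damages).
    """
    return HUMAN_LIABILITY_PER_MILE * multiplier / relative_safety

# A system exactly as safe as a human pays double per mile under the rule:
assert abs(robocar_liability_per_mile(1.0) - 0.12) < 1e-9
# A system twice as safe matches the human insurance cost, as argued above:
assert abs(robocar_liability_per_mile(2.0) - 0.06) < 1e-9
```

In other words, the doubled multiplier and the required safety factor cancel exactly at "twice as safe," which is the bar the rule sets.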

Victims of these accidents (including hapless passengers in the vehicles) would now be doubly compensated. Sometimes no compensation is enough, but for better or worse, we have settled on values, and doubling them is not a bad deal. Creators of systems would have a higher bar to reach, and the public would know it.

While doubling the cost is a high price, I think most system creators would accept this as part of the risk of a bold new venture. You expect those to cost extra as they get started. You invest to make the system sustainable.

Over time, the liability multiplier would decrease, and eventually the rule would go away entirely. I suspect that might take about a decade. The multiplier does present a barrier to entry for small players, and we don’t want something like that around for too long.

Soft-bodied robots: Actuators inspired by muscle

In this image, VAMPs are shown actuated and cut open in cross section. The cross section shows the inner chambers that collapse when vacuum is applied. Credit: Wyss Institute at Harvard University.

To make robots more cooperative and have them perform tasks in close proximity to humans, they must be softer and safer. A new actuator developed by a team led by George Whitesides, Ph.D. — who is a Core Faculty member at Harvard’s Wyss Institute for Biologically Inspired Engineering and the Woodford L. and Ann A. Flowers University Professor of Chemistry and Chemical Biology in Harvard University’s Faculty of Arts and Sciences (FAS) — generates movements similar to those of skeletal muscles, using vacuum power to actuate soft rubber beams.

Like real muscles, the actuators are soft, shock absorbing, and pose no danger to their environment or humans working collaboratively alongside them or the potential future robots equipped with them. The work was reported June 1 in the journal Advanced Materials Technologies.

“Functionally, our actuator models the human bicep muscle,” said Whitesides, who is also a Director of the Kavli Institute for Bionano Science and Technology at Harvard University. “There are other soft actuators that have been developed, but this one is most similar to muscle in terms of response time and efficiency.”

Whitesides’ team took an unconventional approach to its design, relying on vacuum to decrease the actuator’s volume and cause it to buckle. While conventional engineering would consider buckling to be a mechanical instability and a point of failure, in this case the team leveraged this instability to develop VAMPs (vacuum-actuated muscle-inspired pneumatic structures). Whereas previous soft actuators rely on pressurized systems that expand in volume, VAMPs mimic true muscle because they contract, which makes them an attractive candidate for use in confined spaces and for a variety of purposes.

The actuator — comprising soft rubber, or ‘elastomeric’, beams — is filled with small, hollow chambers of air, like a honeycomb. When vacuum is applied, the chambers collapse and the entire actuator contracts, generating movement. The internal honeycomb structure can be custom tailored to enable linear, twisting, bending, or combinatorial motions.
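As a toy illustration of that mechanism (not from the paper — the linear collapse assumption and every number below are invented), the contraction can be thought of as the summed collapse of the internal chambers:

```python
# Toy model of a VAMP-style actuator: the beam shortens as its internal
# chambers collapse under vacuum. Purely illustrative; the linear
# vacuum-to-collapse relation and all numbers are invented assumptions.

def contraction(rest_length_mm: float, chamber_fraction: float,
                vacuum_level: float) -> float:
    """Contracted length of the beam in mm.

    rest_length_mm:   beam length with no vacuum applied.
    chamber_fraction: fraction of the beam's length made of collapsible
                      chambers (0..1).
    vacuum_level:     0.0 = no vacuum, 1.0 = chambers fully collapsed,
                      assumed linear in between.
    """
    collapsed = rest_length_mm * chamber_fraction * vacuum_level
    return rest_length_mm - collapsed

# A hypothetical 100 mm beam that is 25% chamber by length:
assert contraction(100.0, 0.25, 1.0) == 75.0   # full vacuum: fully contracted
assert contraction(100.0, 0.25, 0.0) == 100.0  # no vacuum: no movement
```

The point of the sketch is the design insight, not the numbers: because actuation comes from removing air rather than adding it, the device can never exceed its rest volume, which is why it suits confined spaces and fails safely.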

VAMPs are functionally modeled after the human bicep, similar to the biological muscle in terms of response time and efficiency. Credit: Wyss Institute at Harvard University

“Having VAMPs built of soft elastomers would make it much easier to automate a robot that could be used to help humans in the service industry,” said the study’s first author Dian Yang, who was a graduate researcher pursuing his Ph.D. in Engineering Sciences at Harvard during the time of the work, and is now a Postdoctoral Researcher.

The team envisions that robots built with VAMPs could be used to assist the disabled or elderly, to serve food, to deliver goods, and to perform other tasks related to the service industry. What’s more, soft robots could make industrial production lines safer and faster, and make quality control easier to manage, by enabling human operators to work in the same space.

Although a complex control system has not yet been developed for VAMPs, this type of actuation is easy to control due to its simplicity: when vacuum is applied, VAMPs contract. They could be used as part of a tethered or untethered system, depending on environmental or performance needs. Additionally, VAMPs are designed to resist failure: the team showed that VAMPs still function even when punctured with a 2 mm hole. In the event of major damage to the system, it fails safely.

“It can’t explode, so it’s intrinsically safe,” said Whitesides.

Here, a VAMP lifts a 500 gram weight with ease. Credit: Wyss Institute at Harvard University

Whereas other actuators powered by electricity or combustion could cause damage to humans or their surroundings, loss of vacuum pressure in VAMPs would simply render the actuator motionless.

“These self-healing, bioinspired actuators bring us another step closer to being able to build entirely soft-bodied robots, which may help to bridge the gap between humans and robots and open entirely new application areas in medicine and beyond,” said Wyss Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Boston Children’s Hospital Vascular Biology Program, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).

In addition to Whitesides and Yang, other authors on the study included: Mohit S. Verma, Ph.D.,(FAS); Ju-Hee So, Ph.D., (FAS); Bobak Mosadegh, Ph.D., (Wyss, FAS); Christoph Keplinger, Ph.D., (FAS); Benjamin Lee (FAS); Fatemeh Khashai (FAS); Elton Lossner (FAS), and Zhigang Suo, Ph.D., (SEAS, Kavli Institute).

May fundings, acquisitions and IPOs


UPDATE: June 1, 2016: Forbes wrote today that Toyota is in discussions with Google not only for Boston Dynamics but also for Schaft, the Japanese startup that won the DARPA Robotics Challenge — a two-company sale.

May was another big month for robotics – 13 companies were funded to the tune of $111 million. Four companies were acquired with 2 of the 4 reporting selling prices totaling $422 million. And that’s without the $5.2 billion bid for Kuka by Chinese Midea, or the pending sale of Google’s Boston Dynamics.

The financial pages are lighting up over recent stories about these big-money sales. First there was the $5.2 billion offer by Midea Group, a Chinese appliance manufacturer, for Kuka AG, the Augsburg, Germany-based manufacturer of robots and automated systems. Kuka is one of the Big Four of robot manufacturers. On the day of the bid, Kuka’s stock rose from $84/share to $110 where it’s stayed since.

Then came the announcement by Tech Insider that the Toyota Research Institute is in the final phase of negotiations to acquire Google’s robotics company Boston Dynamics, of BigDog fame. Boston Dynamics spun out of the MIT Leg Lab in 1992 and worked on various military and DARPA-funded research projects until Google’s Andy Rubin acquired the company, along with 8 other robotics companies. Boston Dynamics never quite adapted to Google and Google’s push to build a consumer robot, hence their being put on the block in March 2016.

From Forbes, news of a new fund focusing on robotics: Chrysalix VC, a Vancouver, BC venture capital group focused on alternative energy, has partnered with Dutch robotics commercialization center RoboValley to create a new VC fund focused on robotics. The vehicle is targeting €100 million.

Below are the fundings, acquisitions, IPOs and failures that actually happened in May:

Fundings

  1. Locus Robotics raised $8 million in a Series A funding from existing seed investors. The funds will be used to expand product development and general marketing of Locus’ novel material handling robots. Locus is a Massachusetts-based company founded specifically in answer to Kiva Systems’ robots being taken in-house by Amazon and no longer being available to non-Amazon clients. Locus’ founder, Bruce Welty, is a distribution center owner who used Kiva robots and, as a consequence of Amazon’s actions, had no recourse other than to build a company of his own. Locus provides a fleet of robots, integrated into current warehouse management systems, that carry picked items to a conveyor or to the packing station, thereby reducing human walking distances and improving overall picking efficiency.
  2. Gamaya, a Swiss aerial analytics spin-off from EPFL, raised $3.2 million in a Series A funding. Funds will be used to develop their new 40-band hyperspectral imaging sensor and analytics software platform (traditional multi-spectral sensors have 4 bands).
  3. Hortau is a California soil moisture monitoring company which raised $10 million to grow and broaden their new system of networked field sensors, weather stations and control units allowing growers to remotely open and close valves and fire up engines for irrigation from cloud-based management software.
  4. nuTonomy is a Cambridge-based start-up that raised $16 million in a Series A round of funding from a group of Singapore and US VCs. This is in addition to the $3.6 million raised in January, which included funds from Ford Chairman Bill Ford. nuTonomy is planning to launch a fleet of autonomous taxis in Singapore by 2019 and begin testing later this year. nuTonomy is using retrofitted Mitsubishi electric cars and plans to add Renault EVs later this year.
  5. Mazor Surgical Technologies, an Israeli company, has sold $11.9 million of its stock, 4% of its shares, to Medtronic, a global medical technology, services, and solutions provider, with a performance agreement to sell another 6% of Mazor shares for up to $20 million. An additional clause of the agreement kicks in if performance milestones are met, whereby Mazor can issue an additional 5% of new shares for an additional $20 million from Medtronic. Details of the deal are here.
  6. Dedrone GmbH, the German startup behind the DroneTracker drone detection platform, raised $10 million in a Series A funding from a group of EU and Silicon Valley VCs. In just 15 months, Dedrone has grown to more than 40 employees and 100 distributors in over 50 countries.
  7. Astrobotic Technology, the CMU spin-off company working on delivering payloads to the moon, raised $2.5 million from Space Angels Network. Astrobotic has 10 projects with governments, companies, universities, non-profits, NASA, and individuals for their first moon mission.
  8. MegaBots, an Oakland, CA entertainment startup, has raised $2.4 million in seed funding to bring robot-fighting to a venue near you. MegaBots plans to use the seed funding to build their robot for the fight against the Japanese team they’ve challenged; and to secure sponsorships, perhaps even a TV contract for a program that tracks the team from building the robots to competing.
  9. Zipline International, a San Francisco startup, raised $800k from UPS and $18 million from Yahoo founder Jerry Yang, Microsoft co-founder Paul Allen and others to develop their small robot airplane designed to carry vaccines, medicine and blood to remote areas where health workers place text orders for what they need.
  10. Cyberhawk Innovations raised $2.9 million in financing to enable UK-based Cyberhawk to expand its commercial development of the drone-captured data inspection market for the oil & gas industry and infrastructure markets.
  11. Eonite Perception, a Silicon Valley vision systems startup, raised $5.25 million in a seed round from multiple Silicon Valley VCs. Eonite is building a 3D mapping and tracking system for the virtual reality marketplace using low latency dense depth sensors.
  12. eyeSight Technologies, an Israeli vision systems startup, received $20 million from a Chinese VC group, for its vision system of sensing, gesture recognition and user awareness to be embedded into consumer products.
  13. AIO Robotics is a Los Angeles startup developing an all-in-one 3D printer and scanner with an onboard CAD and modeling system. AIO received an undisclosed amount of seed funding.

Acquisitions

  1. 5D Robotics, a San Diego area integrator of unmanned and mobile robotics using ultra-wide band (5D) communications, acquired Aerial MOB, a drone aerial cinematography startup, for an undisclosed sum. The acquisition has led to the formation of the 5D Aerial division which will provide 3D mapping, photogrammetry, thermal and multi-spectral imagery data to vertical markets including oil and gas, utilities and construction.
  2. Dematic, a global supplier of AGVs and materials handling technology, acquired (in March) NDC Automation, an AGV manufacturer in Australia and New Zealand, for an undisclosed amount.
  3. Voith GmbH, a family-owned German group of industrial and engineering companies, has sold 80% of its industrial services unit to buyout group Triton Partners for $342 million to free up capital for planned investments. Voith has a 25.1% share of Kuka’s stock which, if the $5.2 bn Midea offer passes, will be worth close to 40% more than the share value the day before the offer. According to Forbes, Voith ranks 200th in global family-owned businesses with revenue of $7.5 bn and 43,000 employees.
  4. ChemChina and a group of other investors including Chinese state funds, acquired Germany’s KraussMaffei Automation, an industrial robot integrator and plastics, carbon fiber, and rubber processor, for $1 billion – in January.

IPOs

  • None. Private placements and increased investment from hedge funds, mutual funds and via corporate acquisitions appear to have dried up the robotics IPO pipeline.
  • But Moley Robotics, a UK startup developing a cooking robot, is using the new equity crowdfunding rules passed last year to offer 2% of its shares via the Seedrs crowdfunding site. Details will be released soon to subscribers to the Moley and Seedrs websites.

Failures

  • RoboDynamics, a SoCal startup with a stylish mobile telepresence robot named Luna, has gone out of business.

Winning team story: The ups and downs of building a drilling robot for the Airbus Shopfloor Challenge

Felix von Drigalski, leader of Team NAIST, tells us about their robot design, unforeseen challenges, and working together to find solutions while preparing for the Airbus Shopfloor Challenge at ICRA in Stockholm, Sweden.

How did you come up with the initial robot concept?

When we started talking about the contest back in January, I had the basic idea in my head fairly quickly: something that would make contact with the plate to stabilize the drill. The main inspiration was taken from the intuitive experience that it is hard to hold a position with an outstretched arm. Humans solve this problem by resting their hand when performing precision tasks such as writing, so we carried this idea over to the robot.

In mechanical terms, resting your hand reduces the amount of load-bearing structure between your tool and the workpiece. The kinematic chain is shorter and the flow of forces more direct. As such, there are fewer things that can vibrate and the stiffness is higher. And we knew that high stiffness would be key to drilling quality holes.

Further, to keep with the analogy, resting your hand while writing almost removes your arm from the process. You write with your hand, and not your arm. Similarly, in our solution the function “Drilling” would be fulfilled by the end effector, and the function “Positioning” only by the arm. If there is one thing that remains firmly hammered into my head after my engineering classes, it’s separation of function.

With our design goal formulated, we soon converged on the three-pronged frame with a separate pneumatic actuator. Instead of having all the motors in the robot arm actively compensate for the forces from the drilling, the end effector would transmit them right back to the workpiece and the arm would do nothing but move the end effector and hold it in place.


In our final design, the robot applies force during the drilling process to keep the end effector stationary. Ideally though, the robot would only be responsible for positioning. We were planning to take the next step to transfer the load off the robot and onto the end effector and the workpiece, but sadly, could not get the necessary parts to Japan in time, and simplified the idea in late April.

Everything else was mostly decided in order of simplicity. We already had the robot arm and small, powerful cameras in the lab, so it was clear we would use those. A lot of the code originally comes from our research projects, which we tried to stay close to (for example, the vision system is repurposed eye-tracking code).

As an anecdote: another of the original ideas was to fine-tune the position of the drill using the end effector. We were basically thinking of using a whole robot hand to adjust the position of the drill. However, if the arm can ensure the positioning on its own, there is no advantage to adding that functionality to the end effector, so we dropped it in favor of the fixed rods with springs (rubber) on the feet. Just another lesson to be learned from Separation of Function, the benevolent goddess of good design!

What sort of prep work went into the project prior to the competition?

After fleshing out the main idea between January and March, we decided to go for it and write a solid application. We spent a long weekend 3D printing a proof-of-concept, cutting together this flashy video and writing our application in research paper format — since that’s what grad students do. When we got the OK from Airbus to compete on March 27, I doubt we realized how much time would be spent on the project until we started our weekly meetings with our Airbus contact.

It turned out that our biggest problem was logistics. I would estimate a good 40% of our time was spent calling the airport, customs, shipping companies, university offices, and insurance companies for quotes and details about how to get our robot from Japan to Sweden. Shipping would take a whole week, a visiting PhD student needed the robot for his experiments, and with us having to work with the robot before the contest, we had to think creatively and find a workable solution fast.

The biggest issue was that our robot weighed 31 kg on its own, and no single piece of luggage can be over 32 kg in European airports. This left us just 1 kg to package the robot safely. In the end, we made a custom box by cutting a robot-shaped hole into boards of styrofoam with a soldering iron and gluing them together. We even sewed a bag for it.

Teammates Gustavo and Lotfi with the robot and resourceful packaging. Photo: Felix von Drigalski

This is a good moment to remember that this robot arm is easily worth $80,000!

With all the logistics issues, long delivery times, and Golden Week (a string of closely-packed public holidays in Japan) delaying them even more, the time we had to prepare the solution got shorter and shorter. We had parts arriving until the last week before we left, so we had to put everything together in a hurry. We ran the first full trial before the competition on Saturday morning, the day of our flight! It was ridiculously last minute.

Thinking of prep work: we needed our robot at different heights when the plate was inclined, but I designed our stand for the wrong height! I spent almost two months under the completely unfounded misconception that the plate would be inclined by 15° instead of 30°. This is why our stand looks completely different each round and why we had a new mechanical design problem to solve each time. As if that weren’t enough, we misread the distance between the rails and were off by 50 mm. This is why the stand always looks like a mess, on top of everything else!

What was it like competing with your robot? What have you learned in the manic process? Anything you’d do differently?

When we arrived with our limited preparation on Monday to set up the robot and saw the other teams, we didn’t have high hopes. Everyone looked so well-prepared and primed to win. At that point, we just wanted to deliver something solid and keep our heads held high. We freaked out at the thought of not making it through the demo round, so we dragged the robot back to the hotel and stayed up all night to make sure we would get into the contest.


After passing the demo round (with a bug that inverted our hole pattern; look closely at the interview video), we were so happy that we were actually drilling real holes in a real plate that we just kept drilling all through the first round, almost blind. We didn’t have a good way to recalibrate on the fly that day, so we had to accept that our positioning would be off and hope for the best. In fact, our holes were so far off the mark that two thirds of them gave us negative points and we got a terrible score. However, one of the judges said that the calibration was basically the only problem we had — the holes themselves were “absolutely perfect”. That made us perk up.

We had a look at the other plates. Everyone was having more problems with the drilling than we thought. Most were applying force on the drill by moving their robot arm, and the quality of their holes was suffering because of it. Our drilling process was by far the fastest and cleanest. My design worked. It really was only a problem of calibration and our code. That was the moment we gained hope and decided to take the robot back to the hotel for another grueling night of work.

The next day, we paid for our sleep deprivation: we spent the first half hour of round 2 with the robot not moving at all, until we figured out that we never put all the things we fixed during the night into our live code. After we finally did and finished the round with barely half the plate’s holes drilled, we thought we may well have missed our shot at entering the final. However, we absolutely wanted to prove to ourselves that we could at least properly drill a whole set and take a picture of it home, so we were hoping for a chance to maybe perform on a plate separately from the contest, if it came to that. When all the team leaders were informed that the last round would be between 4 teams and we were in it, I was the first to rush out of the room and prepare (and accidentally spill the news to the neighboring and positively amazing team from India, who were the only ones to drop out that round).

In the end, we had more arcane bugs to sort out during the final and we didn’t get to drill through the entire hole pattern like we hoped. The first 10 minutes we weren’t drilling anything at all, and it was only during the last 20 minutes that we really were on point and drilling at (almost) full speed. It was also the first time I had the time to stand back and talk to spectators about what we were doing. Which is when our drill bit got sucked into the clamp with 3 minutes on the clock. Ah well, things never go as planned. We still managed to drill three more holes, although sadly not the damaged one the bit got stuck on. All of that stress fell off our shoulders when the round was called, and we could turn our attention to celebrate everything and everyone who powered through along with us.


All in all, the competition was a blast! Constant adrenaline in an incredible atmosphere. Probably the best thing was that there was not a shred of negativity anywhere, all of the teams were supportive of everyone’s efforts. Team Sirado even helped us troubleshoot KUKA software we had been struggling to install on our virtual machine the day before the competition! Everyone was simply great! Airbus made an excellent choice putting the celebration right after the final round instead of the awards. That allowed us to celebrate everyone’s efforts rather than our scores.

It was also an incomparable team-building experience. Although we had some stressful moments when our robot did not run in the second round, no one cracked or became emotional or accusatory — everyone stayed laser sharp and focused on solving the problem. We could not have done it without a heavy dose of team spirit.

In all that chaos, we learned our lessons about project planning and software development. Probably the worst mistake we made was working on the same code on multiple machines, with multiple sleep-deprived people, during the night, then trying to merge all those changes. To illustrate: there is a comment in our code history from 6 AM on Tuesday morning (the first day of the contest!!) that just reads “only a miracle can save us”, and our live branch is called “last-minute-firebrigade”. You can imagine the state we were in!

Photo courtesy: Felix von Drigalski

What advice would you give to any future robotic challengers out there?

I relay these from the team:

  • Do not underestimate logistics!
  • Don’t rely exclusively on software, robust hardware design is key!
  • Just try and have fun, don’t think it’s impossible without even trying!

And I would stress how much trying is worth it, no matter what. Winning is great and all, but it was just after the final, after close to 60 hours with almost no sleep and all of our holes drilled, that we felt the most relieved and accomplished. We didn’t spend a second thinking about scores or placements — we were just proud that we made something that worked, and we didn’t give up. Thinking up and developing your own solution, working together and seeing it come to life is one of the most satisfying and fulfilling experiences. Making your performance and efforts your motivation, rather than winning, will keep you going when things look dire.

Any closing words?

After the event, we kept getting amused comments about not having Japanese members in our Japanese team. It’s true we come from Germany, Mexico, Belgium and Ecuador, but studying and living in Japan has changed all of us in ways that we sometimes don’t even realize, as anyone who has left home to go abroad understands. As such, we have always said that our team consists of 4 people with 5 nationalities — just no one with a Japanese passport.

So, for the big closing words: If you’re an aspiring youngster reading this and you want to see the world, or if you’re in your undergrad and thinking of going on an exchange, or if you dream of living somewhere else — go for it! Every journey starts with the first step, and it is worth it.

From left to right: Felix von Drigalski, Lotfi El Hafi, Pedro Uriguen, and Gustavo Garcia.

Like this story? Read more about the Airbus Shopfloor Challenge here.

Robocar news around the globe: Tesla crash, Declaration of Amsterdam, and automaker services

WEpods: The first autonomous vehicle on Dutch public roads. Source: True Form/YouTube

We have the first report of a real Tesla Autopilot crash. To be fair to Tesla, their owner warnings specify very clearly that the Autopilot could crash in just this situation. In the video, there is a stalled car partly in the lane, and the car in front swerves around it, leaving little time for the driver, or the Autopilot, to react.

The deeper issue is the way that the improving quality of the Tesla Autopilot, and systems like it, is lulling drivers into a false sense of security. I have heard reports of people who now trust the Tesla system enough to work while being driven, and indeed, most people will get away with this. And as people get away with it more and more, we will see people driving like this driver, not really prepared to react. This is one of the reasons Google decided not to make a system that ever requires driver takeover. As the system gets better, does it get more dangerous?

Declaration of Amsterdam

Last month, various EU officials gathered in Amsterdam and signed the Declaration of Amsterdam, which outlines a plan for normalizing EU laws around self-driving cars. The meeting included a truck automation demo in the Netherlands and a self-driving transit shuttle demonstration. It’s a fairly bland document, more an expression of the times, and it sadly spends a lot of time on the red herring of “connected” vehicles and V2V/V2I, which governments seem to love and self-driving car developers care very little about.

Let’s hope the regulatory touch is light. The reality is that even the people building these vehicles can’t make firm pronouncements on their final form or development needs, so governments certainly can’t either; we must be careful that attempts to “help” do not instead hinder. We already have a number of examples of that happening in draft and real regulations, and we’ve barely gotten started. For now, government statements should be limited to, “let’s get out of the way until people start figuring out how this will actually work, unless we see somebody doing something demonstrably dangerous that can’t be stopped except through regulations.” Sadly, too many regulators and commentators imagine it should be, “let’s use our limited current knowledge to imagine what might go wrong and write rules to ban it before it happens.”

Speech from the throne

It was a sign of the times when Her Majesty the Queen, giving the speech from the throne in the UK parliament, laid out elements of self-driving car plans. The Queen drove jeeps during her military days, and so routinely drives herself at her country estates; otherwise she would be among the set of people most used to never driving.

The UK has 4 pilot projects in planning. Milton Keynes is underway, and later this year, a variation of the Ultra PRT pods in use at T5 of Heathrow airport — they run on private tracks to the car park — will go out on the open road in Greenwich. They are already signing up people for rides.

Car companies thinking differently

In deciding which car companies are going to survive the transition to robocars, one thing I look for is willingness to stop thinking like a traditional car company which makes cars and sells them to customers. Most car company CEOs have said they don’t plan to keep thinking that way, but what they do is more important than what they say.

In the past we’ve seen Daimler say it will use its Car2Go service (with the name Car2Come likely to cause giggles in the USA) as a way to sell rides rather than cars. BMW has said the same about DriveNow. (And now GM has said this about its partnership with Lyft.) Daimler is also promoting its moovel app, which tries to combine different forms of mobility, and BMW is re-launching DriveNow in Seattle as ReachNow, which adds peer-to-peer carsharing and other modes of transportation to the mix.

Of course, these are tiny efforts for these big companies, but it scores big over companies still thinking only in the old ways. I’m looking at you, most car companies.

BMW also announced that the iNext electric flagship sedan will offer self-driving in 2021.

Florida tells cities — think about self-driving

In 2010, I put out a call for urban planners to start thinking about robocars, and it mostly fell on deaf ears. Times are changing, and this month Florida told its cities they should include consideration of this and other future transit in their updated long-term plans.

It is a tough call. Nobody’s predictions about the future here are good enough to make a firm plan and commit billions of dollars. At the same time, we are starting to learn that certain plans — especially status quo plans — are almost certainly seriously wrong. We may not have a perfect idea on what to spend city money on, but we can start to learn what not to do.

Fortunately, transportation is becoming a digital technology, which means you can change your plans much faster than physical infrastructure plans. Robocars like bare pavement, as stupid as can be. The ‘smarts’ go in the cars, not in the roads or the cities. So if you change your mind, you just have to reprogram your cars, not rebuild your city. Which is good, because you can’t rebuild your city.

China gets in the game

China is the world’s number one car manufacturing country. A lot of people don’t know that. Last year I visited the Shanghai auto show, and it was a strange trip to walk through giant hall after giant hall of automakers you have never heard of before.

Self-driving car action in China has been slow. Right now everybody has a focus on just making regular cars for the rising middle class that is buying them as fast as they can. Wealthier Chinese usually buy foreign brands, although those cars are often made in China even though they have a VW or Buick nameplate.

Baidu has been working on cars for a couple of years and promises a pilot project in 2018. Recently, Chinese automaker Changan did an autopilot demo driving 1,000 miles. China has had its own annual academic version of the DARPA Grand Challenge for several years as well.

There is a strong chance that companies like Uber, Apple or Google, when they want to get their cars made, could go to Chinese manufacturers, now that those manufacturers are getting practice at building this technology. China is also a major source of electric vehicles, with future robotaxis likely to be small and electric. The manufacturers and suppliers with the most experience at making such vehicles are likely to be the winners.

Apple buys into Didi

Speaking of which, Apple just put a billion-dollar investment into Didi. You may not know Didi, but it is the dominant phone-hail service in China, with a much larger market share than Uber. It’s one of the few markets Uber has lost, but of course, it’s China. As the auto industry moves to being about selling rides rather than cars, it’s an interesting move by Apple, which rarely makes outside investments like this.

Book Review: ‘Peer Reviews in Software, A Practical Guide,’ by Karl Wiegers

Code review of a C++ program with an error found.

I have been part of many software teams where we desired to do code reviews. In most of those cases the code reviews did not take place, or were pointless and a waste of time. So the question is: how do you effectively conduct peer reviews in order to improve the quality of your systems?

This book, Peer Reviews in Software: A Practical Guide by Karl E. Wiegers, was recommended to me, and having “practical guide” in the title caught my attention — I have reviewed other books that claimed to be practical but were not. Hopefully this book will provide me (and you) with tools for conducting valuable code reviews.

Peer Reviews in Software: A Practical Guide. Photo: Amazon

As a human, I will make mistakes when programming; finding my mistakes is often difficult since I am very close to my work. How many times have you spent hours trying to find a bug in your code, only to realize you had a stray semicolon or parenthesis? Another person who has not worked on the code for hours might have been able to spot the problem right away. When I first started programming, it could be embarrassing to have somebody review my code and point out the problems. However, now that I am more senior, I do not view it as an embarrassment but as a learning opportunity, since everybody has a different set of experiences that influences their code. I would encourage other developers to view it as a learning experience and not be bashful about reviews. Remember, the person is critiquing the work, not you; this is how to become a better developer.
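As a hedged illustration of the kind of slip I mean (my own example, not one from the book), here is a C++ bug that compiles cleanly but is easy to stare past for hours — and that a fresh pair of eyes in a review tends to spot in seconds:

```cpp
#include <cassert>

// Buggy version: the stray semicolon after the condition terminates
// the if statement, so the return below executes unconditionally.
bool should_give_up_buggy(int retries) {
    if (retries > 3);   // <- the semicolon a reviewer would flag
        return true;
    return false;       // never reached
}

// What the author meant to write.
bool should_give_up(int retries) {
    return retries > 3;
}
```

The buggy version answers `true` even for `retries == 0`, and no amount of re-reading the condition will reveal why until someone notices the semicolon.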

According to Wiegers, there are many types of peer reviews, including: inspections, team reviews, walkthroughs, pair programming, peer deskcheck, passaround, and finally ad hoc review.

This book is divided into three sections:

  1. Cultural & social aspects
  2. Different types of reviews (with a strong focus on inspection)
  3. How to implement review process within your projects

Cultural & Social Aspects

In this first section of the book, the author makes the argument that quality work is not free and that “paying” the extra cost of peer reviews is a good investment in the future. By having peer reviews you can reduce failures, and the associated rework, before the product is released out into the world. Shifting defect detection to the early stages of a product has a huge potential payoff, due to the high cost of fixing defects found late in the release cycle, or after release. The space shuttle program found the relative cost of fixing a defect to be: $1 if found during initial inspection; $13 if found during a system test; and $92 if fixed after delivery! In the book, the author documents various companies that saved substantial amounts of time and money by having code inspection programs.
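To make that payoff concrete, here is a back-of-the-envelope sketch. The $1/$13/$92 relative costs are the space shuttle figures cited above; the defect counts in the usage comments are invented for illustration:

```cpp
#include <cassert>

// Relative cost of fixing one defect at each stage
// (space shuttle program figures cited in the book).
const int kCostInspection  = 1;   // found during initial inspection
const int kCostSystemTest  = 13;  // found during a system test
const int kCostPostRelease = 92;  // fixed after delivery

// Total relative cost of fixing `defects` defects at a given stage.
int fixCost(int defects, int stageCost) {
    return defects * stageCost;
}

// Relative savings from catching defects in inspection instead of
// letting them escape to the field.
int inspectionSavings(int defects) {
    return fixCost(defects, kCostPostRelease) - fixCost(defects, kCostInspection);
}
```

On these numbers, catching just 10 defects in inspection rather than after delivery is a relative saving of 910 units — which is why shifting detection early pays for the review time many times over.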

One thing I like is the reference to IEEE 1999, which talks about other items that are good to review. People don’t often think about it, but other things, such as marketing brochures, requirement specifications, user guides, test plans and many other artifacts, are good candidates for peer review.

I have seen many project teams try to do error tracking and/or code reviews but fail due to team culture. I saw one case where peer review actually worked: a dedicated person’s only job was to manage reliability in the project, and he was great at hounding people to track bugs and review code. This book discusses how team culture must be developed to value “quality”. If you are the type of person who does not want to waste time reviewing another’s code, remember that you will want the other person to “waste” time looking at your code. In this manner, we must all learn to scratch each other’s backs. There are also two traps to watch out for:

  1. Becoming lazy and submitting bad code for review since somebody else will find/fix it, or
  2. Trying to perfect your code before sharing it, in order to protect your ego from getting bruised, and to only show your best work.

We also cannot forget managers. Managers need to value quality and provide time and resources for employees to develop good practices. Managers need to understand that the point of these exercises is to find flaws, and that people should not be punished for those flaws. I have often seen managers not putting time in the schedule for good code reviews.

Types of reviews

Before discussing the types of reviews there is a good discussion on the guiding principles for reviews. Some of the principles are:

  • Check your egos at the door
  • Keep the review team small
  • Find problems during review, but don’t try to fix them at the review. Give up to 1 minute for discussion of fixes.
  • Limit review meeting to 2 hours max
  • Require advanced preparation

There are several types of peer reviews discussed in this book. This list starts with the least formal approach and builds to the most formal (the book uses the opposite order, which I found unintuitive).

  1. Ad Hoc – These are the spur of the moment meetings where you call a coworker to your desk to help with a small problem. Usually, this just solves an immediate problem. (This is super useful when trying to work out various coordinate transforms)
  2. Passaround/peer deskcheck – In this approach a copy of the work is sent to multiple reviewers, after which you collate all of the reviews. This allows multiple people to look at the code/item and also lets you get something even if one person does not respond. In the peer deskcheck version, only one person looks at it instead of passing it around for multiple reviews.
  3. Pair Programming – This is the idea that two people program together. While there is no official review, two sets of eyes see each line of code being typed. This has the added bonus that two people will now understand the code. The downside is that one of the coders can often “doze off” and not be effective at watching for flaws. Also, many coders might not like this.
  4. Walkthrough – This is where the author of the code walks a group of reviewers through the code. This is often unstructured and heavily dependent on how well the author prepared. In my experience this is good for helping people understand the code and finding large logic flaws, but not so much for finding small flaws/bugs.
  5. Team Review – This is similar to the walkthrough; however, reviewers are provided with the documentation/code in advance to review, and their results are collated.
  6. Inspection – Finally, we have the most formal approach which the author appears to favor. In this approach the author of the code does not lead the review, rather, a moderator, often with the help of checklists, will lead the meeting and read out the various sections. After the moderator reads a section, the reviewers discuss it. The author of the code can answer questions and learn how to improve various sections. Often the author might identify other instances of the same problem that the reviewers did not point out. An issue log should be maintained as a formal way of providing feedback and a list to verify fixes against.
Suggested review methods from “Peer Reviews in Software”
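The issue log mentioned under inspections is really just a small, disciplined data structure. As a sketch of what one might hold (the field names are my own, not prescribed by the book):

```cpp
#include <string>
#include <vector>

// One entry in an inspection issue log (illustrative fields only).
struct Issue {
    std::string location;         // e.g. file and line where the defect was found
    std::string description;      // what the reviewers observed
    bool verified_fixed = false;  // set by the moderator after re-checking the fix
};

// Number of issues the moderator has not yet verified as fixed —
// the list the author works down after the inspection meeting.
int openIssues(const std::vector<Issue>& log) {
    int open = 0;
    for (const auto& issue : log) {
        if (!issue.verified_fixed) {
            ++open;
        }
    }
    return open;
}
```

The point of the formal log is exactly this verification loop: the meeting ends with every defect recorded, and the inspection is not closed until the open count reaches zero.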

The book then spends the next few chapters detailing the inspection method of peer review. Here are just a few notes. As always, read the book to find out more.

  • In most situations, 3–7 people is a good group size for the inspection. The number of people can be based on the item being reviewed.
  • The review needs to be planned in advance, with time to prepare content to distribute to reviewers.
  • After the meeting the author should address each item in the issue log that was created and submit it to the moderator (or other such person) to verify that the solutions are good.
  • Perform an inspection when that module is ready to pass to the next development stage. Waiting too long can leave a person with a lot of bad code that is now too hard to fix.
  • You can (sort of) measure the ROI by looking at the bugs found and how long they took to find. There are many other metrics detailed in the book.
  • Keep spelling and grammar mistakes on a separate sheet, not on the main issue list.

How to implement review processes within your projects

Getting a software team and management to change can be difficult. The last part of this book is dedicated to how you can get reviews started, and how to let them naturally grow within the company. One significant thing identified is to have a key senior person act as a coordinator for building a culture of peer review and to provide training to developers. There is a nice long table in the book of the various pitfalls that an organization may encounter and how to deal with them.

This book also discusses special challenges and how they can affect your review process. Some of the situations addressed are:

  • Large work products
  • Geographic or time separation
  • Distributed reviewers
  • Asynchronous review
  • Generated and non-procedural code
  • Too many participants
  • No qualified reviewers available

At the end of this book, there is a link to supplemental material online. I was excited to see this; however, when I went to the site, I saw it was all for sale and not free (most things were around $5). That kind of burst my bubble of excitement for the supplemental material. There is a second website referenced for the book, but it no longer seems to be valid.

Throughout the book, the author stresses getting proper training for people on how to conduct inspection reviews. Towards the end of the book, hiring the book’s author as a trainer to help with this training is suggested.

Overall, I think this is a good book. It introduces readers to how to do software reviews. The use of graphics and tables in the book is quite good. It is practical and easy to read. I also like how this book addresses management and makes the business case for peer reviews. I give this book 4.5 out of 5 stars. The missing 0.5 stars is due to the supplemental material not being free and the forms not being provided with the book.

Disclaimer: I do not know the book author. I purchased this book myself from Amazon.
