Distributed planning, communication, and control algorithms for autonomous robots make up a major area of research in computer science. But in the literature on multirobot systems, security has received relatively short shrift.
In the latest issue of the journal Autonomous Robots, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and their colleagues present a new technique for preventing malicious hackers from commandeering robot teams’ communication networks. The technique could provide an added layer of security in systems that encrypt communications, or an alternative in circumstances in which encryption is impractical.
“The robotics community has focused on making multirobot systems autonomous and increasingly more capable by developing the science of autonomy. In some sense we have not done enough about systems-level issues like cybersecurity and privacy,” says Daniela Rus, an Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper.
“But when we deploy multirobot systems in real applications, we expose them to all the issues that current computer systems are exposed to,” she adds. “If you take over a computer system, you can make it release private data — and you can do a lot of other bad things. A cybersecurity attack on a robot has all the perils of attacks on computer systems, plus the robot could be controlled to take potentially damaging action in the physical world. So in some sense there is even more urgency that we think about this problem.”
Identity theft
Most planning algorithms in multirobot systems rely on some kind of voting procedure to determine a course of action. Each robot makes a recommendation based on its own limited, local observations, and the recommendations are aggregated to yield a final decision.
A natural way for a hacker to infiltrate a multirobot system would be to impersonate a large number of robots on the network and cast enough spurious votes to tip the collective decision, a technique called “spoofing.” The researchers’ new system analyzes the distinctive ways in which robots’ wireless transmissions interact with the environment, to assign each of them its own radio “fingerprint.” If the system identifies multiple votes as coming from the same transmitter, it can discount them as probably fraudulent.
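A toy sketch (not the paper's actual algorithm) can show why spoofing works against naive vote aggregation, and how collapsing votes by radio fingerprint blunts it:

```python
from collections import Counter

def aggregate_votes(votes):
    """Majority vote over (fingerprint, choice) pairs.

    A spoofer can inject many votes, but all of its transmissions
    share one radio fingerprint, so a fingerprint-aware tally
    collapses them to a single vote.
    """
    # Naive tally: every received vote counts once.
    naive = Counter(choice for _, choice in votes).most_common(1)[0][0]
    # Fingerprint-aware tally: one vote per distinct transmitter.
    per_tx = {fp: choice for fp, choice in votes}  # last vote per fingerprint
    robust = Counter(per_tx.values()).most_common(1)[0][0]
    return naive, robust

# Three honest robots vote "A"; one attacker spoofs five "B" votes.
votes = [("fp1", "A"), ("fp2", "A"), ("fp3", "A")] + [("fpX", "B")] * 5
naive, robust = aggregate_votes(votes)
# The naive tally is swayed by the spoofed votes; the robust one is not.
```

The fingerprint labels and the hard one-vote-per-fingerprint rule are illustrative simplifications; as described later in the article, the actual system discounts suspect votes gradually rather than collapsing them outright.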
“There are two ways to think of it,” says Stephanie Gil, a research scientist in Rus’ Distributed Robotics Lab and a co-author on the new paper. “In some cases cryptography is too difficult to implement in a decentralized form. Perhaps you just don’t have that central key authority that you can secure, and you have agents continually entering or exiting the network, so that a key-passing scheme becomes much more challenging to implement. In that case, we can still provide protection.
“And in case you can implement a cryptographic scheme, then if one of the agents with the key gets compromised, we can still provide protection by mitigating and even quantifying the maximum amount of damage that can be done by the adversary.”
Hold your ground
In their paper, the researchers consider a problem known as “coverage,” in which robots position themselves to distribute some service across a geographic area — communication links, monitoring, or the like. In this case, each robot’s “vote” is simply its report of its position, which the other robots use to determine their own.
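As a generic one-dimensional illustration of a coverage update (not the specific controller analyzed in the paper), each robot can repeatedly move to the center of the region it is closest to, computed from the positions its neighbors report:

```python
def coverage_step(positions):
    """One Lloyd-style coverage update on the segment [0, 1]: each
    robot moves to the midpoint of its Voronoi cell, whose boundaries
    are computed from the positions the other robots report."""
    p = sorted(positions)
    n = len(p)
    updated = []
    for i in range(n):
        left = 0.0 if i == 0 else (p[i - 1] + p[i]) / 2      # cell boundary
        right = 1.0 if i == n - 1 else (p[i] + p[i + 1]) / 2
        updated.append((left + right) / 2)                    # move to cell center
    return updated

# Two robots starting bunched together spread out to cover the segment,
# converging to the uniform-coverage configuration [0.25, 0.75].
p = [0.1, 0.2]
for _ in range(200):
    p = coverage_step(p)
```

Since each robot's update depends entirely on the positions the others report, a single spoofed report can drag the cell boundaries, which is exactly the kind of "vote" the fingerprinting defends.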
The paper includes a theoretical analysis that compares the results of a common coverage algorithm under normal circumstances and the results produced when the new system is actively thwarting a spoofing attack. Even when 75 percent of the robots in the system have been infiltrated by such an attack, the robots’ positions are within 3 centimeters of what they should be. To verify the theoretical predictions, the researchers also implemented their system using a battery of distributed Wi-Fi transmitters and an autonomous helicopter.
“This generalizes naturally to other types of algorithms beyond coverage,” Rus says.
The new system grew out of an earlier project involving Rus, Gil, Dina Katabi — who is the other Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT — and Swarun Kumar, who earned master’s and doctoral degrees at MIT before moving to Carnegie Mellon University. That project sought to use Wi-Fi signals to determine transmitters’ locations and to repair ad hoc communication networks. On the new paper, the same quartet of researchers is joined by MIT Lincoln Laboratory’s Mark Mazumder.
Typically, radio-based location determination requires an array of receiving antennas. A radio signal traveling through the air reaches each of the antennas at a slightly different time, a difference that shows up in the phase of the received signals, or the alignment of the crests and troughs of their electromagnetic waves. From this phase information, it’s possible to determine the direction from which the signal arrived.
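For a two-antenna array, the standard phase-interferometry relation gives the bearing directly: a signal arriving at angle θ from broadside produces a phase difference Δφ = 2πd·sin(θ)/λ between antennas spaced d apart. A minimal sketch (the spacing and frequency values are illustrative, not taken from the paper):

```python
import math

def angle_of_arrival(phase_delta, antenna_spacing, wavelength):
    """Estimate bearing (radians from broadside) from the phase
    difference between two antennas, inverting
    phase_delta = 2*pi*d*sin(theta)/wavelength."""
    s = phase_delta * wavelength / (2 * math.pi * antenna_spacing)
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.asin(s)

# 2.4 GHz Wi-Fi: wavelength ~0.125 m; half-wavelength spacing of 0.0625 m.
theta = angle_of_arrival(phase_delta=0.0,
                         antenna_spacing=0.0625,
                         wavelength=0.125)
# A zero phase difference means the signal arrived broadside (theta = 0).
```

With only two antennas the estimate is ambiguous (front/back and noise-limited), which is why, as described below, the robots move their antennas through space to synthesize a larger array.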
Space vs. time
A bank of antennas, however, is too bulky for an autonomous helicopter to ferry around. The MIT researchers found a way to make accurate location measurements using only two antennas, spaced about 8 inches apart. Those antennas must move through space in order to simulate measurements from multiple antennas. That’s a requirement that autonomous robots meet easily. In the experiments reported in the new paper, for instance, the autonomous helicopter hovered in place and rotated around its axis in order to make its measurements.
When a Wi-Fi transmitter broadcasts a signal, some of it travels in a direct path toward the receiver, but much of it bounces off of obstacles in the environment, arriving at the receiver from different directions. For location determination, that’s a problem, but for radio fingerprinting, it’s an advantage: The different energies of signals arriving from different directions give each transmitter a distinctive profile.
There’s still some room for error in the receiver’s measurements, however, so the researchers’ new system doesn’t completely ignore probably fraudulent transmissions. Instead, it discounts them in proportion to its certainty that they have the same source. The new paper’s theoretical analysis shows that, for a range of reasonable assumptions about measurement ambiguities, the system will thwart spoofing attacks without unduly punishing valid transmissions that happen to have similar fingerprints.
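One illustrative way to realize such proportional discounting (an assumption for illustration, not the exact weighting derived in the paper) is to reduce each report's weight by its average fingerprint similarity to the other reports:

```python
def discounted_weights(similarity):
    """Weight each transmission by how distinct its fingerprint looks.

    similarity[i][j] is the receiver's confidence (0..1) that reports
    i and j came from the same transmitter. Rather than dropping
    likely-spoofed reports outright, each report is discounted by its
    average similarity to the others.
    """
    n = len(similarity)
    weights = []
    for i in range(n):
        dup = sum(similarity[i][j] for j in range(n) if j != i)
        weights.append(max(0.0, 1.0 - dup / max(1, n - 1)))
    return weights

# Reports 1 and 2 share a near-identical fingerprint (likely one spoofer);
# report 0 looks unique and keeps full weight.
sim = [[1.0, 0.0, 0.0],
       [0.0, 1.0, 0.9],
       [0.0, 0.9, 1.0]]
w = discounted_weights(sim)
```

Because the discount is proportional rather than all-or-nothing, two honest robots whose fingerprints happen to look alike lose only part of their influence instead of being silenced.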
“The work has important implications, as many systems of this type are on the horizon — networked autonomous driving cars, Amazon delivery drones, et cetera,” says David Hsu, a professor of computer science at the National University of Singapore. “Security would be a major issue for such systems, even more so than today’s networked computers. This solution is creative and departs completely from traditional defense mechanisms.”
Germany reportedly intends to acquire the Northrop Grumman MQ-4C Triton high-altitude surveillance drone, according to a story in Sueddeutsche Zeitung. In 2013, Germany cancelled a similar program to acquire Northrop Grumman’s RQ-4 Global Hawk, a surveillance drone on which the newer Triton is based, due to cost overruns. The Triton is a large, long-endurance system that was originally developed for maritime surveillance by the U.S. Navy. (Reuters)
The U.S. Army released a report outlining its strategy for obtaining and using unmanned ground vehicles. The Robotics and Autonomous Systems strategy outlines short-, medium-, and long-term goals for the service’s ground robot programs. The Army expects a range of advanced unmanned combat vehicles to be fielded in the 2020 to 2030 timeframe. (IHS Jane’s 360)
The U.S. Air Force announced that there are officially more jobs available for MQ-1 Predator and MQ-9 Reaper pilots than for any manned aircraft pilot position. Following a number of surges in drone operations, the service had previously struggled to recruit and retain drone pilots. The Air Force is on track to have more than 1,000 Predator and Reaper pilots operating its fleet. (Military.com)
At FlightGlobal, Dominic Perry writes that France’s Dassault is not concerned that the U.K. decision to leave the E.U. will affect a plan to develop a combat drone with BAE Systems.
At the Los Angeles Times, Bryce Alderton looks at how cities in California are addressing the influx of drones with new regulations.
At CBS News, Larry Light looks at how Bill Gates has reignited a debate over taxes on companies that use robots.
In an interview with the Wall Street Journal, Andrew Ng and Neil Jacobstein argue that artificial intelligence will bring about significant changes to commerce and society in the next 10 to 15 years.
In testimony before the House Armed Services Committee’s subcommittee on seapower, panelists urged the U.S. Navy to develop and field unmanned boats and railguns. (USNI News)
At DefenseTech.org, Richard Sisk looks at how a U.S.-made vehicle-mounted signals “jammer” is helping Iraqi forces prevent ISIS drone attacks in Mosul.
In a Drone Radio Show podcast, Steven Flynn discusses why prioritizing drone operators who comply with federal regulations is important for the drone industry.
At ABC News, Andrew Greene examines how a push by the Australian military to acquire armed drones has reignited a debate over targeted killings.
At Smithsonian Air & Space, Tim Wright profiles the NASA High Altitude Shuttle System, a glider drone that is being used to test communications equipment for future space vehicles.
Researchers at Virginia Tech are flying drones into crash-test dummies to evaluate the potential harm that a drone could cause if it hits a human. (Bloomberg)
Meanwhile, researchers at École Polytechnique Fédérale de Lausanne are developing flexible multi-rotor drones that absorb the impact of a collision without breaking. (Gizmodo)
Recent satellite images of Russia’s Gromov Flight Research Institute appear to show the country’s new Orion, a medium-altitude long-endurance military drone. (iHLS)
The Fire Department of New York used its tethered multi-rotor drone for the first time during an apartment fire in the Bronx. (Crain’s New York)
The Michigan State Police Bomb Squad used an unmanned ground vehicle to inspect the interior of two homes that were damaged by a large sinkhole. (WXYZ)
A video posted to YouTube appears to show a woman in Washington State firing a gun at a drone that was flying over her property. (Huffington Post)
Meanwhile, a bill being debated in the Oklahoma State Legislature would remove civil liability for anybody who shoots a drone down over their private property. (Ars Technica)
An Arizona man who leads an anti-immigration vigilante group is using a drone to patrol the U.S. border with Mexico in search of undocumented crossings. (Voice of America)
A man who attempted to use a drone to smuggle drugs into a Scottish prison has been sentenced to five years in prison. (BBC)
Industry Intel
The Turkish military has taken delivery of six Bayraktar TB-2 military drones, two of which are armed, for its air campaigns against ISIL and Kurdish militant forces. (Defense News)
General Atomics Aeronautical Systems awarded Hughes Network Systems a contract for satellite communications for the U.K.’s Predator B drones. (Space News)
Schiebel awarded CarteNav Solutions a contract for its AIMS-ISR software for the S-100 Camcopter unmanned helicopters destined for the Royal Australian Navy. (Press Release)
Defence Research and Development Canada awarded Ontario Drive & Gear a $1 million contract for trials of the Atlas J8 unmanned ground vehicle. (Canadian Manufacturing)
Deveron UAS will provide Thompsons, a subsidiary of Lansing Trade Group and The Andersons, with drone data for agricultural production through 2018. (Press Release)
Precision Vectors Aerial selected the Silent Falcon UAS for its beyond visual line-of-sight operations in Canada. (Shephard Media)
Rolls-Royce won a grant from Tekes, a Finnish government research funding agency, to continue developing remote and autonomous shipping technologies. (Shephard Media)
Israeli drone manufacturer BlueBird is submitting an updated MicroB UAV system for the Indian army small UAV competition. (FlightGlobal)
A Romanian court has suspended a planned acquisition of Aeronautics Defense Systems Orbiter 4 drones for the Romanian army. (FlightGlobal)
Deere & Co.—a.k.a. John Deere—announced that it will partner with Kespry, a drone startup, to market drones for the construction and forestry industries. (TechCrunch)
The Weekly Drone Roundup is a newsletter from the Center for the Study of the Drone. It covers news, commentary, analysis and technology from the drone world.
The National Science Foundation (NSF) announced a $6.1 million, five-year award to accelerate fundamental research on wireless communication and networking technologies through the foundation’s Platforms for Advanced Wireless Research (PAWR) program.
Through the PAWR Project Office (PPO), award recipients US Ignite, Inc. and Northeastern University will collaborate with NSF and industry partners to establish and oversee multiple city-scale testing platforms across the United States. The PPO will manage nearly $100 million in public and private investments over the next seven years.
“NSF is pleased to have the combined expertise from US Ignite, Inc. and Northeastern University leading the project office for our PAWR program,” said Jim Kurose, NSF assistant director for Computer and Information Science and Engineering. “The planned research platforms will provide an unprecedented opportunity to enable research in faster, smarter, more responsive, and more robust wireless communication, and move experimental research beyond the lab — with profound implications for science and society.”
Over the last decade, the use of wireless, internet-connected devices in the United States has nearly doubled. As the momentum of this exponential growth continues, the need for increased capacity to accommodate the corresponding internet traffic also grows. This surge in devices, including smartphones, connected tablets and wearable technology, places an unprecedented burden on conventional 4G LTE and public Wi-Fi networks, which may not be able to keep pace with the growing demand.
NSF established the PAWR program to foster use-inspired, fundamental research and development that will move beyond current 4G LTE and Wi-Fi capabilities and enable future advanced wireless networks. Through experimental research platforms that are at the scale of small cities and communities and designed by the U.S. academic and industry wireless research community, PAWR will explore robust new wireless devices, communication techniques, networks, systems and services that will revolutionize the nation’s wireless systems. These platforms aim to support fundamental research that will enhance broadband connectivity and sustain U.S. leadership and economic competitiveness in the telecommunications sector for many years to come.
“Leading the PAWR Project Office is a key component of US Ignite’s mission to help build the networking foundation for smart communities,” said William Wallace, executive director of US Ignite, Inc., a public-private partnership that aims to support ultra-high-speed, next-generation applications for public benefit. “This effort will help develop the advanced wireless networks needed to enable smart and connected communities to transform city services.”
Establishing the PPO with this initial award is the first step in launching a long-term, public-private partnership to support PAWR. Over the next seven years, PAWR will take shape through two multi-stage phases:
Design and Development. The PPO will assume responsibility for soliciting and vetting proposals to identify the platforms for advanced wireless research and work closely with sub-awardee organizations to plan the design, development, deployment and initial operations of each platform.
Deployment and Initial Operations. The PPO will establish and manage each platform and document best practices as it progresses through the lifecycle.
“We are delighted that our team of wireless networking researchers has been selected to take the lead of the PAWR Project Office in partnership with US Ignite, Inc.,” said Dr. Nadine Aubry, dean of the college of engineering and university distinguished professor at Northeastern University. “I believe that PAWR, by bringing together academia, industry, government and communities, has the potential to make a transformative impact through advances spanning fundamental research and field platforms in actual cities.”
The PPO will work closely with NSF, industry partners and the wireless research community in all aspects of PAWR planning, implementation and management. Over the next seven years, NSF anticipates investing $50 million in PAWR, combined with approximately $50 million in cash and in-kind contributions from over 25 companies and industry associations. The PPO will disburse these investments to support the selected platforms.
Additional information can be found on the PPO webpage.
This announcement will also be highlighted this week during the panel discussion, “Wireless Network Innovation: Smart City Foundation,” at the South by Southwest conference in Austin, Texas.
Yesterday, the UK government announced their budget plans to invest in robotics, artificial intelligence, driverless cars, and faster broadband. The spending commitments include:
£16m to create a 5G hub to trial the forthcoming mobile data technology. In particular, the government wants there to be better mobile network coverage over the country’s roads and railway lines
£200m to support local “full-fibre” broadband network projects that are designed to bring in further private sector investment
£270m towards disruptive technologies to put the UK “at the forefront” including cutting-edge artificial intelligence and robotics systems that will operate in extreme and hazardous environments, including off-shore energy, nuclear energy, space and deep mining; batteries for the next generation of electric vehicles; and biotech.
£300m to further develop the UK’s research talent, including through the creation of an additional 1,000 PhD places.
Several experts in the robotics community agree that progress is moving in the right direction; however, more needs to happen if the UK is to remain competitive in the robotics sector:
“The UK understands the very real positive impact that RAS [robotics & autonomous systems] will have on our society, now of all times. It continues to see the big picture and today’s announcement by the Chancellor is a clear indication of that. We can have better roads, cleaner cities, healthier oceans and bodies, safer skies, deeper mines, better jobs and more opportunity. That’s what machines are for.”
“We are at a real inflection point in the development of autonomous technology. The UK has a number of nascent world class companies in the area of self-driving vehicles, which have a huge potential to change the world, whilst creating jobs and producing exportable UK goods and services. We have a head start and now we need to take advantage of it.” [from FT]
“Some of the great robotics companies of the future are being launched by British entrepreneurs, and the support announced in today’s budget will strengthen their impact and global competitiveness. We’re currently seeing strong appetite from private investors to back locally-grown robotics businesses, and this money will help bring even more interest to this space.”
“This is welcome news for the many research organisations developing robotics applications. As a leading UK robotics research group specialising in extreme and challenging environments, we welcome the allocation of significant funding in this field as part of the Government’s evolving Industrial Strategy. RACE and the rest of the robotics R&D sector are looking forward to working with industry to fully utilise this funding.”
“Robotics and AI is set to be a driving force in increasing productivity, but also in solving societal and environmental challenges. It’s opening new frontiers in off-shore and nuclear energy, space and deep mining. Investment from government will be key in helping the UK stay at the forefront of this field.” [from BBC]
“We lost our best machine learning group to Amazon just recently. The money means there will be more resources for universities, which may help them retain their staff. But it’s not nearly enough for all of the disruptive technologies being developed in the UK. The government says it wants this to be the leading robotics country in the world, but Google and others are spending far more, so it’s ultimately chicken feed by comparison.” [from BBC]
“I’m pleased by the additional funding, and, in fact, my group is a partner in a new £4.6M EPSRC grant to develop robots for nuclear decommissioning announced last week.
But having just returned from Tokyo (from AI in Asia: AI for Social Good), I’m well aware that other countries are investing much more heavily than the UK. China was for instance described as an emerging powerhouse of AI. A number of colleagues at that meeting also made the same point as Noel, that universities are haemorrhaging star AI/robotics academics to multi-national companies with very deep pockets.”
“I, like many others, was pleased to hear more money going into robotics and AI research, but I was disappointed – though completely unsurprised – to see nothing about how to restructure the economy to deal with the consequences of increasing research into and use of robots and AI. Hammond’s blunder on the relationship of productivity to wages – and it can’t be seen as anything other than a blunder – means that he doesn’t even seem to appreciate that there is a problem.
The truth is that increased automation means fewer jobs and lower wages and this needs to be addressed with some concrete measures. There will be benefits to society with increased automation, but we need to start thinking now (and taking action now) to ensure that those benefits aren’t solely economic gain for the already-wealthy. The ‘robot dividend’ needs to be shared across society, as it can have far-reaching consequences beyond economics: improving our quality of life, our standard of living, education, health and accessibility.”
“America has the American Manufacturing Initiative which, in 2015, was expanded to establish Fraunhofer-like research facilities around the US (on university campuses) that focus on particular aspects of the science of manufacturing.
Robotics were given $50 million of the $500 million for the initiative and one of the research facilities was to focus on robotics. Under the initiative, efforts from the SBIR, NSF, NASA and DoD/DARPA were to be coordinated in their disbursement of fundings for science in robotics. None of these fundings comes anywhere close to the coordinated funding programs and P-P-Ps found in the EU, Korea and Japan, nor the top-down incentivized directives of China’s 5-year plans. Essentially American robotic funding is (and has been) predominantly entrepreneurial with token support from the government.
In the new Trump Administration, there is no indication of any direction nor continuation (funding) of what little existing programs we have. At a NY Times editorial board sit-down with Trump after his election, he was quoted as saying that “Robotics is becoming very big and we’re going to do that. We’re going to have more factories. We can’t lose 70,000 factories. Just can’t do it. We’re going to start making things.” Thus far there is no followup to those statements nor has Trump hired replacements for the top executives at the Office of Science and Technology Policy, all of which are presently vacant.”
And finally, a few comments from the business sector on Twitter:
International Women’s Day is raising discussion about the lack of diversity and role models in STEM, and about the potential negative outcomes of bias and stereotyping in robotics and AI. Let’s balance the words with positive actions. Here’s what we can all do to support women in robotics and AI, and thus improve diversity, spur innovation, and reduce skills shortages in the field.
Join WomeninRobotics.org – a network of women working in robotics (or who aspire to work in robotics). We are a global discussion group supporting local events that bring women together for peer networking. We recognize that lack of support and mentorship in the workplace holds women back, particularly if there is only one woman in an organization/company.
Although the main group is only for women, we are going to start something for male ‘Allies’ or ‘Champions’. So men, you can join women in robotics too! Women need champions and while it would be ideal to have an equal number of women in leadership roles, until then, companies can improve their hiring and retention by having visible and vocal male allies. We all need mentors as our careers progress.
Women also need visibility and high-profile projects for their careers to progress on par. One way of improving that is to showcase the achievements of women in robotics. Read and share all four years’ worth of our annual “25 Women in Robotics you need to know about” list – that’s more than 100 women already, because we have some groups in there. (There have always been a lot of women on the core team at Robohub.org, so we love showing our support.) Our next edition will come out on October 10, 2017 to celebrate Ada Lovelace Day.
Change starts at the top of an organization. It’s very hard to hire women if you don’t have any women, or if they can’t see pathways for advancement in your organization. However, there are many things you can do to improve your hiring practices. Some are surprisingly simple, yet effective. I’ve collected a list and posted it at Silicon Valley Robotics – How to hire women.
And you can invest in women entrepreneurs. Studies show that you get a higher rate of return, and a higher likelihood of success, from investments in female founders. And yet, proportionally, investment in them is far lower. You don’t need to be a VC to invest in women either. Kiva.org is matching loans today, and $25 can empower an entrepreneur anywhere in the world. #InvestInHer
And our next Silicon Valley/ San Francisco Women in Robotics event will be on March 22 at SoftBank Robotics – we’d love to see you there – or in support!
Guest post by José Hernández-Orallo, Professor at Technical University of Valencia
Two decades ago I started working on metrics of machine intelligence. By that time, during the glacial days of the second AI winter, few were really interested in measuring something that AI lacked completely. And very few, such as David L. Dowe and I, were interested in metrics of intelligence linked to algorithmic information theory, where the models of interaction between an agent and the world were sequences of bits, and intelligence was formulated using Solomonoff’s and Wallace’s theories of inductive inference.
In the meantime, seemingly dozens of variants of the Turing test were proposed every year, CAPTCHAs were introduced, and David showed how easy it is to solve some IQ tests using a very simple program based on a big-switch approach. And today a new AI spring has arrived, triggered by a blossoming machine learning field, bringing a more experimental approach to AI with an increasing number of AI benchmarks and competitions (see a previous entry in this blog for a survey).
Last year also witnessed the introduction of a different kind of AI evaluation platform, such as Microsoft’s Malmö, GoodAI’s School, OpenAI’s Gym and Universe, DeepMind’s Lab, and Facebook’s TorchCraft and CommAI-env. Based on a reinforcement learning (RL) setting, these platforms make it possible to create many different tasks and connect RL agents through a standard interface. Many of these platforms are well suited to the new paradigms in AI, such as deep reinforcement learning, and to some open-source machine learning libraries. After thousands of episodes or millions of steps on a new task, these systems are able to excel, often with better-than-human performance.
Despite the myriad applications and breakthroughs that have been derived from this paradigm, there seems to be a consensus in the field that the main open problem lies in how an AI agent can reuse the representations and skills from one task in new ones, making it possible to learn a new task much faster, from just a few examples, as humans do. This can be seen as a mapping problem (usually under the term transfer learning) or as a sequential problem (usually under the terms gradual, cumulative, incremental, continual or curriculum learning).
One of the key notions that is associated with this capability of a system of building new concepts and skills over previous ones is usually referred to as “compositionality”, which is well documented in humans from early childhood. Systems are able to combine the representations, concepts or skills that have been learned previously in order to solve a new problem. For instance, an agent can combine the ability of climbing up a ladder with its use as a possible way out of a room, or an agent can learn multiplication after learning addition.
In my opinion, two of the previous platforms are better suited for compositionality: Malmö and CommAI-env. Malmö has all the ingredients of a 3D game, and AI researchers can experiment and evaluate agents with vision and 3D navigation, which is what many research papers using Malmö have done so far, as this is a hot topic in AI at the moment. However, to me, the most interesting feature of Malmö is building and crafting, where agents must necessarily combine previous concepts and skills in order to create more complex things.
CommAI-env is clearly an outlier in this set of platforms. It is not a video game in 2D or 3D. Video or audio don’t have any role there. Interaction is just produced through a stream of input/output bits and rewards, which are just +1, 0 or -1. Basically, actions and observations are binary. The rationale behind CommAI-env is to give prominence to communication skills, but it still allows for rich interaction, patterns and tasks, while “keeping all further complexities to a minimum”.
When I learned that the General AI Challenge was using CommAI-env for its warm-up round I was ecstatic. Participants could focus on RL agents without the complexities of vision and navigation. Of course, vision and navigation are very important for AI applications, but they create many extra complications if we want to understand (and evaluate) gradual learning. For instance, two identical tasks that differ only in the texture of the walls can be seen as requiring higher transfer effort than two slightly different tasks with the same texture. In other words, these would be extra confounding factors that would make the analysis of task transfer and task dependencies much harder. It is then a wise choice to exclude them from the warm-up round. There will be occasions during other rounds of the challenge to include vision, navigation and other sorts of complex embodiment. Starting with a minimal interface to evaluate whether agents are able to learn incrementally is not only challenging but also an important open problem for general AI.
Also, the warm-up round has modified CommAI-env in such a way that bits are packed into 8-bit (1 byte) characters. This makes the definition of tasks more intuitive and makes the ASCII coding transparent to the agents. Basically, the set of actions and observations is extended to 256. But interestingly, the set of observations and actions is the same, which allows many possibilities that are unusual in reinforcement learning, where these subsets are different. For instance, an agent with primitives such as “copy input to output” and other sequence transformation operators can compose them in order to solve the task. Variables, and other kinds of abstractions, play a key role.
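A minimal sketch can make this concrete. The interaction loop and environment API below are assumptions for illustration, not CommAI-env's real interface; the point is that with a shared 256-symbol alphabet, "copy input to output" is itself a valid agent:

```python
def copy_agent(obs):
    """Echo the observed byte back as the action. Because actions and
    observations share the same 256-symbol alphabet, 'copy input to
    output' is a primitive that richer agents can compose."""
    return obs

def make_echo_env():
    """Toy byte-level task: reward +1 when the agent echoes the last
    byte the environment emitted, -1 otherwise. (Hypothetical API.)"""
    state = {"last": 0}
    def step(action):
        reward = 1 if action == state["last"] else -1
        state["last"] = (state["last"] + 1) % 256  # emit the next byte
        return state["last"], reward
    return step

def run_episode(env_step, steps=10):
    obs, total = 0, 0  # the toy env's first byte is 0 by construction
    for _ in range(steps):
        obs, reward = env_step(copy_agent(obs))
        total += reward
    return total

total = run_episode(make_echo_env())  # the copy agent solves the echo task
```

A more capable agent would replace `copy_agent` with a policy that composes such sequence-transformation primitives, conditioned on the reward stream.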
This might give the impression that we are back to Turing machines and symbolic AI. In a way, this is the case, and much in alignment with Turing’s vision in his 1950 paper: “it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g., a symbolic language”. But in 2017 we have a range of techniques that weren’t available just a few years ago. For instance, Neural Turing Machines and other neural networks with symbolic memory can be very well suited for this problem.
By no means does this indicate that the legion of deep reinforcement learning enthusiasts cannot bring their apparatus to this warm-up round. Indeed, they won’t be disappointed by this challenge if they really work hard to adapt deep learning to this problem. They probably won’t need a convolutional network tuned for visual pattern recognition, but there are many possibilities and challenges in making deep learning work in a setting like this, especially because the fewer examples the better, and deep learning usually requires many examples.
As a plus, the simple, symbolic sequential interface opens the challenge to many other areas in AI, not only recurrent neural networks but techniques from natural language processing, evolutionary computation, compression-inspired algorithms or even areas such as inductive programming, with powerful string-handling primitives and its appropriateness for problems with very few examples.
I think that all of the above makes this warm-up round a unique competition. Of course, since we haven’t had anything similar in the past, we may be in for some surprises. An unexpected (or even naïve) technique might behave much better than others (and than humans), or perhaps we will find that no technique is able to do anything meaningful at this time.
I’m eager to see how this round develops and what the participants are able to integrate and invent in order to solve the sequence of micro and mini-tasks. I’m sure that we will learn a lot from this. I hope that machines will, too. And all of us will move forward to the next round!
Guest post by Simon Andersson, Senior Research Scientist @GoodAI
Executive summary
Tracking major unsolved problems in AI can keep us honest about what remains to be achieved and facilitate the creation of roadmaps towards general artificial intelligence.
This document currently identifies 29 open problems.
For each major problem, example tests are suggested for evaluating research progress.
Introduction
This document identifies open problems in AI. It seeks to provide a concise overview of the greatest challenges in the field and of the current state of the art, in line with the “open research questions” theme of focus of the AI Roadmap Institute.
The challenges are grouped into AI-complete problems, closed-domain problems, and fundamental problems in commonsense reasoning, learning, and sensorimotor ability.
I realize that this first attempt at surveying the open problems will necessarily be incomplete and welcome reader feedback.
To help accelerate the search for general artificial intelligence, GoodAI is organizing the General AI Challenge (GoodAI, 2017), which aims to solve some of the problems outlined below through a series of milestone challenges starting in early 2017.
Sources, method, and related work
The collection of problems presented here is the result of a review of the literature in the areas of
Machine learning
Machine perception and robotics
Open AI problems
Evaluation of AI systems
Tests for the achievement of human-level intelligence
Benchmarks and competitions
To be considered for inclusion, a problem must be
Highly relevant for achieving general artificial intelligence
Closed in scope, not subject to open-ended extension
Testable
Problems vary in scope and often overlap. Some may be contained entirely in others. The second criterion (closed scope) excludes some interesting problems such as learning all human professions; a few problems of this type are mentioned separately from the main list. To ensure that problems are testable, each is presented together with example tests.
Several websites, some listed below, provide challenge problems for AI.
OpenAI Requests for research (OpenAI, 2016) presents machine learning problems of varying difficulty with an emphasis on deep and reinforcement learning.
In the context of evaluating AI systems, Hernández-Orallo (2016a) reviews a number of open AI problems. Lake et al. (2016) offers a critique of the current state of the art in AI and discusses problems like intuitive physics, intuitive psychology, and learning from few examples.
The rest of the document lists AI challenges as outlined below.
AI-complete problems
Closed-domain problems
Commonsense reasoning
Learning
Sensorimotor problems
AI-complete problems
AI-complete problems are ones likely to contain all or most of human-level general artificial intelligence. A few problems in this category are listed below.
Open-domain dialog
Text understanding
Machine translation
Human intelligence and aptitude tests
Coreference resolution (Winograd schemas)
Compound word understanding
Open-domain dialog
Open-domain dialog is the problem of competently conducting a dialog with a human when the subject of the discussion is not known in advance. The challenge includes language understanding, dialog pragmatics, and understanding the world. Versions of the task include spoken and written dialog. The task can be extended to include multimodal interaction (e.g., gestural input, multimedia output). Possible success criteria are usefulness and the ability to conduct dialog indistinguishable from human dialog (“Turing test”).
Tests
Dialog systems are typically evaluated by human judges. Events where this has been done include
Text understanding
Text understanding is an unsolved problem. There has been remarkable progress in the area of question answering, but current systems still fail when commonsense world knowledge, beyond that provided in the text, is required.
Tests
McCarthy (1976) provided an early text understanding challenge problem.
Brachman (2006) suggested the problem of reading a textbook and solving its exercises.
Machine translation
Machine translation is AI-complete since it includes problems requiring an understanding of the world (e.g., coreference resolution, discussed below).
Tests
While translation quality can be evaluated automatically using parallel corpora, the ultimate test is human judgement of quality. Corpora such as the Corpus of Contemporary American English (Davies, 2008) contain samples of text from different genres. Translation quality can be evaluated using samples of
Newspaper text
Fiction
Spoken language transcriptions
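For the automatic side of this evaluation, BLEU-style scoring against reference translations is the usual instrument. Below is a simplified sketch of modified n-gram precision, the core of BLEU (single reference, no brevity penalty or smoothing); the example sentences are invented.

```python
# Minimal sketch of BLEU-style modified n-gram precision for scoring a
# candidate translation against a reference (simplified: one reference,
# no brevity penalty, no smoothing).
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    """Fraction of candidate n-grams that also occur in the reference,
    with each reference n-gram usable at most as often as it appears."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(modified_precision(cand, ref, 1))  # 5 of 6 unigrams match
print(modified_precision(cand, ref, 2))  # 3 of 5 bigrams match
```

As the text notes, such automatic scores are only a proxy; the ultimate test remains human judgement across genres.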
Intelligence tests
Human intelligence and aptitude tests (Hernández-Orallo, 2017) are interesting in that they are designed to be at the limit of human ability and to be hard or impossible to solve using memorized knowledge. Human-level performance has been reported for Raven’s progressive matrices (Lovett and Forbus, 2017) but artificial systems still lack the general reasoning abilities to deal with a variety of problems at the same time (Hernández-Orallo, 2016b).
Tests
Brachman (2006) suggested using the SAT as an AI challenge problem.
Compound word understanding
In many languages, there are compound words with set meanings. Novel compound words can be produced, and we are good at guessing their meanings. We understand that a water bird is a bird that lives near water, not a bird that contains or is constituted by water, and that schadenfreude is felt when others, not we, are hurt.
Closed-domain problems
Closed-domain problems are ones that combine important elements of intelligence but reduce the difficulty by limiting themselves to a circumscribed knowledge domain. Game-playing agents are a prime example: artificial agents have achieved superhuman performance at Go (Silver et al., 2016) and, more recently, poker (Aupperlee, 2017; Brown and Sandholm, 2017). Among the open problems are:
Learning to play board, card, and tile games from descriptions
Producing programs from descriptions
Source code understanding
Board, card, and tile games from descriptions
Unlike specialized game players, systems that have to learn new games from descriptions of the rules cannot rely on predesigned algorithms for specific games.
Tests
The problem of learning new games from formal-language descriptions has appeared as a challenge at the AAAI conference (Genesereth et al., 2005; AAAI, 2013).
Even more challenging is the problem of learning games from natural language descriptions; such descriptions for card and tile games are available from a number of websites (e.g., McLeod, 2017).
Programs from descriptions
Producing programs in a programming language such as C from natural language input is a problem of obvious practical interest.
Tests
The “Description2Code” challenge proposed at (OpenAI, 2016) has 5000 descriptions for programs collected by Ethan Caballero.
Source code understanding
Related to source code production is source code understanding, where the system can interpret the semantics of code and detect situations where the code differs in non-trivial ways from the likely intention of its author. Allamanis et al. (2016) reports progress on the prediction of procedure names.
Tests
The International Obfuscated C Code Contest (IOCCC, 2016) publishes code that is intentionally hard to understand. Source code understanding could be tested as the ability to improve the readability of such code, as scored by human judges.
Commonsense reasoning
Commonsense reasoning is likely to be a central element of general artificial intelligence. Some of the main problems in this area are listed below.
Causal reasoning
Counterfactual reasoning
Intuitive physics
Intuitive psychology
Causal reasoning
Causal reasoning requires recognizing and applying cause-effect relations.
Counterfactual reasoning
Counterfactual reasoning is required for answering hypothetical questions. It uses causal reasoning together with the system’s other modeling and reasoning capabilities to consider situations possibly different from anything that has ever happened in the world.
Learning
Despite remarkable advances in machine learning, important learning-related problems remain mostly unsolved. They include:
Gradual learning
Unsupervised learning
Strong generalization
Category learning from few examples
Learning to learn
Compositional learning
Learning without forgetting
Transfer learning
Knowing when you don’t know
Learning through action
Gradual learning
Humans are capable of lifelong learning of increasingly complex tasks. Artificial agents should be, too. Versions of this idea have been discussed under the rubrics of life-long (Thrun and Mitchell, 1995), continual, and incremental learning. At GoodAI, we have adopted the term gradual learning (Rosa et al., 2016) for the long-term accumulation of knowledge and skills. It requires the combination of several abilities discussed below:
Compositional learning
Learning to learn
Learning without forgetting
Transfer learning
Tests
A possible test applies to a household robot that learns household and house-maintenance tasks, including obtaining tools and materials for the work. The test evaluates the agent on two criteria: continuous operation (Nilsson, in Brooks et al., 1996), where the agent must function autonomously without reprogramming during its lifetime, and improving capability, where the agent must exhibit, at different points in its evolution, capabilities not present at an earlier time.
Unsupervised learning
Unsupervised learning has been described as the next big challenge in machine learning (LeCun 2016). It appears to be fundamental to human lifelong learning (supervised and reinforcement signals do not provide nearly enough data) and is closely related to prediction and common-sense reasoning (“filling in the missing parts”). A hard problem (Yoshua Bengio, in the “Brains and bits” panel at NIPS 2016) is unsupervised learning in hierarchical systems, with components learning jointly.
Tests
In addition to the possible tests in the vision domain, speech recognition also presents opportunities for unsupervised learning. While current state-of-the-art speech recognizers rely largely on supervised learning on large corpora, unsupervised recognition requires discovering, without supervision, phonemes, word segmentation, and vocabulary. Progress has been reported in this direction, so far limited to small-vocabulary recognition (Riccardi and Hakkani-Tur, 2003, Park and Glass, 2008, Kamper et al., 2016).
A full-scale test of unsupervised speech recognition could be to train on the audio part of a transcribed speech corpus (e.g., TIMIT (Garofolo, 1993)), then learn to predict the transcriptions with only very sparse supervision.
Strong generalization
Humans can transfer knowledge and skills across situations that share high-level structure but are otherwise radically different, adapting to the particulars of a new setting while preserving the essence of the skill. This capacity is what Tarlow (2016) and Gaunt et al. (2016) refer to as strong generalization. If we learn to clean up a room, we know how to clean up most other rooms.
Tests
A general assembly robot could learn to build a toy castle in one material (e.g., lego blocks) and be tested on building it from other materials (sand, stones, sticks).
A household robot could be trained on cleaning and cooking tasks in one environment and be tested in highly dissimilar environments.
Category learning from few examples
Lake et al. (2015) achieved human-level recognition and generation of characters using few examples. However, learning more complex categories from few examples remains an open problem.
Tests
The ImageNet database (Deng et al., 2009) contains images organized by the semantic hierarchy of WordNet (Miller, 1995). Correctly determining ImageNet categories from images with very little training data could be a challenging test of learning from few examples.
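As a concrete baseline for such a test, a nearest-centroid (“prototype”) classifier is a common starting point for learning categories from few examples: each category is summarized by the mean of its few support examples, and a query is assigned to the closest prototype. The sketch below uses made-up 2-D feature vectors in place of real image embeddings; it is illustrative only.

```python
# Nearest-centroid ("prototype") baseline for few-shot category learning.
# Toy 2-D feature vectors stand in for real image embeddings.
from math import dist

def prototypes(support):
    """support: {label: [feature vectors]} -> {label: mean vector}"""
    return {
        label: tuple(sum(dims) / len(vecs) for dims in zip(*vecs))
        for label, vecs in support.items()
    }

def classify(query, protos):
    """Assign the query to the label whose prototype is closest."""
    return min(protos, key=lambda label: dist(query, protos[label]))

support = {
    "chair": [(0.9, 0.1), (1.1, 0.0)],   # two support examples per category
    "stool": [(0.1, 0.9), (0.0, 1.1)],
}
protos = prototypes(support)
print(classify((0.8, 0.2), protos))  # -> chair
```

A few-shot ImageNet test would then score how often such a classifier, given only a handful of labeled images per category, assigns held-out images to the right node of the hierarchy.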
Learning to learn
Learning to learn or meta-learning (e.g., Harlow, 1949; Schmidhuber, 1987; Thrun and Pratt, 1998; Andrychowicz et al., 2016; Chen et al., 2016; de Freitas, 2016; Duan et al., 2016; Lake et al., 2016; Wang et al., 2016) is the acquisition of skills and inductive biases that facilitate future learning. The scenarios considered in particular are ones where a more general and slower learning process produces a faster, more specialized one. An example is biological evolution producing efficient learners such as human beings.
Tests
Learning to play Atari video games is an area that has seen some remarkable recent successes, including in transfer learning (Parisotto et al., 2016). However, there is so far no system that first learns to play video games, then is capable of learning a new game, as humans can, from a few minutes of play (Lake et al., 2016).
Compositional learning
Compositional learning (de Freitas, 2016; Lake et al., 2016) is the ability to recombine primitive representations to accelerate the acquisition of new knowledge. It is closely related to learning to learn.
Tests
Tests for compositional learning need to verify both that the learner is effective and that it uses compositional representations.
Some ImageNet categories correspond to object classes defined largely by their arrangements of component parts, e.g., chairs and stools, or unicycles, bicycles, and tricycles. A test could evaluate the agent’s ability to learn categories with few examples and to report the parts of the object in an image.
Compositional learning should be extremely helpful in learning video games (Lake et al., 2016). A learner could be tested on a game already mastered, but where component elements have changed appearance (e.g., different-looking fish in the Frostbite game). It should be able to play the variant game with little or no additional learning.
Learning without forgetting
In order to learn continually over its lifetime, an agent must be able to generalize over new observations while retaining previously acquired knowledge. Recent progress towards this goal is reported in (Kirkpatrick et al., 2016) and (Li and Hoiem, 2016). Work on memory augmented neural networks (e.g., Graves et al., 2016) is also relevant.
Tests
A test for learning without forgetting needs to present learning tasks sequentially (earlier tasks are not repeated) and test for retention of early knowledge. It may also test for declining learning time for new tasks, to verify that the agent exploits the knowledge acquired so far.
A challenging test for learning without forgetting would be to learn to recognize all the categories in ImageNet, presented sequentially.
Transfer learning
Transfer learning (Pan and Yang, 2010) is the ability of an agent trained in one domain to master another. Results in the area of text comprehension are currently poor unless the agent is given some training on the new domain (Kadlec, et al., 2016).
Tests
Sentiment classification (Blitzer et al., 2007) provides a possible testing ground for transfer learning. Learners can be trained on one corpus, tested on another, and compared to a baseline learner trained directly on the target domain.
Reviews of movies and of businesses are two domains dissimilar enough to make knowledge transfer challenging. Corpora for the domains are Rotten Tomatoes movie reviews (Pang and Lee, 2005) and the Yelp Challenge dataset (Yelp, 2017).
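A minimal harness for such a test might look as follows. The classifier is a deliberately simple bag-of-words Naive Bayes, and the tiny hand-written “reviews” are placeholders for the real corpora; only the protocol (train on the source domain, evaluate on the target domain) is the point.

```python
# Sketch of a cross-domain sentiment transfer test: train a bag-of-words
# classifier on a source domain and evaluate it on a target domain. The
# tiny hand-made "reviews" are placeholders for real corpora.
from collections import Counter

def train(docs):
    """docs: list of (text, label). Returns per-label word counts."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in docs:
        counts[label].update(text.lower().split())
    return counts

def predict(text, counts):
    """Naive-Bayes-style score with add-one smoothing, uniform priors."""
    scores = {}
    for label, cnt in counts.items():
        total = sum(cnt.values()) + len(cnt)
        score = 1.0
        for word in text.lower().split():
            score *= (cnt[word] + 1) / total
        scores[label] = score
    return max(scores, key=scores.get)

def accuracy(model, docs):
    return sum(predict(t, model) == y for t, y in docs) / len(docs)

movies = [("a wonderful touching film", "pos"),
          ("dull plot and terrible acting", "neg")]
restaurants = [("wonderful food and service", "pos"),
               ("terrible food dull atmosphere", "neg")]

transfer_model = train(movies)                 # trained on source domain only
print(accuracy(transfer_model, restaurants))   # transfer accuracy on target
```

The baseline comparison from the text is obtained by also training directly on the target domain and reporting the gap between the two accuracies.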
Knowing when you don’t know
While uncertainty is modeled differently by different learning algorithms, it seems to be true in general that current artificial systems are not nearly as good as humans at “knowing when they don’t know.” An example is deep neural networks that achieve state-of-the-art accuracy on image recognition but assign 99.99% confidence to the presence of objects in images completely unrecognizable to humans (Nguyen et al., 2015).
Human-level performance on confidence estimation would include:
In induction tasks, like program induction or sequence completion, knowing when the provided examples are insufficient for induction (multiple reasonable hypotheses could account for them)
In speech recognition, knowing when an utterance has not been interpreted reliably
In visual tasks such as pedestrian detection, knowing when a part of the image has not been analyzed reliably
Tests
A speech recognizer can be compared against a human baseline, measuring the ratio of the average confidence to the confidence on examples where recognition fails.
The confidence of image recognition systems can be tested on generated adversarial examples.
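The first test above suggests a concrete metric: the ratio of a recognizer’s average confidence over all examples to its average confidence on the examples it gets wrong. A well-calibrated system should be markedly less confident when it fails, giving a ratio well above 1. A minimal sketch, with invented confidence values:

```python
# Confidence-ratio metric: average confidence over all examples divided by
# average confidence on failed examples. Values near 1 indicate a system
# that is as confident when wrong as when right.

def confidence_ratio(results):
    """results: list of (confidence, correct) pairs."""
    all_conf = [c for c, _ in results]
    fail_conf = [c for c, ok in results if not ok]
    if not fail_conf:
        return float("inf")  # no failures observed
    return (sum(all_conf) / len(all_conf)) / (sum(fail_conf) / len(fail_conf))

# Invented confidences: three correct recognitions, one low-confidence error.
results = [(0.95, True), (0.90, True), (0.85, True), (0.30, False)]
print(round(confidence_ratio(results), 2))  # -> 2.5
```

The same ratio computed for a human baseline on the same examples gives the comparison point the text proposes.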
Learning through action
Human infants are known to learn about the world through experiments, observing the effects of their own actions (Smith and Gasser, 2005; Malik, 2015). This seems to apply both to higher-level cognition and perception. Animal experiments have confirmed that the ability to initiate movement is crucial to perceptual development (Held and Hein, 1963) and some recent progress has been made on using motion in learning visual perception (Agrawal et al., 2015). In (Agrawal et al., 2016), a robot learns to predict the effects of a poking action.
“Learning through action” thus encompasses several areas, including
Active learning, where the agent selects the training examples most likely to be instructive
Undertaking epistemological actions, i.e., activities aimed primarily at gathering information
Learning to perceive through action
Learning about causal relationships through action
Perhaps most importantly, for artificial systems, learning the causal structure of the world through experimentation is still an open problem.
Tests
For learning through action, it is natural to consider problems of motor manipulation where in addition to the immediate effects of the agent’s actions, secondary effects must be considered as well.
Learning to play billiards: An agent with little prior knowledge and no fixed training data is allowed to explore a real or virtual billiard table and should learn to play billiards well.
Sensorimotor problems
Outstanding problems in robotics and machine perception include:
Autonomous navigation in dynamic environments
Scene analysis
Robust general object recognition and detection
Robust, lifetime simultaneous localization and mapping (SLAM)
Multimodal integration
Adaptive dexterous manipulation
Autonomous navigation
Despite recent progress in self-driving cars by companies like Tesla, Waymo (formerly the Google self-driving car project) and many others, autonomous navigation in highly dynamic environments remains a largely unsolved problem, requiring knowledge of object semantics to reliably predict future scene states (Ess et al., 2010).
Tests
Fully automatic driving in crowded city streets and residential areas is still a challenging test for autonomous navigation.
Scene analysis
The challenge of scene analysis extends far beyond object recognition and includes the understanding of surfaces formed by multiple objects, scene 3D structure, causal relations (Lake et al., 2016), and affordances. It is not limited to vision but can depend on audition, touch, and other modalities, e.g., electroreception and echolocation (Lewicki et al., 2014; Kondo et al., 2017). While progress has been made, e.g., in recognizing anomalous and improbable scenes (Choi et al., 2012), predicting object dynamics (Fouhey and Zitnick, 2014), and discovering object functionality (Yao et al., 2013), we are still far from human-level performance in this area.
Tests
Some possible challenges for understanding the causal structure in visual scenes are:
Recognizing dangerous situations: A corpus of synthetic images could be created where the same objects are recombined to form “dangerous” and “safe” scenes as classified by humans.
Recognizing physically improbable scenes: A synthetic corpus could be created to show physically plausible and implausible scenes containing the same objects.
Recognizing useless objects: Images of useless objects have been created by (Kamprani, 2017).
Object recognition
While object recognition has seen great progress in recent years (e.g., Han et al., 2016), matches or surpasses human performance for many problems (Karpathy, 2014), and can approach perfection in closed environments (Song et al., 2015), state-of-the-art systems still struggle with the harder cases such as open objects (interleaved with background), broken objects, truncation and occlusion in dynamic environments (e.g., Rajaram et al., 2015).
Tests
Environments that are cluttered and contain objects drawn from a large, open-ended, and changing set of types are likely to be challenging for an object recognition system. An example would be
Seeing photos of the insides of pantries and refrigerators and listing the ingredients available to the owners
Simultaneous localization and mapping
While the problem of simultaneous localization and mapping (SLAM) is considered solved for some applications, the challenge of SLAM for long-lived autonomous robots in large-scale, time-varying environments remains open (Cadena et al., 2016).
Tests
Lifetime localization and mapping for an autonomous car based in a large city, without detailed maps provided in advance and robust to changes in the environment
Multimodal integration
The integration of multiple senses (Lahat, 2015) is important, e.g., in human communication (Morency, 2015) and scene understanding (Lewicki et al., 2014; Kondo et al., 2017). Having multiple overlapping sensory systems seems to be essential for enabling human children to educate themselves by perceiving and acting in the world (Smith and Gasser, 2005).
Tests
Spoken communication in noisy environments, where lip reading and gestural cues are indispensable, can provide challenges for multimodal fusion. An example would be
A robot bartender: The agent needs to interpret customer requests in a noisy bar.
Adaptive dexterous manipulation
Current robot manipulators do not come close to the versatility of the human hand (Ciocarlie, 2015). Hard problems include manipulating deformable objects and operating from a mobile platform.
Tests
Taking out clothes from a washing machine and hanging them on clothes lines and coat hangers in varied places while staying out of the way of humans
Open-ended problems
Some noteworthy problems were omitted from the list because their scope is too open-ended: they encompass sets of tasks that evolve over time or can be endlessly extended, which makes it hard to decide whether a problem has been solved. Problems of this type include
Enrolling in a human university and taking classes like humans do (Goertzel, 2012)
Automating all types of human work (Nilsson, 2005)
Puzzlehunt challenges, e.g., the annual TMOU game in the Czech Republic (TMOU, 2016)
Conclusion
I have reviewed a number of open problems in an attempt to delineate the current front lines of AI research. The problem list in this first version, as well as the problem descriptions, example tests, and mentions of ongoing work in the research areas, are necessarily incomplete. I plan to extend and improve the document incrementally and warmly welcome suggestions either in the comment section below or at the institute’s discourse forum.
Acknowledgements
I thank Jan Feyereisl, Martin Poliak, Petr Dluhoš, and the rest of the GoodAI team for valuable discussion and suggestions.
References
Agrawal, Pulkit, Joao Carreira, and Jitendra Malik. “Learning to see by moving.” Proceedings of the IEEE International Conference on Computer Vision. 2015.
Agrawal, Pulkit, et al. “Learning to poke by poking: Experiential learning of intuitive physics.” arXiv preprint arXiv:1606.07419 (2016).
AI•ON. “The AI•ON collection of open research problems.” Online under http://ai-on.org/projects (2016)
Allamanis, Miltiadis, Hao Peng, and Charles Sutton. “A convolutional attention network for extreme summarization of source code.” arXiv preprint arXiv:1602.03001 (2016).
Andrychowicz, Marcin, et al. “Learning to learn by gradient descent by gradient descent.” Advances in Neural Information Processing Systems. 2016.
Blitzer, John, Mark Dredze, and Fernando Pereira. “Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification.” ACL. Vol. 7. 2007.
Brachman, Ronald J. “AI more than the sum of its parts.” AI Magazine 27.4 (2006): 19.
Brooks, R., et al. “Challenge problems for artificial intelligence.” Thirteenth National Conference on Artificial Intelligence-AAAI. 1996.
Cadena, Cesar, et al. “Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age.” IEEE Transactions on Robotics 32.6 (2016): 1309–1332.
Chang, Michael B., et al. “A compositional object-based approach to learning physical dynamics.” arXiv preprint arXiv:1612.00341 (2016).
Chen, Yutian, et al. “Learning to Learn for Global Optimization of Black Box Functions.” arXiv preprint arXiv:1611.03824 (2016).
Choi, Myung Jin, Antonio Torralba, and Alan S. Willsky. “Context models and out-of-context objects.” Pattern Recognition Letters 33.7 (2012): 853–862.
de Freitas, Nando. “Learning to Learn and Compositionality with Deep Recurrent Neural Networks: Learning to Learn and Compositionality.” Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.
Degrave, Jonas, Michiel Hermans, and Joni Dambre. “A Differentiable Physics Engine for Deep Learning in Robotics.” arXiv preprint arXiv:1611.01652 (2016).
Deng, Jia, et al. “Imagenet: A large-scale hierarchical image database.” Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
Denil, Misha, et al. “Learning to Perform Physics Experiments via Deep Reinforcement Learning.” arXiv preprint arXiv:1611.01843 (2016).
Duan, Yan, et al. “RL²: Fast Reinforcement Learning via Slow Reinforcement Learning.” arXiv preprint arXiv:1611.02779 (2016).
Ess, Andreas, et al. “Object detection and tracking for autonomous navigation in dynamic environments.” The International Journal of Robotics Research 29.14 (2010): 1707–1725.
Finn, Chelsea, and Sergey Levine. “Deep Visual Foresight for Planning Robot Motion.” arXiv preprint arXiv:1610.00696 (2016).
Fouhey, David F., and C. Lawrence Zitnick. “Predicting object dynamics in scenes.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014.
Fragkiadaki, Katerina, et al. “Learning visual predictive models of physics for playing billiards.” arXiv preprint arXiv:1511.07404 (2015).
Garofolo, John, et al. “TIMIT Acoustic-Phonetic Continuous Speech Corpus LDC93S1.” Web Download. Philadelphia: Linguistic Data Consortium, 1993.
Gaunt, Alexander L., et al. “Terpret: A probabilistic programming language for program induction.” arXiv preprint arXiv:1608.04428 (2016).
Genesereth, Michael, Nathaniel Love, and Barney Pell. “General game playing: Overview of the AAAI competition.” AI magazine 26.2 (2005): 62.
Graves, Alex, et al. “Hybrid computing using a neural network with dynamic external memory.” Nature 538.7626 (2016): 471–476.
Hamrick, Jessica B., et al. “Imagination-Based Decision Making with Physical Models in Deep Neural Networks.” Online under http://phys.csail.mit.edu/papers/5.pdf (2016)
Han, Dongyoon, Jiwhan Kim, and Junmo Kim. “Deep Pyramidal Residual Networks.” arXiv preprint arXiv:1610.02915 (2016).
Harlow, Harry F. “The formation of learning sets.” Psychological review 56.1 (1949): 51.
Held, Richard, and Alan Hein. “Movement-produced stimulation in the development of visually guided behavior.” Journal of comparative and physiological psychology 56.5 (1963): 872.
Hernández-Orallo, José. “Evaluation in artificial intelligence: from task-oriented to ability-oriented measurement.” Artificial Intelligence Review (2016a): 1–51.
Hernández-Orallo, José, et al. “Computer models solving intelligence test problems: progress and implications.” Artificial Intelligence 230 (2016b): 74–107.
Hernández-Orallo, José. “The measure of all minds.” Cambridge University Press, 2017.
IOCCC. “The International Obfuscated C Code Contest.” Online under http://www.ioccc.org (2016)
Guest post by Martin Stránský, Research Scientist @GoodAI
Recent progress in artificial intelligence, especially in the area of deep learning, has been breathtaking. This is very encouraging for anyone interested in the field, yet true progress towards human-level artificial intelligence is much harder to evaluate.
The evaluation of artificial intelligence is a difficult problem for a number of reasons. For example, the lack of consensus on the basic desiderata necessary for intelligent machines is one of the primary barriers to developing unified approaches for comparing different agents. Despite a number of researchers specifically focusing on this topic (e.g. José Hernández-Orallo or Kristinn R. Thórisson, to name a few), the area would benefit from more attention from the AI community.
Methods for evaluating AI are important tools that help assess the progress of already-built agents. The comparison and evaluation of roadmaps and approaches towards building such agents is, however, less explored. Such comparison is potentially even harder, due to the vagueness and limited formal definitions within such forward-looking plans.
Nevertheless, we believe that in order to steer towards promising areas of research and to identify potential dead-ends, we need to be able to meaningfully compare existing roadmaps. Such comparison requires a framework that defines how to extract important, comparable information from the documents outlining each roadmap. Without such a unified framework, roadmaps might differ not only in their targets (e.g. general AI, human-level AI, conversational AI, etc.) but also in their approaches towards achieving those goals, making them impossible to compare and contrast.
This post offers a glimpse of how we at GoodAI are starting to look at this problem internally (comparing the progress of our three architecture teams), and how this might scale to comparisons across the wider community. This is still very much a work in progress, but we believe it is beneficial to share these initial thoughts with the community and start a discussion about what we believe is an important topic.
Overview
In the first part of this article, a comparison of three GoodAI architecture development roadmaps is presented and a technique for comparing them is discussed. The main purpose is to estimate the potential and completeness of each architecture's plan, so that we can direct our effort to the most promising one.
To make it possible to add roadmaps from other teams, we have developed a general plan of human-level AI development called a meta-roadmap. This meta-roadmap consists of 10 steps which must be completed in order to reach the 'ultimate' target. We hope that most of the potentially disparate plans solve one or more problems identified in the meta-roadmap.
Next, we tried to compare our approaches with that of Mikolov et al. by assigning the current documents and open tasks to problems in the meta-roadmap. We found this useful, as it showed us what is comparable and that a different technique of comparison is needed for every problem.
Architecture development plans comparison
Three teams from GoodAI have been working on their architectures for a few months. We now need a method to measure each architecture's potential so that we can, for example, direct our effort more efficiently by allocating more resources to the team with the highest potential. Since it is not yet possible to determine which approach is most promising from its current state alone, we asked the teams working on the unfinished architectures to create plans for future development, i.e. to create their roadmaps.
Based on the provided responses, we have iteratively unified requirements for those plans. After numerous discussions, we came up with the following structure:
A unit of a plan is called a milestone; it describes a piece of work on part of the architecture (e.g. a new module, a different structure, an improvement of a module by adding functionality, tuning parameters, etc.).
Each milestone contains a Time Estimate (the expected time to be spent on the milestone, assuming the current team size), a Characteristic of the work or new features, and a Test of the new features.
A plan can be interrupted by checkpoints which serve as common tests for two or more architectures.
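The structure above lends itself to a simple schema. Below is a minimal, hypothetical sketch in Python; the class and field names are our own illustration, not part of any actual GoodAI tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    """A unit of a plan: a piece of work on part of the architecture."""
    characteristic: str       # description of the work or new features
    time_estimate_weeks: int  # expected effort, assuming current team size
    test: str                 # how the new features will be verified

@dataclass
class Checkpoint:
    """A common test shared by two or more architectures."""
    name: str
    architectures: list = field(default_factory=list)

@dataclass
class Roadmap:
    architecture: str
    items: list = field(default_factory=list)  # Milestones and Checkpoints, in order

    def total_estimate_weeks(self) -> int:
        # Only milestones carry time estimates; checkpoints are shared tests.
        return sum(i.time_estimate_weeks for i in self.items
                   if isinstance(i, Milestone))

plan = Roadmap("ArchA", [
    Milestone("add a memory module", 6, "recall test on sequence tasks"),
    Checkpoint("shared benchmark task", ["ArchA", "ArchB"]),
    Milestone("tune controller parameters", 2, "re-pass earlier tests faster"),
])
```

With roadmaps in this form, total time estimates can be compared directly (`plan.total_estimate_weeks()` gives 8 here) and shared checkpoints can be located across architectures.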
Now we have a set of basic tools to monitor progress:
We will see whether a particular team achieves its self-designed tests and can thereby fulfill its original expectations on schedule.
Checkpoints make it possible to compare architectures in the middle of development.
We can see how far ahead each team plans. Ideally, after finishing the last milestone, the architecture should be prepared to pass through a curriculum (which will be developed in the meantime) and a final test afterwards.
We can also compare total time estimates.
We are still working on a unified set (among GoodAI architectures) of features which we will require from an architecture (desiderata for an architecture).
The particular plans were placed side by side (cf. Figure 1) and a few checkpoints were (currently vaguely) defined. As we can see, the teams have rough plans for more than one year of work ahead; still, the plans are not complete, in the sense that the architectures will not yet be ready for any curriculum. Two architectures use a connectionist approach and are easy to compare. The third, OMANN, manipulates symbols, so from the beginning it can perform tasks which are hard for the other two architectures, and vice versa. This means that no checkpoints for OMANN have been defined yet. We see this lack of common tests as a serious issue with the plan and are looking for changes that would make the architecture more comparable with the others, although this may cause some delays in development.
There was an effort to include another architecture in the comparison, but we were not able to find a document describing future work in such detail, with the exception of the paper by Weston et al. After further analysis, we determined that this paper focuses on a slightly different problem than the development of an architecture. We will address this later in the post.
Assumptions for a common approach
We would like to look at the problem from the perspective of the unavoidable steps required to develop an intelligent agent. First we must make a few assumptions about the whole process. We realize that these are somewhat vague; this is deliberate, as we want them to be acceptable to other AI researchers.
The target is to produce software (referred to as an architecture) which can be part of some agent in some world.
In the world there will be tasks that the agent should solve, or a reward based on world states that the agent should seek.
An intelligent agent can adapt to an unknown/changing environment and solve previously unseen tasks.
To check whether the ultimate goal has been reached (no matter how it is defined), every approach needs a well-defined final test which shows how intelligent the agent is (preferably compared to humans).
Before the agent is able to pass its final test, there must be a learning phase to teach the agent all the necessary skills and abilities. If the agent could pass the final test without learning anything, the final test is insufficient with respect to point 3. The description of the learning phase (which can also include a world description) is called a curriculum.
Meta-roadmap
Using the above assumptions (and a few more obvious ones which we won’t enumerate here) we derive Figure 2 describing the list of necessary steps and their order. We call this diagram a meta-roadmap.
The most important and imminent tasks in the diagram are:
The definition of an ultimate target,
A final test specification,
The proposed design of a curriculum, and
A roadmap for the development of an architecture.
We think that the majority of current approaches solve one or more of these open problems, each from a different point of view according to the authors' ultimate target and beliefs. To make the overall effort clearer, we will divide the approaches described in published papers into groups according to the problem that they solve and compare them within those groups. Of course, approaches are hard to compare across groups (yet not impossible; for example, a final test can be comparable to a curriculum under specific circumstances). Even within one group comparison can be very hard in situations where the requirements (which, according to our diagram, are the first thing that should be defined) differ significantly.
An analysis of the complexity and completeness of an approach can also be made within this framework. For example, if a team omits one or more of the open problems, it indicates that the team may not have considered that particular issue and is proceeding without a complete notion of the 'big picture'.
Problem assignment
We would like to show an attempt to assign approaches to problems and compare them. First, we have analyzed GoodAI’s and Mikolov/Weston’s approach as the latter is well described. You can see the result in Figure 3 below.
As the diagram suggests, we work on a few common problems. We will not provide the full analysis here, but will make several observations to demonstrate the usefulness of the meta-roadmap. In the desiderata of Mikolov's "A Roadmap towards Machine Intelligence", the target is an agent which can understand human language; in contrast with the GoodAI approach, modalities other than text are not considered important. In the curriculum, GoodAI wants to teach an agent in a more anthropocentric way (visual input first, language later), while the entirety of Weston's curriculum consists of language-oriented tasks.
Mikolov et al. do not provide a development plan for their architecture, so we can compare their curriculum roadmap to ours, but it is not possible to include their desiderata into the diagram in Figure 1.
Conclusion
We have presented our meta-roadmap and a comparison of three GoodAI development roadmaps. We hope that this post will offer a glimpse into how we started this process at GoodAI and will invigorate a discussion on how this could be improved and scaled beyond internal comparisons. We will be glad to receive any feedback — the generality of our meta-roadmap should be discussed further, as well as our methods for estimating roadmap completeness and their potential to achieve human-level AI.
Workers have long confronted dangerous and dirty jobs. They've had to dig to the bottom of mines, or put themselves in harm's way to decommission ageing nuclear sites. It's time to make these jobs safer and more efficient, and robots are just starting to provide the necessary tools.
Mining
Mining has become much safer, yet workers continue to die every year in accidents across Europe, highlighting the perils of this genuinely needed industry. Everyday products use minerals extracted from mining, and 30 million jobs in the EU depend on their supply. Robots are a way to modernise an industry that is under constant pressure from falling commodity prices and the lack of safe access to hard-to-reach resources. Making mining greener is also a key concern.
The vision is for people to move away from the rock face, and onto the surface. In an ideal world where mining 4.0 is the norm, a central control room will run all operations in the mine, which will become a zero-entry zone for workers. Robots will take care of safety critical tasks such as drilling, extracting, crushing and transport of excavated material. The mine could operate continuously while experts on the surface are in charge of managing, monitoring, optimising and maintenance of the systems – essentially making mining a high-tech job.
A recent report from the IDC echoes this vision saying, “The future of mining is to create the capability to manage the mine as a system – through an integrated web of technologies such as virtualization, robotics, Internet of Things (IoT), sensors, connectivity, and mobility – to command, control, and respond.”
And companies are betting their money on it, “69% of mining companies globally are looking for remote operation and monitoring centres, 56% at new mine methods, 29% at robotics and 27% at unmanned drones.”
Europe is also heavily investing, with several large projects over the past 15 years. The European project I2Mine, which finished last year, focussed on technologies suitable for underground mining activities at depths greater than 1500m. With €23 million invested, it was the biggest EU RTD project funded in the mining sector.
Project Manager, Dr Horst Hejny, said: “The overall objective of the project was the development of innovative technologies and methods for sustainable mining at greater depths.”
One result of the project was a set of sensors for material recognition and boundary layer detection and sorting, as well as a new cutting head which allows for continuous operation.
Full, and even remote, automation, however, is still a long way ahead. Like any robotic system, automated mining will have to deal with a plethora of real-world challenges. And navigating underground mines, or manipulating rock, are very far from ideal laboratory settings. As an intermediate step, researchers are looking to set up test sites where they can experiment with the technology outside of the lab and before deployment in safety critical areas. Juha Röning from the University of Oulu in Finland uses Oulu Zone, a race track that could prove helpful to test automated driving for the mining industry. His laboratory has previous experience in this area, having tested an automated dumper robot for excavated material. It’s an obvious application for a country that Juha says “invented mining”. There is more to it than autonomous driving, however, and his laboratory has been thinking about ways to improve the infrastructure around the deployment of mobile robots, including using advanced positioning systems to increase the precision of robot tracking and control.
Another test site, RACE, which stands for Remote Applications in Challenging Environments, was recently opened by the UK Atomic Energy Authority. The facility conducts R&D and commercial activities focused on developing robots and autonomous systems for challenging environments.
On their website, they claim to be challenging ‘challenging environments’ saying: “Challenging Environments exist in numerous sectors, nuclear, petrochemical, space exploration, construction and mining are examples. The technical hurdle is different for different physical environments and includes radiation, extreme temperature, limited access, vacuum and magnetic fields, but solutions will have common features. The commercial imperative is to enable safe and cost efficient operations.”
Rather than develop full turn-key solutions for mines, many European companies have been providing automation solutions for very specific tasks. Swedish company Sandvik, for example, demonstrated a fully automated LHD (Load, Haul, Dump machine) vehicle.
Also based in Sweden, Atlas Copco has an autonomous LHD system of their own called Scooptram.
Polish company KGHM, a leader in copper and silver production, has been deeply involved in many R&D projects across Europe. Their mines in Lubin and Polkowice-Sieroszowice have served as test sites for recent developments. KGHM and the Swedish mining companies Boliden and LKAB joined forces with several major global suppliers and academia to develop a common vision of future mining for 2011 to 2020.
The report discusses how to make deep mining of the future “safer, leaner and greener.” The short answer: “we need an innovative organisation that attracts talented young men and women to meet the grand challenges and opportunities of future mineral supply.” They add however that “by 2030 we will not yet have achieved invisible mining, zero waste, or the fully intelligent, automated mine without any human presence”. More time is needed.
Robotics technology also opens a new frontier in areas that can be mined beyond what is currently human-reachable. The new push is towards mining the deep sea, or space in a responsible manner. UK-based company Soil Machine Dynamics Ltd recently developed three vehicles that operate at depths of up to 2,500m on seafloor massive sulphide (SMS) deposits for the company Nautilus Minerals. The subsea mining machines weigh up to 310t and have vessel-based power and control systems, pilot consoles, umbilical systems and launch and recovery systems.
As for the space race, asteroids provide an untapped resource for metallic elements such as iron, nickel, and cobalt. Although space robots have been shown to navigate and drill in space, scaling up to meaningful extraction quantities will be a challenge. And it’s still unclear if the cost and complexity of space mining justify the means.
Nuclear Decommissioning
Like mining, nuclear decommissioning requires zero-entry operations. Across Europe, there are plans to close up to 80 civilian nuclear power reactors in the next ten years.
“The total cost of nuclear decommissioning in the UK alone is currently estimated at £60 billion. Analysis by the National Nuclear Laboratory indicates that 20% of the cost of complex decommissioning will be spent on RAS (Robotics and Autonomous Systems) technology.” – RAS UK Strategy.
Designing robots for the nuclear environment is especially challenging because the robots need to be robust, reliable and safe, and must also withstand a highly radioactive environment.
In 2012, one of the high-hazard plants at Sellafield in the UK used a custom-made remotely operated robot arm to isolate the risk posed by a 60-year-old First Generation Magnox Storage Pond. The arm had to separate and remove redundant pipework in a high-radiation area, and then clean and seal a contaminated pond wall. The redundant pipework was isolated with special sealants before its remote removal. The robotic arm then scabbled the pond wall and applied a specialist coating to seal the concrete.
Over 80,000 hours of testing in a separate test facility were needed before the team had confidence the robots would perform flawlessly on such a high-risk task.
The Sellafield site has since added a “Riser” (Remote Intelligence Survey Equipment for Radiation) quadcopter developed by Blue Bear Systems Research Ltd and Createc Ltd. It is equipped with a system that allows it to map the inside of a building and radiation levels.
Small underwater vehicles have been deployed in the nuclear storage ponds. The robots, built by James Fisher Nuclear, draw on existing technology developed for inspection in the offshore oil and gas industry. They were initially sent in to image the environment but are now used for basic manipulation tasks.
In Marcoule, France, Maestro, a tele-operated robot arm, is also being used to decommission a site. The robot can laser-cut 4m x 2m metal structures into smaller pieces. Humans could do this faster, but 30 minutes on the site would be lethal.
Given the high-risk nature of nuclear decommissioning, traditional robotic solutions seem to be favoured for now, as they are tested and understood. However, a new wave of innovative solutions is also making its way to the market.
Swiss startup Rovenso, for example, developed ROVéo, a robot whose unique four-wheel design allows it to climb over obstacles up to two-thirds its height. They aim to produce a larger-scale model equipped with a robotic arm for use in dismantling nuclear plants.
OC Robotics in the UK is also working closely with the nuclear industry to build robots that have a better reach than conventional industrial robot arms. Their snake arms can be fitted with lasers or other tools, and can slither through nearly any structure.
Andy Graham, Technical Director at OC Robotics, said “Robots have the potential to improve everyone’s quality of life. Reducing the need for people to enter hazardous workplaces and confined spaces is central to what we do at OC Robotics, whether the application is in manufacturing industries, inspection and maintenance in the oil and gas sector, or decommissioning nuclear power stations. Users are becoming more and more aware of the potential for robots to enable their workers to work more comfortably and safely from outside these spaces”.
“The Lasersnake 2 project, led by OC Robotics and part-funded by the UK government, has developed and is currently testing a snake-arm robot equipped with a powerful laser capable of cutting effortlessly through 60mm thick steel. The same snake-arm robot can be equipped with a gripper enabling it to lift 25kg at a reach of about 5m, and has also been demonstrated underwater in an environment similar to a nuclear storage pond. With this cutting capability and the ability to snake through small holes and around obstacles, this enables “keyhole surgery” for nuclear decommissioning, leaving containment structures, shielding and cells intact while dismantling the processing equipment inside them”.
Beyond decommissioning, robots are also being used for new energy infrastructure including at ITER, the next generation fusion research device being built in the south of France that will achieve ‘burning plasma’, one of the required steps for fusion power. Remote handling is critical to ITER, says project partner RACE.
“Cutting and welding large diameter stainless steel pipes is a fundamental process for remote maintenance of ITER.” RACE has been developing concepts for remotely replacing the beam sources of the neutral beam heating system, whose high-energy ion beams are used to heat the plasma to 200 million °C.
From their website, “A monorail crane was designed with high lift in a compact space, with an innovative control system for high radiation environments. The beam line transporter operates along the full length of the beam line, like an industrial production line. It has a load capacity of many tonnes, haptic feedback and is fully remotely operated and remotely recoverable.”
“ITER provides some seriously challenging environments for robotics: high radiation dose; elevated temperatures; limited access; large, compact equipment and some very challenging inspection and maintenance procedures to implement fast and reliably, without failure.”
These projects are just part of a worldwide effort to advance the safety of nuclear applications. Japan has also been working on its robot fleet, in response to the Fukushima disaster and the cleanup efforts still ahead. Their robots take different shapes and forms depending on their task, and can blast dry ice, inspect vents, cut pipes, and remove debris.
Competitions like the recent Darpa Robotics Challenge, or the European Robotics League (ERL) Emergency Challenge, have been driving the state of the art forward. ERL Emergency is an outdoor multi-domain robotic competition inspired by the 2011 Fukushima accident. The challenge requires teams of land, underwater and flying robots to work together to survey the scene, collect environmental data, and identify critical hazards.
“Robotics competitions are not just for testing a robot outside a laboratory, or engaging with an audience; they are events that get people together, inspire younger generations and facilitate cooperation and exchange of knowledge between multiple research groups. Robotics competitions provide a perfect platform for challenging, developing and showcasing robotics technologies.” said Marta Palau, ERL Emergency Project Manager
Similar scenarios are also being explored by Juha Röning from the University of Oulu in Finland. He aims to use flying robots to map radiation levels after a nuclear accident thanks to support from the Nordic Nuclear Safety Research Agency. He says “in the future, flying robots could be used to map radiation levels, and then a second team of ground robots could be sent in for the cleanup”.
Overall, robots are helping workers avoid dirty and dangerous areas, while making the job more efficient, and potentially fulfilling. We are only at the initial stages, however, as many of these high-risk tasks require years of testing before new technologies are implemented.
When building a new robot, mechanical engineers always ask me: how close can different antennas be to one another? It is not uncommon to try squeezing five or more antennas onto a single robot (GPS, a second GPS for heading, RTK, joystick, e-stop, communications, etc.). So what is the proper response? The real answer is that it depends heavily on the environment. However, below are the rules of thumb that I have learned and that have been passed down to me for antenna separation. One disclaimer: some of this is rule-of-thumb knowledge that has been passed down, and it may not be 100% correct.
Here is the rule: The horizontal distance between antennas should be greater than 1/4 of its wavelength (absolute minimum separation), but it should not be located at the exact multiples of its wavelength (maybe avoid the first 3-4 multiples). If multiple frequency antennas are near each other, then use the spacing distance of the lower frequency antenna, or even better, try to satisfy the rule for both frequencies.
*If you are using two GPS antennas to compute heading then this does not apply. These numbers are strictly for RF considerations.
So, for example, if you have a GPS antenna and a 900MHz radio antenna, you would want them to be separated by at least 8.33cm (a quarter of the 900MHz wavelength, the lower of the two frequencies; more is better, within reason). You should also avoid putting them at exactly 24.43cm (the GPS L2 wavelength) or 33.3cm (the 900MHz wavelength) from each other.
This rule seems to work with the low power antennas that we typically use in most robotics applications. I am not sure how this would work with high power transmitters. For higher power transmitting antennas, you might want greater separation. The power drops off pretty quickly with distance (proportional to the square of the distance).
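The quarter-wavelength rule is straightforward to compute. Here is a small Python sketch (the function names are ours; treat the numbers as rule-of-thumb guidance, per the disclaimer above):

```python
C = 299_792_458  # speed of light, m/s

def quarter_wavelength_cm(freq_hz: float) -> float:
    """Minimum recommended antenna separation: 1/4 wavelength, in cm."""
    return C / freq_hz / 4 * 100

def spacings_to_avoid_cm(freq_hz: float, multiples: int = 4) -> list:
    """Exact wavelength multiples to avoid (the first few)."""
    wavelength_cm = C / freq_hz * 100
    return [round(n * wavelength_cm, 2) for n in range(1, multiples + 1)]

def min_separation_cm(freq_a_hz: float, freq_b_hz: float) -> float:
    """For two antennas at different frequencies, apply the rule
    at the lower frequency (the longer wavelength)."""
    return quarter_wavelength_cm(min(freq_a_hz, freq_b_hz))

# 900 MHz radio next to a GPS L1 antenna (1575.42 MHz):
# min_separation_cm(900e6, 1575.42e6) -> ~8.33 cm
# spacings_to_avoid_cm(900e6)         -> [33.31, 66.62, 99.93, 133.24]
```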
I also try to separate receive and transmit antennas (as a precaution) to try and prevent interference problems.
An extension of the above rule concerns ground planes. Ground planes are conductive reference planes placed below antennas to reflect the waves, creating a predictable wave pattern; they can also help prevent multipath waves (those bouncing off the ground, water, buildings, etc.). The further an antenna is from the ground (since the ground itself can act as a ground plane), the more likely it is that a dedicated ground plane becomes necessary. In its simplest form, a ground plane is a piece of metal that extends out from the antenna's base at least 1/4 wavelength in each direction. Fancier ground planes might just be several metal prongs that stick out. A very common ground plane is the metal roof of a vehicle/robot.
Note: Do not confuse ground planes with RF grounds, signal grounds, DC grounds, etc…
A GPS antenna is a good example. It should be mounted in the center of a robot's or car's metal roof, or on the largest flat metal surface available. This will minimize multipath signals from the ground. If there is no flat metal surface to mount the antenna on, you can create a ground plane by putting a 12.22cm-diameter metal sheet directly below the antenna (about half the signal's wavelength, which gives a quarter wavelength per side).
Note: Some fancy antennas do not require that you add a ground plane. For example, the Novatel GPS antennas do NOT require you to add a ground plane, as described above.
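Since the simple ground plane extends a quarter wavelength from the antenna base in each direction, its diameter works out to half a wavelength. A small sketch, under the same rule-of-thumb caveats as above:

```python
C = 299_792_458  # speed of light, m/s

def ground_plane_diameter_cm(freq_hz: float) -> float:
    """Diameter of a simple circular ground plane: 1/4 wavelength of
    skirt on each side of the antenna base, i.e. 1/2 wavelength total."""
    return C / freq_hz / 2 * 100

# GPS L2 (1227.60 MHz): ground_plane_diameter_cm(1227.60e6) -> ~12.21 cm,
# matching the ~12.22 cm sheet suggested above.
```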
As robots become integrated into society more widely, we need to be sure they’ll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov’s Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Today, more than 70 years after Asimov’s first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov’s Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?
Asimov knew they weren’t perfect
Asimov’s “I, Robot” stories explore a number of unintended consequences and downright failures of the Three Laws. In these early stories, the Three Laws are treated as forces with varying strengths, which can have unintended equilibrium behaviors, as in the stories “Runaround” and “Catch that Rabbit,” requiring human ingenuity to resolve. In the story “Liar!,” a telepathic robot, motivated by the First Law, tells humans what they want to hear, failing to foresee the greater harm that will result when the truth comes out. The robopsychologist Susan Calvin forces it to confront this dilemma, destroying its positronic brain.
In “Escape!,” Susan Calvin depresses the strength of the First Law enough to allow a super-intelligent robot to design a faster-than-light interstellar transportation method, even though it causes the deaths (but only temporarily!) of human pilots. In “The Evitable Conflict,” the machines that control the world’s economy interpret the First Law as protecting all humanity, not just individual human beings. This foreshadows Asimov’s later introduction of the “Zeroth Law” that can supersede the original three, potentially allowing a robot to harm a human being for humanity’s greater good.
0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.
Robots without ethics
It is reasonable to fear that, without ethical constraints, robots (or other artificial intelligences) could do great harm, perhaps to the entire human race, even by simply following their human-given instructions.
The 1991 movie “Terminator 2: Judgment Day” begins with a well-known science fiction scenario: an AI system called Skynet starts a nuclear war and almost destroys the human race. Deploying Skynet was a rational decision (it had a “perfect operational record”). Skynet “begins to learn at a geometric rate,” scaring its creators, who try to shut it down. Skynet fights back (as a critical defense system, it was undoubtedly programmed to defend itself). Skynet finds an unexpected solution to its problem (through creative problem solving, unconstrained by common sense or morality).
Catastrophe results from giving too much power to artificial intelligence.
Less apocalyptic real-world examples of out-of-control AI have actually taken place. High-speed automated trading systems have responded to unusual conditions in the stock market, creating a positive feedback cycle resulting in a “flash crash.” Fortunately, only billions of dollars were lost, rather than billions of lives, but the computer systems involved have little or no understanding of the difference.
Toward defining robot ethics
While no simple fixed set of mechanical rules will ensure ethical behavior, we can make some observations about properties that a moral and ethical system should have in order to allow autonomous agents (people, robots or whatever) to live well together. Many of these elements are already expected of human beings.
The EPSRC (the UK's Engineering and Physical Sciences Research Council) takes the position that robots are simply tools, for which humans must take responsibility. At the extreme other end of the spectrum is the concern that super-intelligent, super-powerful robots could suddenly emerge and control the destiny of the human race, for better or for worse. The following list defines a middle ground, describing how future intelligent robots should learn, like children do, how to behave according to the standards of our society.
If robots (and other AIs) increasingly participate in our society, then they will need to follow moral and ethical rules much as people do. Some rules are embodied in laws against killing, stealing, lying and driving on the wrong side of the street. Others are less formal but nonetheless important, like being helpful and cooperative when the opportunity arises.
Some situations require a quick moral judgment and response – for example, a child running into traffic or the opportunity to pocket a dropped wallet. Simple rules can provide automatic real-time response, when there is no time for deliberation and a cost-benefit analysis. (Someday, robots may reach human-level intelligence while operating far faster than human thought, allowing careful deliberation in milliseconds, but that day has not yet arrived, and it may be far in the future.)
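The two-speed decision process described here, a fast rule lookup with a slower deliberative fallback, can be sketched in a few lines. Everything below (the rule names, the situations, the action strings) is purely illustrative:

```python
def reflex_rules(situation):
    # Hypothetical hard-coded rules for split-second responses,
    # analogous to "child running into traffic" style cases.
    rules = {
        "child_in_traffic": "brake_and_shield",
        "dropped_wallet": "return_to_owner",
    }
    return rules.get(situation)

def deliberate(situation):
    # Placeholder for a slower cost-benefit analysis.
    return "weigh_options:" + situation

def decide(situation):
    action = reflex_rules(situation)    # fast path: constant-time lookup
    if action is None:
        action = deliberate(situation)  # slow path: full deliberation
    return action
```

The point of the sketch is the control flow: the reflex layer answers instantly when it can, and only novel situations pay the cost of deliberation.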
A quick response may not always be the right one, which may be recognized after feedback from others or careful personal reflection. Therefore, the agent must be able to learn from experience including feedback and deliberation, resulting in new and improved rules.
To benefit from feedback from others in society, the robot must be able to explain and justify its decisions about ethical actions, and to understand explanations and critiques from others.
Given that an artificial intelligence learns from its mistakes, we must be very cautious about how much power we give it. We humans must ensure that it has experienced a sufficient range of situations and has satisfied us with its responses, earning our trust. The critical mistake humans made with Skynet in “Terminator 2” was handing over control of the nuclear arsenal.
Trust, and trustworthiness, must be earned by the robot. Trust is earned slowly, through extensive experience, but can be lost quickly, through a single bad decision.
As with a human, any time a robot acts, the selection of that action in that situation sends a signal to the rest of society about how that agent makes decisions, and therefore how trustworthy it is.
A robot mind is software, which can be backed up, restored if the original is damaged or destroyed, or duplicated in another body. If robots of a certain kind are exact duplicates of each other, then trust may not need to be earned individually. Trust earned (or lost) by one robot could be shared by other robots of the same kind.
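The trust dynamics sketched in the last few paragraphs, earned slowly, lost quickly, and shared among identical robots, can be captured in a toy model. The update constants below are arbitrary illustrations, not values from the article:

```python
class TrustLedger:
    """Toy asymmetric trust model: trust is earned slowly through
    many good decisions and lost quickly through a single bad one.
    Scores are keyed by robot model, so identical robots share trust."""

    def __init__(self):
        self.scores = {}  # model name -> trust score in [0, 1]

    def record(self, model, good_decision):
        t = self.scores.get(model, 0.5)      # start from neutral trust
        if good_decision:
            t = min(1.0, t + 0.01)           # small gain per good action
        else:
            t = max(0.0, t - 0.30)           # steep loss for one bad decision
        self.scores[model] = t               # shared across the whole model line
        return t
```

Ten good decisions raise a model's trust only modestly, while one bad decision undoes all of that gain and more, mirroring the asymmetry described above.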
Behaving morally and well toward others is not the same as taking moral responsibility. Only competent adult humans can take full responsibility for their actions, but we expect children, animals, corporations, and robots to behave well to the best of their abilities.
Human morality and ethics are learned by children over years, but the nature of morality and ethics itself varies with the society and evolves over decades and centuries. No simple fixed set of moral rules, whether Asimov’s Three Laws or the Ten Commandments, can be adequate guidance for humans or robots in our complex society and world. Through observations like the ones above, we are beginning to understand the complex feedback-driven learning process that leads to morality.
Disclosure statement
Benjamin Kuipers is primarily a professor. He spends a small amount of time as an advisor for Vicarious.com, for which he receives a small amount of money and stock. He hopes that they (like other readers) will benefit intellectually from this article, but recognizes that they are unlikely to benefit financially. He has received a number of research grants from government and industry, none directly on this topic. He is a member of several professional organizations, including the Association for the Advancement of Artificial Intelligence (AAAI). He has also taken public positions and signed statements opposing the use of lethal force by robots, and describing his own decision not to take military funding for his research.
The times are changing! What seemed like science fiction a few years ago is now on the front page of the news. Augmented reality, wearables, virtual assistants, robots, and other smart devices are beginning to permeate our daily lives. Amazon Prime Air, the e-commerce giant’s new retail drone delivery system, is ready to launch but is waiting on regulatory approval from the U.S. Federal Aviation Administration. If you have not seen the latest video of their system in action, it’s an eye-opener on how far automation has come.
Turning from the skies to the road, robotics is set to revolutionize how we drive. Self-driving features are already available in certain models, such as Teslas, with fully automated vehicles expected to be ready for purchase in the next few years. In fact, some forecasts estimate that 10 million of these cars will be on the road by 2020. Who would have thought?
There’s no other way to state it: the business impacts of automation in the next decade will be profound. Market intelligence firm Tractica estimates that the global robotics market will grow from $28.3 billion worldwide in 2015 to $151.7 billion by 2020. What’s especially significant is that this market share will encompass most non-industrial robots, including segments like consumer, enterprise, medical, military, UAVs, and autonomous vehicles.
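The compound annual growth rate implied by Tractica's forecast is easy to check: five growth years between the 2015 and 2020 figures.

```python
# Implied CAGR behind the $28.3B (2015) -> $151.7B (2020) forecast.
start, end, years = 28.3, 151.7, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 40% per year
```

A sustained ~40% annual growth rate is what makes the forecast as aggressive as it sounds.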
But even more impactful than CAGR numbers and market projections around robotics are the implications for jobs. People are growing increasingly concerned about the ways automation will change their work and careers, and rightfully so. Gartner Research, one of the world’s leading voices in technology trends, has declared that the smart machine era will be the most disruptive in the history of IT. Yet a 2013 Gartner survey found that sixty percent of CEOs still dismiss the emergence of smart machines capable of taking away millions of middle-class jobs within the next 15 years as a “futuristic fantasy.” Be that as it may, this new era of robotics is going to dramatically change the nature of work, along with the roles and functions of business.
And this is where things get really interesting!
According to Remy Glaisner, CEO and founder of Myria Research, the future robotics revolution will significantly impact the C-Suite of business. As Robotics and Intelligence Operational Systems (RIOS) technologies scale up, companies will require more structured and strategic approaches to managing the implications of this global transformation on their verticals.
Enter the CRO or Chief Robotics Officer. In a whitepaper dedicated to this topic, Glaisner spells out the role and function of a CRO:
The Chief Robotics Officer plans, directs, organizes and manages all activities of the RIOS (Robotics & Intelligent Operational Systems) department to ensure the effective, secure and efficient use of all RIOS solutions and applications. These efforts must be accomplished in partnership with other business units (IT, Finance, Engineering, R&D, Operations, HR, Business Development, et al) and increasingly with senior management and the Board of Directors. CROs must have significant vertical industry knowledge so that they can better consider the evolution of RIOS solutions in existing and future functions and processes.
The anticipated effects of this new enterprise transformation in business and technology will be fascinating, if not a bit staggering, and suggest that we truly are living in unprecedented times.
Back in January of 2007, Bill Gates famously declared robots as the “next big thing” and placed the industry at the same place as the PC market in the late 1970s. Perhaps this prediction was a bit premature at the time since breakthroughs in mobile, cloud, and Big Data were just beginning. But now a decade later, things are much different; technology has reached an inflection point. Everything about the market suggests that it’s finally happening – that robots really are about to go mainstream.
So the question now becomes simple: How will you adapt and adjust to the global disruption caused by robots and automation in the next 5-10 years? What will your company do about this sea change? Now is the time to plan and pivot in order to avoid falling behind this technology curve. As Glaisner predicts, within the next decade “over 60% of manufacturing, logistics & supply chain, healthcare, agro-farming, and oil/gas/mining companies part of the Global 1000 will include a Chief Robotic Officer (CRO) and associated staff as part of their organization.”
The age of robotics is truly upon us. Will you be ready?
Can our emotional fear of machines, and the call for premature regulation, be mollified by a temporary increase in liability which takes the place of specific regulations to keep people safe?
So far, most new automotive technologies, especially ones that control driving such as autopilot, forward collision avoidance, lane keeping, anti-lock brakes, stability control and adaptive cruise control, have not been covered by specific regulations. They were developed and released by vendors, sold for years or decades, and when (and if) they got specific regulations, those often took the form of ‘electronic stability control is so useful, we will now require all cars to have it.’ It has worked reasonably well.
Just because there are no specific regulations for these things does not mean they are unregulated. There are rafts of general safety regulations on cars, and the biggest deterrent to the deployment of unsafe technology is the liability system and the huge cost of recalls. As a result, while there are exceptions, most carmakers are safety paranoid to a rather high degree — just because of liability. At the same time, they are free to experiment and develop new technologies. Specific regulations tend to come into play when it becomes clear that automakers are doing something dangerous, and that they won’t stop doing it because of the liability. In part, this is because today it’s easy to assign blame for accidents to drivers, and often harder to assign it to a manufacturing defect, or to a deliberate design decision.
The exceptions, like GM’s famous ignition switch problem, arise because of the huge cost of doing a recall for a defect that will have rare effects. Companies are afraid of having to replace parts in every car they made when they know they will fail — even fatally — just one time in a million. The one person killed or injured does not feel like one in a million, and our system pushes the car maker (and thus all customers) to bear that cost.
I wrote an article on regulating Robocar Safety in 2015, and this post expands on some of those ideas.
Robocars change some of this equation. First of all, in robocar accidents, the car manufacturer (or the maker of the driving system) is going to be liable by default. Nobody else really makes sense, and indeed some companies, like Volvo, Mercedes and Google, have already accepted that. Some governments are talking about declaring it, but frankly, it could never be any other way. Making the owner or passenger liable is technically possible, but do you want to ride in an Uber where you have to pay if it crashes for reasons having nothing to do with you?
Due to this, the fear of liability is even stronger for robocar makers.
Robocar failures will almost all be software issues. As such, once fixed, they can be deployed for free; the logistics of the “recall” will cost nothing. GM would have no reason not to send out a software update once it found a problem; it would be crazy not to. Instead, there is the difficult question of what to do between the time a problem is discovered and the time a fix has been declared safe to deploy. Shutting down the whole fleet is not a workable answer; it would kill deployment of robocars if, several times a year, all robocars stopped working.
In spite of all this history, and the prospect of it getting even better, a number of people, including government regulators, think they need to start writing robocar safety regulations today, rather than 10-20 years after the cars are on the road as has been traditional. This desire is well-meaning and understandable, but it’s actually dangerous, because it will significantly slow down the deployment of safety technologies that will save many lives by making the world’s second most dangerous consumer product safer. Regulations and standards generally codify existing practice and conventional wisdom. They are bad ideas with emerging technologies, where developers are coming up with entirely new ways to do things and entirely new ways to be safe. The last thing you want is to tell vendors that they must apply old-world thinking when they can come up with much better ways of thinking.
Sadly, there are groups who love old-world thinking, namely the established players. Big companies start out hating regulation but eventually come to crave it, because it mandates the way they already do things, and they understand how to comply. This stops upstarts from figuring out how to do it better, and the established players love that.
The fear of machines is strong, so it may be that something else needs to be done to satisfy two competing desires: the public’s desire to feel the government is working to keep these scary new robots from being unsafe, and the need for unconstrained innovation. I have no desire to satisfy the need to protect old ways of doing things.
One option would be to propose a temporary rule: For accidents caused by robocar systems, the liability, if the system should be at fault, shall be double that if a similar accident were caused by driver error. (Punitive damages for willful negligence would not be governed by this rule.) We know the cost of accidents caused by humans. We all pay for it with our insurance premiums, at an average rate of about 6 cents/mile. This would double that cost, pushing vendors to make their systems at least twice as safe as the average human in order to match that insurance cost.
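A back-of-envelope check of the proposal, using the article's ~6 cents/mile figure: under doubled liability, a robocar system must be at least twice as safe as the average human driver before its expected liability cost per mile falls back to the human baseline. The function below is just this arithmetic; the names are ours:

```python
# Expected liability cost per mile under the doubled-liability rule.
HUMAN_LIABILITY_PER_MILE = 0.06   # dollars, the article's ~6 cents/mile
LIABILITY_MULTIPLIER = 2.0        # the proposed temporary rule

def robocar_liability_per_mile(safety_factor):
    """A system `safety_factor` times safer than the average human
    has proportionally fewer at-fault accidents per mile."""
    return HUMAN_LIABILITY_PER_MILE * LIABILITY_MULTIPLIER / safety_factor

print(robocar_liability_per_mile(1.0))  # 0.12: merely human-level safety costs double
print(robocar_liability_per_mile(2.0))  # 0.06: 2x human safety breaks even
```

This is why the rule pushes vendors toward at least twice the average human's safety: anything less leaves them paying more per mile than human drivers' insurance does.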
Victims of these accidents (including hapless passengers in the vehicles) would now be doubly compensated. Sometimes no compensation is enough, but for better or worse, we have settled on values, and doubling them is not a bad deal. Creators of systems would have a higher bar to reach, and the public would know it.
While doubling the cost is a high price, I think most system creators would accept this as part of the risk of a bold new venture. You expect those to cost extra as they get started. You invest to make the system sustainable.
Over time, the liability multiplier would be reduced, and eventually the rule would go away entirely. I suspect that might take about a decade. The multiplier does present a barrier to entry for small players, and we don’t want something like that around for too long.
To make robots more cooperative and able to perform tasks in close proximity to humans, they must be softer and safer. A new actuator developed by a team led by George Whitesides, Ph.D., a Core Faculty member at Harvard’s Wyss Institute for Biologically Inspired Engineering and the Woodford L. and Ann A. Flowers University Professor of Chemistry and Chemical Biology in Harvard University’s Faculty of Arts and Sciences (FAS), generates movements similar to those of skeletal muscles, using vacuum power to actuate soft rubber beams.
Like real muscles, the actuators are soft and shock absorbing, and they pose no danger to their environment or to humans working collaboratively alongside them, including humans working with potential future robots equipped with these actuators. The work was reported June 1 in the journal Advanced Materials Technologies.
“Functionally, our actuator models the human bicep muscle,” said Whitesides, who is also a Director of the Kavli Institute for Bionano Science and Technology at Harvard University. “There are other soft actuators that have been developed, but this one is most similar to muscle in terms of response time and efficiency.”
Whitesides’ team took an unconventional approach to its design, relying on vacuum to decrease the actuator’s volume and cause it to buckle. While conventional engineering would consider buckling to be a mechanical instability and a point of failure, in this case the team leveraged this instability to develop VAMPs (vacuum-actuated muscle-inspired pneumatic structures). Whereas previous soft actuators rely on pressurized systems that expand in volume, VAMPs mimic true muscle because they contract, which makes them an attractive candidate for use in confined spaces and for a variety of purposes.
The actuator, comprising soft rubber, or “elastomeric,” beams, is filled with small, hollow chambers of air, like a honeycomb. When vacuum is applied, the chambers collapse and the entire actuator contracts, generating movement. The internal honeycomb structure can be custom tailored to enable linear, twisting, bending, or combined motions.
“Having VAMPs built of soft elastomers would make it much easier to automate a robot that could be used to help humans in the service industry,” said the study’s first author Dian Yang, who was a graduate researcher pursuing his Ph.D. in Engineering Sciences at Harvard during the time of the work, and is now a Postdoctoral Researcher.
The team envisions that robots built with VAMPs could be used to assist the disabled or elderly, to serve food, deliver goods, and perform other tasks related to the service industry. What’s more, soft robots could make industrial production lines safer, faster, and quality control easier to manage by enabling human operators to work in the same space.
Although a complex control system has not yet been developed for VAMPs, this type of actuation is easy to control because of its simplicity: when vacuum is applied, VAMPs contract. They could be used as part of a tethered or untethered system, depending on environmental or performance needs. Additionally, VAMPs are designed to resist failure: the team showed that VAMPs still function even when damaged with a 2mm hole. In the event of major damage, the system fails safely.
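The binary control described here ("when vacuum is applied, VAMPs will contract") amounts to a simple on/off switch; the class and its interface below are hypothetical sketches, not part of the study:

```python
class VacuumActuator:
    """Toy model of on/off vacuum actuation: vacuum on -> contract,
    vacuum off (vented) -> relax. A real system would drive a
    pump/valve; that hardware interface is not modeled here."""

    def __init__(self):
        self.contracted = False  # actuator starts at its rest shape

    def set_vacuum(self, on):
        # Applying vacuum collapses the honeycomb chambers and
        # contracts the actuator; venting restores the rest shape.
        self.contracted = bool(on)
        return "contracted" if self.contracted else "relaxed"
```

Even this trivial state machine captures the control story: there is no pressure profile to track, only a single binary input per actuator.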
“It can’t explode, so it’s intrinsically safe,” said Whitesides.
Whereas other actuators powered by electricity or combustion could cause damage to humans or their surroundings, loss of vacuum pressure in VAMPs would simply render the actuator motionless.
“These self-healing, bioinspired actuators bring us another step closer to being able to build entirely soft-bodied robots, which may help to bridge the gap between humans and robots and open entirely new application areas in medicine and beyond,” said Wyss Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Boston Children’s Hospital Vascular Biology Program, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).
In addition to Whitesides and Yang, other authors on the study included: Mohit S. Verma, Ph.D.,(FAS); Ju-Hee So, Ph.D., (FAS); Bobak Mosadegh, Ph.D., (Wyss, FAS); Christoph Keplinger, Ph.D., (FAS); Benjamin Lee (FAS); Fatemeh Khashai (FAS); Elton Lossner (FAS), and Zhigang Suo, Ph.D., (SEAS, Kavli Institute).
UPDATE: June 1, 2016: Forbes wrote today that Toyota is in discussions with Google not only for Boston Dynamics but also for Schaft, the Japanese startup that won the DARPA Robotics Challenge — a two-company sale.
May was another big month for robotics – 13 companies were funded to the tune of $111 million. Four companies were acquired with 2 of the 4 reporting selling prices totaling $422 million. And that’s without the $5.2 billion bid for Kuka by Chinese Midea, or the pending sale of Google’s Boston Dynamics.
The financial pages are lighting up over recent stories about these big-money sales. First there was the $5.2 billion offer by Midea Group, a Chinese appliance manufacturer, for Kuka AG, the Augsburg, Germany-based manufacturer of robots and automated systems. Kuka is one of the Big Four of robot manufacturers. On the day of the bid, Kuka’s stock rose from $84/share to $110 where it’s stayed since.
Then came the announcement by Tech Insider that the Toyota Research Institute is in the final phase of negotiations to acquire Google’s robotics company Boston Dynamics, of Big Dog fame. Boston Dynamics spun out of the MIT Leg Lab in 1992 and worked on various military and DARPA funded research projects until Google’s Andy Rubin acquired the company along with 8 other robotics companies. Boston Dynamics never quite adapted to Google and Google’s push to build a consumer robot, hence their being put on the block in March, 2016.
From Forbes, news of a new fund focusing on robotics: Chrysalix VC, a Vancouver, BC venture capital group focused on alternative energy, has partnered with Dutch robotics commercialization center RoboValley to create a new VC fund focused on robotics. The vehicle is targeting €100 million.
Below are the fundings, acquisitions, IPOs and failures that actually happened in May:
FUNDINGS
Locus Robotics raised $8 million in a Series A funding from existing seed investors. The funds will be used to expand product development and general marketing of Locus’ novel material handling robots. Locus is a Massachusetts-based company founded specifically in answer to Kiva Systems’ robots being taken in-house by Amazon and no longer being available to non-Amazon clients. Locus’ founder, Bruce Welty, is a Kiva-using distribution center owner who, as a consequence of Amazon’s actions, had no recourse other than to build a company of his own. Locus provides a fleet of robots, integrated into current warehouse management systems, that carry picked items to a conveyor or to the packing station, thereby reducing human walking distances and improving overall picking efficiency.
Gamaya, an aerial analytics spin-off from the Swiss EPFL, raised $3.2 million in a Series A funding. Funds will be used to develop their new hyperspectral imaging sensor, which captures 40 bands of light (traditional multispectral sensors capture 4), and its analytics software platform.
Hortau, a California soil moisture monitoring company, raised $10 million to grow and broaden its new system of networked field sensors, weather stations and control units, which lets growers remotely open and close valves and fire up engines for irrigation from cloud-based management software.
nuTonomy is a Cambridge-based start-up that raised $16 million in a Series A round of funding from a group of Singapore and US VCs. This is in addition to the $3.6 million raised in January, which included funds from Ford Chairman Bill Ford. nuTonomy is planning to launch a fleet of autonomous taxis in Singapore by 2019 and to begin testing later this year. nuTonomy is using retrofitted Mitsubishi electric cars and plans to add Renault EVs later this year.
Mazor Surgical Technologies, an Israeli company, has sold $11.9 million of its stock, 4% of its shares, to Medtronic, a global medical technology, services, and solutions provider, with a performance agreement to sell another 6% of Mazor shares for up to $20 million. An additional clause kicks in if performance milestones are met, whereby Mazor can issue an additional 5% of new shares for an additional $20M from Medtronic.
Dedrone GmbH, the German startup behind the DroneTracker drone detection platform, raised $10 million in a Series A funding from a group of EU and Silicon Valley VCs. In just 15 months, Dedrone has grown to more than 40 employees and 100 distributors in over 50 countries.
Astrobotic Technology, the CMU spin-off company working on delivering payloads to the moon, raised $2.5 million from Space Angels Network. Astrobotic has 10 projects with governments, companies, universities, non-profits, NASA, and individuals for their first moon mission.
MegaBots, an Oakland, CA entertainment startup, has raised $2.4 million in seed funding to bring robot-fighting to a venue near you. MegaBots plans to use the seed funding to build their robot for the fight against the Japanese team they’ve challenged; and to secure sponsorships, perhaps even a TV contract for a program that tracks the team from building the robots to competing.
Zipline International, a San Francisco startup, raised $800k from UPS and $18 million from Yahoo founder Jerry Yang, Microsoft co-founder Paul Allen and others to develop their small robot airplane designed to carry vaccines, medicine and blood to remote areas where health workers place text orders for what they need.
Cyberhawk Innovations raised $2.9 million in financing to enable UK-based Cyberhawk to expand its commercial development of the drone-captured data inspection market for the oil & gas industry and infrastructure markets.
Eonite Perception, a Silicon Valley vision systems startup, raised $5.25 million in a seed round from multiple Silicon Valley VCs. Eonite is building a 3D mapping and tracking system for the virtual reality marketplace using low latency dense depth sensors.
eyeSight Technologies, an Israeli vision systems startup, received $20 million from a Chinese VC group, for its vision system of sensing, gesture recognition and user awareness to be embedded into consumer products.
AIO Robotics is a Los Angeles startup developing an all-in-one 3D printer scanner with an onboard CAD and modeling system. AIO received an undisclosed amount of seed funding.
ACQUISITIONS
5D Robotics, a San Diego area integrator of unmanned and mobile robotics using ultra-wide band (5D) communications, acquired Aerial MOB, a drone aerial cinematography startup, for an undisclosed sum. The acquisition has led to the formation of the 5D Aerial division which will provide 3D mapping, photogrammetry, thermal and multi-spectral imagery data to vertical markets including oil and gas, utilities and construction.
Dematic, a global supplier of AGVs and materials handling technology, acquired (in March) NDC Automation, an AGV manufacturer in Australia and New Zealand, for an undisclosed amount.
Voith GmbH, a family-owned German group of industrial and engineering companies, has sold 80% of its industrial services unit to buyout group Triton Partners for $342 million to free up capital for planned investments. Voith has a 25.1% share of Kuka’s stock which, if the $5.2 bn Midea offer passes, will be worth close to 40% more than the share value the day before the offer. According to Forbes, Voith ranks 200th in global family-owned businesses with revenue of $7.5 bn and 43,000 employees.
ChemChina and a group of other investors including Chinese state funds, acquired Germany’s KraussMaffei Automation, an industrial robot integrator and plastics, carbon fiber, and rubber processor, for $1 billion – in January.
IPOs
None. Private placements and increased investment from hedge funds, mutual funds and via corporate acquisitions appear to have dried up the robotics IPO pipeline.
But Moley Robotics, a UK startup developing a cooking robot, is using new equity crowdfunding rules passed last year to offer 2% of its shares via the Seedrs crowdfunding site. Details will be released soon to subscribers to the Moley and Seedrs websites.
FAILURES
RoboDynamics, a SoCal startup with a stylish mobile telepresence robot named Luna, has gone out of business.