
Reading a neural network’s mind

Image: Chelsea Turner/MIT

By Larry Hardesty

Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.

During training, however, a neural net continually adjusts its internal settings in ways that even its creators can’t interpret. Much recent work in computer science has focused on clever techniques for determining just how neural nets do what they do.

In several recent papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Computing Research Institute have used a recently developed interpretive technique, which had been applied in other areas, to analyze neural networks trained to do machine translation and speech recognition.

They find empirical support for some common intuitions about how the networks probably work. For example, the systems seem to concentrate on lower-level tasks, such as sound recognition or part-of-speech recognition, before moving on to higher-level tasks, such as transcription or semantic interpretation.

But the researchers also find a surprising omission in the type of data the translation network considers, and they show that correcting that omission improves the network’s performance. The improvement is modest, but it points toward the possibility that analysis of neural networks could help improve the accuracy of artificial intelligence systems.

“In machine translation, historically, there was sort of a pyramid with different layers,” says Jim Glass, a CSAIL senior research scientist who worked on the project with Yonatan Belinkov, an MIT graduate student in electrical engineering and computer science. “At the lowest level there was the word, the surface forms, and the top of the pyramid was some kind of interlingual representation, and you’d have different layers where you were doing syntax, semantics. This was a very abstract notion, but the idea was the higher up you went in the pyramid, the easier it would be to translate to a new language, and then you’d go down again. So part of what Yonatan is doing is trying to figure out what aspects of this notion are being encoded in the network.”

The work on machine translation was presented recently in two papers at the International Joint Conference on Natural Language Processing. On one, Belinkov is first author and Glass is senior author; on the other, Belinkov is a co-author. On both, they’re joined by researchers from the Qatar Computing Research Institute (QCRI), including Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and Stephan Vogel. Belinkov and Glass are sole authors of the paper analyzing speech recognition systems, which Belinkov presented last week at the Conference on Neural Information Processing Systems.

Leveling down

Neural nets are so named because they roughly approximate the structure of the human brain. Typically, they’re arranged into layers, and each layer consists of many simple processing units — nodes — each of which is connected to several nodes in the layers above and below. Data are fed into the lowest layer, whose nodes process it and pass it to the next layer. The connections between layers have different “weights,” which determine how much the output of any one node figures into the calculation performed by the next.

During training, the weights between nodes are constantly readjusted. After the network is trained, its creators can determine the weights of all the connections, but with thousands or even millions of nodes, and even more connections between them, deducing what algorithm those weights encode is nigh impossible.

The MIT and QCRI researchers’ technique consists of taking a trained network and using the output of each of its layers, in response to individual training examples, to train another neural network to perform a particular task. This enables them to determine what task each layer is optimized for.
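
To make the probing idea concrete, here is a minimal sketch in Python. The papers train a small neural network as the probe; a logistic-regression classifier shows the same idea with less machinery. `get_layer_output`, `examples`, and `phone_labels` are hypothetical stand-ins, not the researchers’ code.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def probe_layer(activations, labels):
        # Train a simple classifier on one layer's activations and return
        # held-out accuracy: a proxy for how strongly that layer encodes
        # the property of interest (phones, parts of speech, ...).
        X_tr, X_te, y_tr, y_te = train_test_split(
            activations, labels, test_size=0.2, random_state=0)
        probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        return probe.score(X_te, y_te)

    # The trained network stays frozen; only the probes are trained.
    # for layer in range(num_layers):
    #     acts = np.stack([get_layer_output(net, layer, x) for x in examples])
    #     print(layer, probe_layer(acts, phone_labels))

Comparing probe accuracies across layers is what lets the researchers say, for example, that lower layers are better at recognizing phones than higher ones.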

In the case of the speech recognition network, Belinkov and Glass used individual layers’ outputs to train a system to identify “phones,” distinct phonetic units particular to a spoken language. The “t” sounds in the words “tea,” “tree,” and “but,” for instance, might be classified as separate phones, but a speech recognition system has to transcribe all of them using the letter “t.” And indeed, Belinkov and Glass found that lower levels of the network were better at recognizing phones than higher levels, where, presumably, the distinction is less important.

Similarly, in an earlier paper, presented last summer at the Annual Meeting of the Association for Computational Linguistics, Glass, Belinkov, and their QCRI colleagues showed that the lower levels of a machine-translation network were particularly good at recognizing parts of speech and morphology — features such as tense, number, and conjugation.

Making meaning

But in the new paper, they show that higher levels of the network are better at something called semantic tagging. As Belinkov explains, a part-of-speech tagger will recognize that “herself” is a pronoun, but the meaning of that pronoun — its semantic sense — is very different in the sentences “she bought the book herself” and “she herself bought the book.” A semantic tagger would assign different tags to those two instances of “herself,” just as a machine translation system might find different translations for them in a given target language.

The best-performing machine-translation networks use so-called encoder-decoder models, so the MIT and QCRI researchers’ network uses them as well. In such systems, the input, in the source language, passes through several layers of the network — known as the encoder — to produce a vector, a string of numbers that somehow represents the semantic content of the input. That vector passes through several more layers of the network — the decoder — to yield a translation in the target language.
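
A rough sketch of that architecture in PyTorch (illustrative only; the cell type, depth, and sizes here are assumptions, not the researchers’ actual model):

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        # Toy encoder-decoder: the encoder compresses the source sentence
        # into a fixed-size state (the "vector"); the decoder unrolls that
        # state into the target sentence.
        def __init__(self, src_vocab, tgt_vocab, dim=256):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, dim)
            self.tgt_emb = nn.Embedding(tgt_vocab, dim)
            self.encoder = nn.GRU(dim, dim, num_layers=2, batch_first=True)
            self.decoder = nn.GRU(dim, dim, num_layers=2, batch_first=True)
            self.out = nn.Linear(dim, tgt_vocab)

        def forward(self, src, tgt):
            _, state = self.encoder(self.src_emb(src))    # summarize source
            dec_states, _ = self.decoder(self.tgt_emb(tgt), state)
            return self.out(dec_states)                   # per-token logits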

Although the encoder and decoder are trained together, they can be thought of as separate networks. The researchers discovered that, curiously, the lower layers of the encoder are good at distinguishing morphology, but the higher layers of the decoder are not. So Belinkov and the QCRI researchers retrained the network, scoring its performance not only on translation accuracy but also on morphological analysis in the target language. In essence, they forced the decoder to get better at distinguishing morphology.
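
Continuing the sketch above, that joint objective might look like the following; the auxiliary tagging head, the 0.5 weighting, and the choice of decoder states are our assumptions, not the paper’s exact setup:

    # Auxiliary morphology head on top of the decoder states.
    # `num_morph_tags` is an assumed constant; 256 matches the sketch above.
    morph_head = nn.Linear(256, num_morph_tags)

    def joint_loss(model, src, tgt_in, tgt_out, morph_tags, alpha=0.5):
        ce = nn.CrossEntropyLoss()
        _, state = model.encoder(model.src_emb(src))
        dec_states, _ = model.decoder(model.tgt_emb(tgt_in), state)
        translation = ce(model.out(dec_states).flatten(0, 1), tgt_out.flatten())
        # Gradients from this term push the decoder to encode morphology.
        morphology = ce(morph_head(dec_states).flatten(0, 1), morph_tags.flatten())
        return translation + alpha * morphology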

Using this technique, they retrained the network to translate English into German and found that its accuracy increased by 3 percent. That’s not an overwhelming improvement, but it’s an indication that looking under the hood of neural networks could be more than an academic exercise.

#249: ICRA 2017 Company Showcase, with Li Bingbing, Xianbao Chen, Howard Michel and Lester Teh Chee Onn

Image: ICRA 2017

In this episode, Audrow Nash interviews several companies at the International Conference on Robotics and Automation (ICRA). ICRA is the IEEE Robotics and Automation Society’s biggest conference and one of the leading international forums for robotics researchers to present their work.

Interviews:

Li Bingbing, Software Engineer and Cofounder of Transforma in Singapore, on a robot for painting tall buildings.

Xianbao Chen, Associate Researcher at Shanghai Jiao Tong University, on a hexapod robot for large parts machining.

Howard Michel, Chief Technology Officer of UBTech Education, on a humanoid robot for STEM (Science, Technology, Engineering, and Math) education for a range of ages.

Lester Teh Chee Onn, Environmental Engineer at Advisian, on a watercraft for environmental monitoring.

 

Links

Robohub Podcast is on Patreon!

Robohub Podcast has launched a campaign on Patreon!

If you don’t know, Robohub Podcast is a biweekly podcast about robotics. Our goal is to explore global robotics through interviews with experts, both in academia and industry. In our interviews,

  • we discuss technical topics (how things work, design decisions),
  • entrepreneurship (lessons learned, business models, ownership),
  • and anything we find interesting and related to robotics (policy, ethics, global trends, international technology initiatives and education, etc.).

We have published nearly 250 episodes and have spoken with many of the most influential people in robotics, such as Rodney Brooks, Dean Kamen, Radhika Nagpal, and Helen Greiner.

We would like your support so we can bring you interviews from the leading robotics conferences and laboratories around the world. Our first goal is to send two interviewers to ICRA 2018 in Brisbane, Australia.

If you want to support us, visit our Patreon campaign.

 

FaSTrack: Ensuring safe real-time navigation of dynamic systems


By Sylvia Herbert, David Fridovich-Keil, and Claire Tomlin

The Problem: Fast and Safe Motion Planning

Real-time autonomous motion planning and navigation is hard, especially when we care about safety. This becomes even more difficult when we have systems with complicated dynamics, external disturbances (like wind), and a priori unknown environments. Our goal in this work is to “robustify” existing real-time motion planners to guarantee safety during navigation of dynamic systems.

In control theory there are techniques like Hamilton-Jacobi (HJ) Reachability Analysis that provide rigorous safety guarantees of system behavior, along with an optimal controller to reach a given goal (see Fig. 1). However, in general the computational methods used in HJ Reachability Analysis are only tractable for decomposable and/or low-dimensional systems; this is due to the “curse of dimensionality.” That means that for real-time planning we can’t compute safe trajectories for systems of more than about two dimensions. Since most real-world system models like cars, planes, and quadrotors have more than two dimensions, these methods are usually intractable in real time.

On the other hand, geometric motion planners like rapidly-exploring random trees (RRT) and model-predictive control (MPC) can plan in real time by using simplified models of system dynamics and/or a short planning horizon. Although this allows us to perform real-time motion planning, the resulting trajectories may be overly simplified, lead to unavoidable collisions, and may even be dynamically infeasible (see Fig. 1). For example, imagine riding a bike and following the path on the ground traced by a pedestrian. This path leads you straight towards a tree and then takes a 90-degree turn away at the last second. You can’t make such a sharp turn on your bike, and instead you end up crashing into the tree. Classically, roboticists have mitigated this issue by pretending obstacles are slightly larger than they really are during planning. This greatly improves the chances of not crashing, but still doesn’t provide guarantees and may lead to unanticipated collisions.

So how do we combine the speed of fast planning with the safety guarantee of slow planning?

fig1
Figure 1. On the left we have a high-dimensional vehicle moving through an obstacle course to a goal. Computing the optimal safe trajectory is a slow and sometimes intractable task, and replanning is nearly impossible. On the right we simplify our model of the vehicle (in this case assuming it can move in straight lines connected at points). This allows us to plan very quickly, but when we execute the planned trajectory we may find that we cannot actually follow the path exactly, and end up crashing.

The Solution: FaSTrack

FaSTrack (Fast and Safe Tracking) is a tool that essentially “robustifies” fast motion planners like RRT or MPC while maintaining real-time performance. FaSTrack allows users to implement a fast motion planner with simplified dynamics while maintaining safety in the form of a precomputed bound on the maximum possible distance between the planner’s state and the actual autonomous system’s state at runtime. We call this distance the tracking error bound. This precomputation also results in an optimal control lookup table which provides the optimal error-feedback controller for the autonomous system to pursue the online planner in real time.

fig2
Figure 2. The idea behind FaSTrack is to plan using the simplified model (blue), but precompute a tracking error bound that captures all potential deviations of the trajectory due to model mismatch and environmental disturbances like wind, and an error-feedback controller to stay within this bound. We can then augment our obstacles by the tracking error bound, which guarantees that our dynamic system (red) remains safe. Augmenting obstacles is not a new idea in the robotics community, but by using our tracking error bound we can take into account system dynamics and disturbances.

Offline Precomputation

We precompute this tracking error bound by viewing the problem as a pursuit-evasion game between a planner and a tracker. The planner uses a simplified model of the true autonomous system that is necessary for real-time planning; the tracker uses a more accurate model of the true autonomous system. We assume that the tracker — the true autonomous system — is always pursuing the planner. We want to know what the maximum relative distance (i.e. maximum tracking error) could be in the worst-case scenario: when the planner is actively attempting to evade the tracker. If we can bound this worst-case error, then we know the maximum tracking error that can occur at run time.

fig3
Figure 3. Tracking system with complicated model of true system dynamics tracking a planning system that plans with a very simple model.

Because we care about maximum tracking error, we care about maximum relative distance. So to solve this pursuit-evasion game we must first determine the relative dynamics between the two systems by fixing the planner at the origin and determining the dynamics of the tracker relative to the planner. We then specify a cost function as the distance to this origin, i.e. relative distance of tracker to the planner, as seen in Fig. 4. The tracker will try to minimize this cost, and the planner will try to maximize it. While evolving these optimal trajectories over time, we capture the highest cost that occurs over the time period. If the tracker can always eventually catch up to the planner, this cost converges to a fixed cost for all time.

The smallest invariant level set of the converged value function determines the tracking error bound, as seen in Fig. 5. Moreover, the gradient of the converged value function can be used to create an optimal error-feedback control policy for the tracker to pursue the planner. We used Ian Mitchell’s Level Set Toolbox and Reachability Analysis to solve this differential game. For a more thorough explanation of the optimization, please see our recent paper from the 2017 IEEE Conference on Decision and Control.
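
In symbols, with our own simplified notation (see the paper for the precise formulation): let r be the relative state between tracker and planner and let the cost be the distance of r’s position to the origin. The game computes

    V(r, T) = \min_{u_{\mathrm{track}}(\cdot)} \; \max_{u_{\mathrm{plan}}(\cdot)} \;
              \max_{t \in [0, T]} \ell\bigl(r(t)\bigr),
    \qquad \ell(r) = \lVert \mathrm{pos}(r) \rVert

and, if the value converges to V_\infty as T grows, the tracking error bound is the smallest invariant sublevel set

    \mathcal{B} = \{\, r : V_\infty(r) \le \underline{V} \,\},
    \qquad \underline{V} = \min_r V_\infty(r),

while the optimal tracking controller chooses the control that descends \nabla V_\infty along the relative dynamics.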

gif4 gif5
Figures 4 & 5: On the left we show the value function initializing at the cost function (distance to origin) and evolving according to the differential game. On the right we should 3D and 2D slices of this value function. Each slice can be thought of as a “candidate tracking error bound.” Over time, some of these bounds become infeasible to stay within. The smallest invariant level set of the converged value function provides us with the tightest tracking error bound that is feasible.

Online Real-Time Planning

In the online phase, we sense obstacles within a given sensing radius and imagine expanding these obstacles by the tracking error bound with a Minkowski sum. Using these padded obstacles, the motion planner decides its next desired state. Based on that relative state between the tracker and planner, the optimal control for the tracker (autonomous system) is determined from the lookup table. The autonomous system executes the optimal control, and the process repeats until the goal has been reached. This means that the motion planner can continue to plan quickly, and by simply augmenting obstacles and using a lookup table for control we can ensure safety!
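
A minimal sketch of that loop in Python; every helper here (`sense_obstacles`, `minkowski_sum`, `plan_next_state`, `relative_state`, `apply_control`, `project_to_planner`, `at_goal`, and the precomputed `control_table`) is a hypothetical stand-in for the corresponding FaSTrack component:

    def fastrack_online_loop(tracker_state, goal, teb, control_table):
        # Online phase: plan against TEB-padded obstacles, then track the
        # plan with the precomputed optimal error-feedback controller.
        planner_state = project_to_planner(tracker_state)
        while not at_goal(planner_state, goal):
            obstacles = sense_obstacles(tracker_state)        # sensing radius
            padded = [minkowski_sum(o, teb) for o in obstacles]
            planner_state = plan_next_state(planner_state, goal, padded)
            rel = relative_state(tracker_state, planner_state)
            u = control_table.lookup(rel)    # optimal control from table
            tracker_state = apply_control(tracker_state, u)

Note that the expensive game is solved entirely offline; the online loop only pads obstacles and indexes a table, which is what keeps it real-time.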

gif6
Figure 6. MATLAB simulation of a 10D near-hover quadrotor model (blue line) “pursuing” a 3D planning model (green dot) that is using RRT to plan. As new obstacles are discovered (turning red), the RRT plans a new path towards the goal. Based on the relative state between the planner and the autonomous system, the optimal control can be found via look-up table. Even when the RRT planner makes sudden turns, we are guaranteed to stay within the tracking error bound (blue box).

Reducing Conservativeness through Meta-Planning

One consequence of formulating the safe tracking problem as a pursuit-evasion game between the planner and the tracker is that the resulting safe tracking bound is often rather conservative. That is, the tracker can’t guarantee that it will be close to the planner if the planner is always allowed to do the worst possible behavior. One solution is to use multiple planning models, each with its own tracking error bound, simultaneously at planning time. The resulting “meta-plan” is composed of trajectory segments computed by each planner, each labelled with the appropriate optimal controller to track trajectories generated by that planner. This is illustrated in Fig. 7, where the large blue error bound corresponds to a planner which is allowed to move very quickly and the small red bound corresponds to a planner which moves more slowly.

fig7
Figure 7. By considering two different planners, each with a different tracking error bound, our algorithm is able to find a guaranteed safe “meta-plan” that prefers the less precise but faster-moving blue planner but reverts to the more precise but slower red planner in the vicinity of obstacles. This leads to natural, intuitive behavior that optimally trades off planner conservatism with vehicle maneuvering speed.

Safe Switching

The key to making this work is to ensure that all transitions between planners are safe. This can get a little complicated, but the main idea is that a transition between two planners — call them A and B — is safe if we can guarantee that the invariant set computed for A is contained within that for B. For many pairs of planners this is true, e.g. switching from the blue bound to the red bound in Fig. 7. But often it is not. In general, we need to solve a dynamic game very similar to the original one in FaSTrack, but where we want to know the set of states that we will never leave and from which we can guarantee we end up inside B’s invariant set. Usually, the resulting safe switching bound (SSB) is slightly larger than A’s tracking error bound (TEB), as shown below.
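
Stated compactly, in our notation rather than the paper’s: the switch from A to B is immediately safe when A’s invariant set sits inside B’s,

    \mathcal{B}_A \subseteq \mathcal{B}_B,

and otherwise we solve the extra game for a safe switching bound \mathrm{SSB}_{A \to B}: the set of relative states the tracker never leaves and from which it is guaranteed to enter \mathcal{B}_B. It satisfies \mathcal{B}_A \subseteq \mathrm{SSB}_{A \to B} and, as Fig. 8 shows, is typically slightly larger than \mathcal{B}_A.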

fig8
Figure 8. The safe switching bound for a transition from a planner with a large tracking error bound to one with a small tracking error bound is generally larger than the large tracking error bound, as shown.

Efficient Online Meta-Planning

To do this efficiently in real time, we use a modified version of the classical RRT algorithm. Usually, RRTs work by sampling points in state space and connecting them with line segments to form a tree rooted at the start point. In our case, we replace the line segments with the actual trajectories generated by individual planners. In order to find the shortest route to the goal, we favor planners that can move more quickly, trying them first and only resorting to slower-moving planners if the faster ones fail.
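
In sketch form (the planner interface and the fastest-first ordering are our assumptions about the implementation, not the authors’ API):

    def extend_meta_tree(tree, sample, planners):
        # One step of the modified RRT: connect the nearest node to the
        # sampled point with an actual planner trajectory, trying faster
        # planners first and falling back to slower, more precise ones.
        nearest = tree.nearest(sample)
        for planner in sorted(planners, key=lambda p: p.speed, reverse=True):
            traj = planner.plan(nearest.state, sample)
            if traj is not None and is_collision_free(traj, planner.teb):
                # Label the edge with the controller that tracks this planner.
                tree.add_edge(nearest, traj, controller=planner.controller)
                return True
        return False   # every planner failed; the caller resamples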

We do have to be careful to ensure safe switching bounds are satisfied, however. This is especially important in cases where the meta-planner decides to transition to a more precise, slower-moving planner, as in the example above. In such cases, we implement a one-step virtual backtracking algorithm in which we make sure the preceding trajectory segment is collision-free using the switching controller.

Implementation

We implemented both FaSTrack and Meta-Planning in C++ / ROS, using low-level motion planners from the Open Motion Planning Library (OMPL). Simulated results are shown below, with (right) and without (left) our optimal controller. As you can see, simply using a linear feedback (LQR) controller (left) provides no guarantees about staying inside the tracking error bound.

fig09 fig10
Figures 9 & 10. (Left) A standard LQR controller is unable to keep the quadrotor within the tracking error bound. (Right) The optimal tracking controller keeps the quadrotor within the tracking bound, even during radical changes in the planned trajectory.

It also works on hardware! We tested on the open-source Crazyflie 2.0 quadrotor platform. As you can see in Fig. 12, we manage to stay inside the tracking bound at all times, even when switching planners.

f11 f12
Figures 11 & 12. (Left) A Crazyflie 2.0 quadrotor being observed by an OptiTrack motion capture system. (Right) Position traces from a hardware test of the meta planning algorithm. As shown, the tracking system stays within the tracking error bound at all times, even during the planner switch that occurs approximately 4.5 seconds after the start.

This article was initially published on the BAIR blog, and appears here with the authors’ permission.

This post is based on the following papers:

  • FaSTrack: a Modular Framework for Fast and Guaranteed Safe Motion Planning
    Sylvia Herbert*, Mo Chen*, SooJean Han, Somil Bansal, Jaime F. Fisac, and Claire J. Tomlin
    Paper, Website

  • Planning, Fast and Slow: A Framework for Adaptive Real-Time Safe Trajectory Planning
    David Fridovich-Keil*, Sylvia Herbert*, Jaime F. Fisac*, Sampada Deglurkar, and Claire J. Tomlin
    Paper, Github (code to appear soon)

We would like to thank our coauthors; developing FaSTrack has been a team effort and we are incredibly fortunate to have a fantastic set of colleagues on this project.

Artificial muscles give soft robots superpowers

Origami-inspired artificial muscles are capable of lifting up to 1,000 times their own weight, simply by applying air or water pressure. Credit: Shuguang Li / Wyss Institute at Harvard University

By Lindsay Brownell

Soft robotics has made leaps and bounds over the last decade as researchers around the world have experimented with different materials and designs to allow once rigid, jerky machines to bend and flex in ways that mimic living organisms and let them interact with those organisms more naturally. However, increased flexibility and dexterity come with a trade-off: softer materials are generally not as strong or resilient as inflexible ones, which limits their use.

Now, researchers at the Wyss Institute at Harvard University and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created origami-inspired artificial muscles that give much-needed strength to soft robots, allowing them to lift objects up to 1,000 times their own weight using only air or water pressure. The study is published this week in Proceedings of the National Academy of Sciences (PNAS).

“We were very surprised by how strong the actuators [aka, “muscles”] were. We expected they’d have a higher maximum functional weight than ordinary soft robots, but we didn’t expect a thousand-fold increase. It’s like giving these robots superpowers,” says Daniela Rus, Ph.D., the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and one of the senior authors of the paper.

“Artificial muscle-like actuators are one of the most important grand challenges in all of engineering,” adds Rob Wood, Ph.D., corresponding author of the paper and Founding Core Faculty member of the Wyss Institute, who is also the Charles River Professor of Engineering and Applied Sciences at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS). “Now that we have created actuators with properties similar to natural muscle, we can imagine building almost any robot for almost any task.”

Each artificial muscle consists of an inner “skeleton” that can be made of various materials, such as a metal coil or a sheet of plastic folded into a certain pattern, surrounded by air or fluid and sealed inside a plastic or textile bag that serves as the “skin.” A vacuum applied to the inside of the bag initiates the muscle’s movement by causing the skin to collapse onto the skeleton, creating tension that drives the motion. Incredibly, no other power source or human input is required to direct the muscle’s movement; it is determined entirely by the shape and composition of the skeleton.

“One of the key aspects of these muscles is that they’re programmable, in the sense that designing how the skeleton folds defines how the whole structure moves. You essentially get that motion for free, without the need for a control system,” says first author Shuguang Li, Ph.D., a Postdoctoral Fellow at the Wyss Institute and MIT CSAIL. This approach allows the muscles to be very compact and simple, and thus more appropriate for mobile or body-mounted systems that cannot accommodate large or heavy machinery.


“When creating robots, one always has to ask, ‘Where is the intelligence – is it in the body, or in the brain?’” says Rus. “Incorporating intelligence into the body (via specific folding patterns, in the case of our actuators) has the potential to simplify the algorithms needed to direct the robot to achieve its goal. All these actuators have the same simple on/off switch, which their bodies then translate into a broad range of motions.”

The team constructed dozens of muscles using materials ranging from metal springs to packing foam to sheets of plastic, and experimented with different skeleton shapes to create muscles that can contract down to 10% of their original size, lift a delicate flower off the ground, and twist into a coil, all simply by sucking the air out of them.

The structural geometry of the artificial muscle skeleton determines the muscle’s motion. Credit: Shuguang Li / Wyss Institute at Harvard University

Not only can the artificial muscles move in many ways, they do so with impressive resilience. They can generate about six times more force per unit area than mammalian skeletal muscle can, and are also incredibly lightweight; a 2.6-gram muscle can lift a 3-kilogram object, which is the equivalent of a mallard duck lifting a car. Additionally, a single muscle can be constructed within ten minutes using materials that cost less than $1, making them cheap and easy to test and iterate.
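
The duck-and-car image is just the weight ratio made vivid; as a quick check,

    \frac{3\,\mathrm{kg}}{2.6\,\mathrm{g}} = \frac{3000}{2.6} \approx 1150,

so the muscle lifts roughly 1,150 times its own weight, consistent with the “up to 1,000 times” figure quoted above.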

These muscles can be powered by a vacuum, a feature that makes them safer than most of the other artificial muscles currently being tested. “A lot of the applications of soft robots are human-centric, so of course it’s important to think about safety,” says Daniel Vogt, M.S., co-author of the paper and Research Engineer at the Wyss Institute. “Vacuum-based muscles have a lower risk of rupture, failure, and damage, and they don’t expand when they’re operating, so you can integrate them into closer-fitting robots on the human body.”

“In addition to their muscle-like properties, these soft actuators are highly scalable. We have built them at sizes ranging from a few millimeters up to a meter, and their performance holds up across the board,” Wood says. This feature means that the muscles can be used in numerous applications at multiple scales, such as miniature surgical devices, wearable robotic exoskeletons, transformable architecture, deep-sea manipulators for research or construction, and large deployable structures for space exploration.

The team was even able to construct the muscles out of the water-soluble polymer PVA, which opens the possibility of robots that can perform tasks in natural settings with minimal environmental impact, as well as ingestible robots that move to the proper place in the body and then dissolve to release a drug. “The possibilities really are limitless. But the very next thing I would like to build with these muscles is an elephant robot with a trunk that can manipulate the world in ways that are as flexible and powerful as you see in real elephants,” Rus says.

“The actuators developed through this collaboration between the Wood laboratory at Harvard and Rus group at MIT exemplify the Wyss’ approach of taking inspiration from nature without being limited by its conventions, which can result in systems that not only imitate nature, but surpass it,” says the Wyss Institute’s Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at SEAS.

This research was funded by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Wyss Institute for Biologically Inspired Engineering.

The stories we tell about technology: AI Narratives

By Susannah Odell and Natasha McCarthy

Technology narratives

The nature, promise and risks of new technologies enter into our shared thinking through narrative – explicit or implicit stories about the technologies and their place in our lives. These narratives can determine what is salient about the technologies, influencing how they are represented in media, culture and everyday discussion. The narratives can influence the dynamics of concern and aspiration across society; the ways and the contexts in which different groups and individuals become aware of and respond to mainstream, new and emerging technologies. The narratives available at a particular point in time, and who tells them, can affect the course of technology development and uptake in subtle ways.

Whilst stories about artificial intelligence have been around for centuries, the way we think about AI is evolving. The Royal Society and Leverhulme Centre for the Future of Intelligence are exploring the ways that narratives about AI might be influencing the development of the technology.

The longevity and influence of narratives

Exploring different technology areas can show how explicit framings of a technology – how it is presented to the wider world – can be influential and long-lasting in this respect. For example, in nuclear energy, Lewis Strauss, then Chairman of the US Atomic Energy Commission, stated in 1954 that nuclear power would create energy “too cheap to meter”. What turned out to be over-promising for this technology shaped the arguments of sceptics, and this image continues to be used by those who critique the technology. Early scientific optimism can create inadvertent and unexpected milestones that may – rightly or wrongly – influence how technology is perceived when those milestones are not met. Such framings can be hard to shake off and can dominate more complex and subtle considerations.

Diverse visions: aspirations and concerns

Instead of promoting single framings for technologies and their applications, sowing the seeds early on for multiple voices to be heard can promote diverse narratives and ensure that the technology develops in line with genuine societal needs. A greater diversity of both actors in the development of the technology and diversity in the stories we tell about AI may elucidate new uses and governance needs. This requires extensive and continued public dialogue; the Royal Society’s public dialogue on machine learning recently explored how these views can be context specific.

Encouraging credible, trustworthy and independent communicators who do not stand to benefit personally from the technology can create more realistic narratives around new technologies and science, especially when combined with greater scientific transparency and self-correction. Comprehensive scenario planning can build trustworthy narratives, helping to analyse possible worst case accident scenarios and substantially reduce future risk.

Widening the narratives on AI

Diversifying today’s stories about AI, to ensure that they reflect current technological development, will give us better ideas of how AI can be used to transform our lives. Dominant narratives focus on anthropomorphised AI, but the reality of AI includes systems that are distributed, embedded in complex systems and found in varied applications, such as helping doctors detect breast cancer or increasing the responsiveness of emergency services to flooding incidents. Adding to existing narratives with stories from underrepresented voices can also help us, as citizens, policy-makers and scientists, to imagine new opportunities and expand our assessment of how AI should be regarded, regulated and harnessed for the best possible economic and societal outcomes.

This is why the Leverhulme Centre for the Future of Intelligence and the Royal Society are exploring how visions and narratives are shaping perceptions, the development of intelligent technology and trust in its use. Find more information.

So where are the jobs?

Dan Burstein, reporter, novelist and successful venture capitalist, declared Wednesday night at RobotLab’s winter forum on Autonomous Transportation & SmartCities that within one hundred years the majority of jobs in the USA (and the world) could disappear, transferring the mantle of work from humans to machines. Burstein cautioned the audience that unless governments address the threat of millions of unemployable humans with a wider safety net, democracy could fail. The wisdom of one of the world’s most successful venture investors did not fall on deaf ears.

In their book, Only Humans Need Apply, Thomas Davenport and Julia Kirby also warn that humans are too easily ceding their future to machines. “Many knowledge workers are fearful. We should be concerned, given the potential for these unprecedented tools to make us redundant. But we should not feel helpless in the midst of the large-scale change unfolding around us,” state Davenport and Kirby. The premise of their book is not to deny the disruption by automation, but to empower its readers with the knowledge of where jobs are going to be created in the new economy. The authors suggest that robots should be looked at as augmenting humans, not replacing them. “Look for ways to help humans perform their most human and most valuable work better,” say Davenport and Kirby. The book suggests that in order for human society to survive long-term, a new social contract has to be drawn up between employer and employee. The authors optimistically predict that corporations’ efforts to keep human workers employable will become part of their “social license to operate.”


In every industrial movement since the invention of the power loom and cotton gin there have been great transfers of wealth and jobs. History is riddled with the fear of the unknown that is countered by the unwavering human spirit of invention. As societies evolve, pressured by the adoption of technology, it will be the progressive thinkers embracing change who will lead movements of people to new opportunities.

Burstein’s remarks were very timely, as this past week McKinsey & Company released a new report entitled Jobs lost, jobs gained: Workforce transitions in a time of automation. McKinsey’s analysis took a global view of 46 countries that comprise 90% of the world’s Gross Domestic Product, and then focused in particular on six industrial countries with varying income levels: China, Germany, India, Japan, Mexico, and the United States. The introduction explains, “For each, we modeled the potential net employment changes for more than 800 occupations, based on different scenarios for the pace of automation adoption and for future labor demand.” The report ultimately concludes where the new jobs will be in 2030.

A prominent factor in the McKinsey report is that over the next thirteen years global consumption is anticipated to grow by $23 trillion. Based upon this trajectory, the report estimates that between 300 million and 450 million new jobs would be generated worldwide, especially in the Far East. In addition, dramatic shifts in demographics and the environment will lead to the expansion of jobs in healthcare, IT consulting, construction, infrastructure, clean energy, and domestic services. The report estimates that automation will displace between 400 and 800 million people by 2030, forcing 75 to 375 million working men and women to switch professions and learn new skills. In order to survive, populations will have to embrace the concept of lifelong learning. Based upon the math, more than half of the displaced will be unemployable in their current professions.


According to McKinsey, a bright spot for employment could be the 80 to 200 million new jobs created by modernizing aging municipalities into “Smart Cities.” McKinsey’s conclusions were echoed by Rhonda Binder and Mitchell Kominsky of Venture Smarter in their RobotLab presentation last week. Binder presented her experiences with turning around the Jamaica Business Improvement District of Queens, New York, by implementing her “Three T’s Strategy – Tourism, Transportation and Technology.” Binder stated that cities offer the perfect laboratory for autonomous systems and sensor-based technologies to improve the quality of life of residents. To support these endeavors, a hiring surge of urban technologists, architects, civil engineers, and construction workers across all trades could ensue in the next decade.

This is further validated by Google subsidiary Sidewalk Labs’ recent partnership with Toronto, Canada, to redevelop and digitize 800 acres of the city’s waterfront property. Dan Doctoroff, Chief Executive Officer of Sidewalk Labs, explained that the goal of the partnership is to prove what is possible by building a digital city within an existing city to test out autonomous transportation, new communication access, healthcare delivery solutions and a variety of urban planning technologies. Doctoroff’s sentiment was endorsed by Canadian Prime Minister Justin Trudeau, who said that “Sidewalk Toronto will transform Quayside [the waterfront] into a thriving hub for innovation and a community for tens of thousands of people to live, work, and play.” The access to technology not only offers the ability to improve the quality of living within the city, but also fosters an influx of sustainable jobs for decades.

In addition to the jobs created by updating crumbling infrastructure, aging populations will drive an increase in global healthcare services, particularly demand for in-home caregivers and aid workers. According to McKinsey, there will be over 300 million people globally over 65 years old by 2030, leading to 50 to 80 million new jobs. Geriatric medicine is leading new research in artificial intelligence and robotics for aging-in-place populations, demanding more doctors, nurses, medical technicians, and personal aides.


Aging will not be the only challenge facing the planet. Global warming could lead to an explosion of jobs in turning back the clock on climate change. The rush to develop advances in renewable energy worldwide is already generating billions of dollars of new investment and demand for high-skill and manual labor. As an example, the American solar industry employed a record-high 260,077 workers in late 2016, a growth of at least 20% over the past four years. In New York alone, the state saw a 7% uptick in 2016, to close to 150,000 clean-energy jobs. McKinsey stated that by 2030, tens of millions of new professions could be created in developing, manufacturing and installing energy-efficient innovations.

McKinsey also estimates that automation itself will bring new employment, with corporate-technology spending hitting record highs. While the number of jobs added to support the deployment of machines is smaller than in the other industries above, these occupations offer higher wages. Robots could potentially create 20 to 50 million new “grey collar” professions globally. In addition, re-training workers for these professions could lead to a new workforce of 100 million educators.


The report does not shy away from the fact that a major disruption is on the horizon for labor. In fact, the authors hope that by researching pockets of positive growth, humans will not be helpless victims. As Devin Fidler of the Institute for the Future suggests, “As basic automation and machine learning move toward becoming commodities, uniquely human skills will become more valuable. There will be an increasing economic incentive to develop mass training that better unlocks this value.”

A hundred years ago the world experienced a dramatic shift from an agrarian lifestyle to manufacturing. Since then, there have been revolutions in mass transportation and communications. No one could have predicted the massive transfer of jobs from the fields to the urban factories at the beginning of the twentieth century. Likewise, it is difficult to know what the next hundred years have in store for human, and robot, kind.

Warner Brothers and Intel experiment with in-robocar entertainment. Is that a good idea?

Intel and Warner made a splash at the LA Auto Show by announcing that Warner will develop entertainment for viewing while riding in robotaxis. It’s not just movies to watch; their hope is to produce something more like an amusement park ride to keep you engaged on your journey.

Like most partnership announcements around robocars, this one is mainly there for PR, since they haven’t built anything yet. The idea is both interesting and hyped.

I’ll start with the negative. I think people will carry their entertainment with them in their pockets, and not want it from their cars. Why would I want a different music system with a different interface when my own music and videos are already curated by me and stored in my phone? All I really want is a speaker and screen to display them on.

This is becoming very clear on planes, where I prefer to watch movies I have pre-downloaded on my phone rather than what is on the bigger screen of the in-flight entertainment system. There are several reasons for that:

  • The UIs on most in-flight systems suck really, really badly. I mean it’s amazing how bad most of them are. (Turns out there is a reason for that.) Cars will probably do it better but the history is not promising.
  • Your personal device is usually newer with more advanced technology because you replace it every 2 years. You have curated the content in it and know the interface.
  • On airplanes in particular, they believe rules force them to pause your experience so that they can announce that duty free sales are now open in 3 languages. And 20 or more other interruptions, only a couple of which are actually important to hear for an experienced flyer.

So Warner is wise in putting a focus on doing something you can’t do with your personal gear, such as a VR experience, or immersive screens around the car. There is a unique opportunity to tune the VR experience to the actual motions of the car. In traffic, you can only tune to the needed motions. On the open road, you might actually be able to program a trip that deliberately slows or speeds up or turns when nobody else is around to make a cool VR experience.

While that might be nice, it would be mostly a gimmick, more like a ride you try once. I don’t think people will want to go everywhere in the batmobile. As such it will be more of a niche, or marketing trick.


More interesting is the ability to reduce carsickness with audio-visual techniques. Some people get pretty queasy if they look down for a long time at a book or laptop. Others are less bothered. A phone held in the hand seems to be easier to use for most than something heavier, perhaps because it moves with the motion of the car. For many years I have proposed that cars communicate their upcoming plans with subtle audio or visual cues so that people know when they are about to turn or slow down. Some experiments are now being reported on this and it will be interesting to see the results.

If you ride on a subway, bus or commuter train today, the scene is now always the same. A row of people, all staring at their phones.

Advertising

Some commenters have speculated that another goal here may be to present advertising to hapless taxi passengers. After all, ride a New York cab and many others and you will see an annoying video loop playing. Each time you have to go through the menus to mute the volume. With hailed taxis, you can’t shop, and so they can also get away with doing this — what are you going to do, get out of the cab and wait for the next one?

I hope that with mobile-phone hail, competition prevents this sort of attempt to monetize the time of the customer. I definitely want my peace and quiet, and the revenue from the advertising — typically well under a dollar an hour — can’t possibly offset that for me.

Videos from the International Conference on Robot Learning

Credit: Melanie Saldana

The first Conference on Robot Learning (CoRL) took place in mid-November in Mountain View, California.

You can now watch all the videos online, including talks by J. Andrew Bagnell (CMU), Rodney Brooks (Rethink Robotics, MIT), Anca Dragan (UC Berkeley), Yann LeCun (Facebook, NYU) and Stefanie Tellex (Brown University).

We’ll also be posting Robohub Podcast interviews done at the conference – so stay tuned!

November fundings, acquisitions and IPOs


Twenty-two different startups were funded in November, cumulatively raising $782 million, down a bit from the $862 million in October. The big $400 million funding for UBTech and the $55 million for TuSimple were the only two fundings over $50 million, and they were both for Chinese startups with funding from Chinese VC firms.

Six acquisitions were reported during the month including another takeover of a European robotics company by a Chinese one.

On the IPO front, three already publicly traded companies announced additional share offerings to raise further funds.

Fundings:

  1. Ubtech, a Shenzhen-based humanoid robot maker, raised $400 million in a Series C round led by Tencent Holdings (which invested $40 million in the round). Ubtech (Union Brothers Technology) builds and sells toy robots. Its most recent is a $300 Star Wars Stormtrooper robot, which will ship just before the movie debuts in mid-December.
  2. TuSimple, a Chinese startup providing autonomous driving technology for the trucking industry, raised $55 million in a Series C round led by Fuhe Capital with Zhiping Capital and SINA Corp. Note that TuSimple raised $20 million in August in a Series B round.
  3. Markforged, a Watertown, MA maker of carbon fiber and metal 3D printing devices, raised $30 million in a Series C round led by Siemens’ next47 venture firm. Microsoft Ventures and Porsche SE also participated along with existing investors Matrix Partners, North Bridge Venture Partners, and Trinity Ventures.
  4. Kindred Systems, a Toronto warehousing AI startup, raised $27.5 million in a Series B round led by First Round Capital with participation by Tencent Holdings and Eclipse. Kindred is building human-like intelligence in machines. Its first commercial offering is Kindred Sort, a put-wall integration of arm, gripper and software to pick and sort random objects.
  5. Locus Robotics, an Andover, MA-based provider of mobile robots for e-commerce fulfillment warehouses, raised $25 million in a Series B funding led by Scale Venture Partners with participation by existing investors.
  6. Optimus Ride, an MIT spinoff company developing self-driving technology, raised $18 million in Series A funding. Greycroft Partners led the round, and was joined by investors including Emerson Collective, Fraser McCombs Capital and Joi Ito.
  7. Bossa Nova, a San Francisco-based developer of autonomous service robots for the global retail industry, raised $17.5 million in Series B funding. Paxion led the round, and was joined by investors including Intel Capital, WRV Capital, Lucas Venture Group, and Cota Capital.
  8. Riverfield Surgical Robot Lab, a Japanese startup and spin-off from the Tokyo Institute of Technology, raised $10 million in a Series B round from Toray Engineering, SBI Investment and JAFCO Japan.
  9. Arbe Robotics, an Israeli radar collision avoidance platform, raised $9 million in a Series A round. O.G. Tech Ventures and OurCrowd led the round, with previous investors Canaan Partners, iAngels, and Taya Ventures also participating. Arbe is also developing radar for autonomous vehicles that facilitates real-time mapping at distances up to 300 meters.
  10. AUBO Robotics (prev named Smokie), a Chinese co-bot manufacturer, raised $9 million in a Series A round by Fosun RZ Capital. “When we decided to manufacture in China we had to be incorporated in China to get the incentives. They had to have a name change because the laws in China state that the name be a Chinese name. AUBO or AU BO loosely translated means ‘New Technology’. The AUBO-i5 production is in Changzhou and we also have R&D in Beijing and Knoxville TN,” said Aubo’s VP of Sales Peter Farkas.
  11. Beijing TechX Aviation Innovation, a Chinese drone startup for military and high-end industrial users, raised $7.5 million in a Series A round from Fosun RZ Capital.
  12. Leju Robotics, a Shenzhen startup developing humanoid robots for the service industry, raised $7.4 million (in August) from Green Pine Capital Partners and Tencent.
  13. Embodied Intelligence, an Emeryville, CA startup developing teaching AI for existing robots, raised $7 million in a seed round led by Amplify Partners with participation from Lux Capital, SV Angel, FreeS Capital, 11.2 Capital and A.Capital.
  14. Apis Cor, a Moscow startup using a massive robotic 3D device for printing concrete, raised $6 million (in September) in a seed round from Rusnano Sistema Sicar venture fund.
  15. Rokae, a Chinese startup making lightweight/lightload industrial robot arms, raised $6 million in a Series A round from THG Ventures, the venture arm of Tsinghua Holdings, and Delin Capital.
  16. AerDrone Intl, a startup of Irelandia Aviation Drones, both of Dublin, raised $5 million in seed funding from Irelandia to provide leasing funding for drone users.
  17. GJS Robot, a Shenzhen startup making personalized fighting robots known as Ganker robots, raised an undisclosed Series A investment estimated to be $5 million, from Tencent Holdings.
  18. Catalia Health, a San Francisco-based healthcare startup providing an AI-powered patient engagement platform, raised $4 million in pre-Series A funding. Ion Pacific led the round.
  19. Tortuga AgTech, a Colorado ag robotics startup, raised $2.6 million (in September) from SVG Partners and Thrive AgTech Ventures. Tortuga is developing robotics for indoor farming operations.
  20. Ceres Imaging, the Oakland, CA aerial imagery and analytics company, raised an additional $2.5 million for their May, 2017 Series A round (which raised $5M). Romulus Capital was the sole investor.
  21. Wink Robotics, a startup focused on using robotics, AI and machine vision for the beauty industry, raised $1.73 million (in August) in seed funding from undisclosed sources.
  22. Natilus, a San Jose, Calif.-based maker of large aircraft drones to haul freight, raised seed funding of an undisclosed amount. Investors included Starburst Ventures, Seraph Group, Gelt VC, Outpost Capital and Draper Associates.

Acquisitions:

  • Dash Robotics, a Hayward, CA connected toys developer, acquired Austin, TX-based Bots Alive, a robotics and AI hobby kit and toy developer, for an undisclosed amount.
  • Huachangda Intelligent Equipment, a Chinese industrial robot integrator servicing primarily the auto industry, has acquired Swedish Robot System Products (RSP) for an undisclosed amount. RSP manufactures grippers, welding systems, tool changers and other peripheral products for robots.
  • Atronix Engineering, a GA-based industrial robot system integrator, was acquired by MHS (Material Handling Systems), a KY-based integrator of material handling systems, for an undisclosed amount. In April, 2017, MHS was itself acquired by Thomas H. Lee Partners, a Boston-based equity fund.
  • Mapbox, a DC and SF nav systems provider for car companies, acquired a 4-person Minsk, Belarus mapping startup, MapData, for an undisclosed amount. Just last month Mapbox raised $164 million in a round led by the SoftBank Vision Fund. The deal spearheads the hiring of more engineers to help build its next big product, an SDK that will let developers build augmented reality-based maps into their apps that will work by way of the front-facing cameras on people’s devices.
  • Argo AI, a Pittsburgh autonomous vehicles and AI startup, using some of the $1 billion it raised from Ford, acquired Princeton Lightwave for an undisclosed amount. Princeton is a New Jersey manufacturer of real-time Geiger-mode LiDAR technology.
  • Tesla acquired Perbix, a Minnesota integrator of automated machines and industrial robotics that had been a contract supplier to Tesla for many years, for around $10.5 million.

IPOs:

  • Titan Medical (TSE:TMD), a Canadian single-port surgical robot device maker, announced an offering of shares for a minimum of $14,000,000 and a maximum of $18,000,000. Titan will issue shares at a price of CDN $0.50 per Unit, and each Unit consists of one common share and one warrant, exercisable for one Common Share at a price of CDN $0.60, for a period of 5 years following the closing of the Offering.
  • Myomo (NYSEMKT:MYO), a Cambridge, MA-based exoskeleton provider, announced an offering of 1.5 million shares of common stock, plus warrants to purchase an additional 750,000 shares, at a price of $6.25 per share (each share accompanied by one-half of a warrant). Myomo hopes to raise $8.5 million from the initial offering, with an additional $1.4 million from an underwriter’s option.
  • Fastbrick Robotics (ASX:FBR), an Australian brick-laying startup, raised $26.5 million by offering 184 million shares in a private placement. This is in addition to the $2 million investment in July by Caterpillar, which will be manufacturing, selling and servicing Fastbrick’s technology mounted on Caterpillar equipment.

Inertial-Grade MEMS Capacitive Accelerometers

Press Release by Silicon Designs:

Silicon Designs Introduces Inertial-Grade MEMS Capacitive Accelerometers
with Internal Temperature Sensor and Improved Low-Noise Performance
 
Five Full Standard G-Ranges from ±2 g to ±50 g Now Available for Immediate Customer Shipment
November 9, 2017, Kirkland, Washington, USA – Silicon Designs, Inc. (www.SiliconDesigns.com), a 100% veteran-owned, U.S.-based leading designer and manufacturer of highly rugged MEMS capacitive accelerometer chips and modules, today announced the immediate availability of its Model 1525 Series, a family of commercial and inertial-grade MEMS capacitive accelerometers offering best-in-class low-noise performance.
Design of the Model 1525 Series incorporates Silicon Designs’ own high-performance MEMS variable capacitive sense element, along with a ±4.0V differential analog output stage, internal temperature sensor and integral sense amplifier — all housed within a miniature, nitrogen damped, hermetically sealed, surface mounted J-lead LCC-20 ceramic package (U.S. Export Classification ECCN 7A994). The 1525 Series features low-power (+5 VDC, 5 mA) operation, excellent in-run bias stability, and zero cross-coupling. Five unique full-scale ranges, of ±2 g, ±5 g, ±10 g, ±25 g, and ±50 g, are currently in production and available for immediate customer shipment. Each MEMS accelerometer offers reliable performance over a standard operating temperature range of -40° C to +85° C. Units are also relatively insensitive to wide temperature changes and gradients. Each device is marked with a serial number on its top and bottom surfaces for traceability. A calibration test sheet is supplied with each unit, showing measured bias, scale factor, linearity, operating current, and frequency response.
Carefully regulated manufacturing processes ensure that each sensor is made to be virtually identical, allowing users to swap out parts in the same g-range with few to no testing modifications, further saving time and resources. This gives test engineers a quick plug-and-play solution for almost any application, with full confidence in sensor accuracy when units are used within published specifications. As the OEM of its own MEMS capacitive accelerometer chips and modules, Silicon Designs further ensures consistently high-quality products, with full in-house customization to customers’ exacting standards. This flexibility lets Silicon Designs expeditiously design, develop, and manufacture high-quality standard and custom MEMS capacitive accelerometers while keeping prices highly competitive.
Photo By: Silicon Designs – www.silicondesigns.com
The Silicon Designs Model 1525 Series tactical-grade MEMS inertial accelerometer family is ideal for zero-to-medium-frequency instrumentation applications that require high repeatability, low noise, and maximum stability, including tactical guidance systems; guidance, navigation and control (GN&C) systems; AHRS; unmanned aerial vehicles (UAVs); unmanned ground vehicles (UGVs); remotely operated vehicles (ROVs); robotic controllers; flight control systems; and marine- and land-based navigational systems. They may also be used to support critical industrial test requirements, such as those common to agricultural, oil and gas drilling, photographic, and meteorological drones, as well as seismic and inertial measurement.
Since 1983, privately held Silicon Designs has been a leading industry expert in the design, development, and manufacture of highly rugged MEMS capacitive accelerometers and chips with integrated amplification, operating from its state-of-the-art facility near Seattle, Washington, USA. From the company’s earliest days developing classified components for the United States Navy under a Small Business Innovation Research (SBIR) grant, to its later Tibbetts Award and induction into the Space Technology Hall of Fame, Silicon Designs applies nearly 35 years of MEMS R&D innovation and applications engineering expertise to all finished product designs. For additional information on the Model 1525 Series or other MEMS capacitive sensing technologies offered by Silicon Designs, visit www.silicondesigns.com.
-###-
About Silicon Designs, Inc.
Silicon Designs was founded in 1983 with the goal of improving the accepted design standard for traditional MEMS capacitive accelerometers. At that time, industrial-grade accelerometers were bulky, fragile, and costly. The engineering team at Silicon Designs listened to the needs of customers who required accelerometer modules and chips that were more compact, sensitive, rugged, and reasonably priced, while also offering higher performance. The resulting product lines were designed and built to surpass customer expectations. The company has grown steadily over the years while its core competency remains accelerometers, and it maintains its founding philosophies of “make it better, stronger, smaller and less expensive” and “let the customer drive R&D” to this day.

The post Inertial-Grade MEMS Capacitive Accelerometers appeared first on Roboticmagazine.

Report from the AI Race Avoidance Workshop

GoodAI and AI Roadmap Institute
Tokyo, ARAYA headquarters, October 13, 2017

Authors: Marek Rosa, Olga Afanasjeva, Will Millership (GoodAI)

Workshop participants: Olga Afanasjeva (GoodAI), Shahar Avin (CSER), Vlado Bužek (Slovak Academy of Science), Stephen Cave (CFI), Arisa Ema (University of Tokyo), Ayako Fukui (Araya), Danit Gal (Peking University), Nicholas Guttenberg (Araya), Ryota Kanai (Araya), George Musser (Scientific American), Seán Ó hÉigeartaigh (CSER), Marek Rosa (GoodAI), Jaan Tallinn (CSER, FLI), Hiroshi Yamakawa (Dwango AI Laboratory)

Summary

It is important to address the potential pitfalls of a race for transformative AI, where:

  • Key stakeholders, including the developers, may ignore or underestimate safety procedures or agreements in favor of faster deployment
  • The fruits of the technology won’t be shared by the majority of people to benefit humanity, but only by a select few

Race dynamics may develop regardless of the motivations of the actors. For example, actors may be aiming to develop transformative AI as fast as possible to help humanity, to achieve economic dominance, or simply to reduce development costs.

There is already an interest in mitigating potential risks. We are trying to engage more stakeholders and foster cross-disciplinary global discussion.

We held a workshop in Tokyo where we discussed many questions and raised new ones that will help guide further work.

The General AI Challenge Round 2: Race Avoidance will launch on 18 January 2018, to crowdsource mitigation strategies for risks associated with the AI race.

What we can do today:

  • Study and better understand the dynamics of the AI race
  • Figure out how to incentivize actors to cooperate
  • Build stronger trust in the global community by fostering discussions between diverse stakeholders (including individuals, groups, private and public sector actors) and being as transparent as possible in our own roadmaps and motivations
  • Avoid fearmongering around both AI and AGI, which could lead to overregulation
  • Discuss the optimal governance structure for AI development, including the advantages and limitations of various mechanisms such as regulation, self-regulation, and structured incentives
  • Call to action — get involved with the development of the next round of the General AI Challenge

Introduction

Research and development in fundamental and applied artificial intelligence is making encouraging progress. Within the research community, there is a growing effort to make progress toward artificial general intelligence (AGI). AI is being recognized as a strategic priority by a range of actors, including representatives of various businesses, private research groups, companies, and governments. This progress may lead to an apparent AI race, where stakeholders compete to be the first to develop and deploy a sufficiently transformative AI [1,2,3,4,5]. Such a system could be either an AGI, able to perform a broad set of intellectual tasks while continually improving itself, or a sufficiently powerful specialized AI.

“Business as usual” progress in narrow AI is unlikely to confer transformative advantages. This means that although competitive pressures will likely increase, which may harm cooperation around guiding the impacts of AI, such continued progress is unlikely to spark a “winner takes all” race. It is unclear whether AGI will be achieved in the coming decades, or whether specialized AIs would confer sufficient transformative advantages to precipitate a race of this nature. There seems to be less potential for a race among public actors trying to address current societal challenges; however, even in this domain there is a strong business interest, which may in turn lead to race dynamics. Therefore, at present it is prudent not to rule out any of these future possibilities.

The issue has been raised that such a race could create incentives to neglect either safety procedures or established agreements between key players for the sake of gaining first-mover advantage and controlling the technology [1]. Unless we find strong incentives for the various parties to cooperate, at least to some degree, there is also a risk that the fruits of transformative AI won’t be shared by the majority of people to benefit humanity, but only by a select few.

We believe that at the moment people present a greater risk than AI itself, and that fearmongering about AI risks in the media can only damage constructive dialogue.

Workshop and the General AI Challenge

GoodAI and the AI Roadmap Institute organized a workshop at the Araya office in Tokyo on October 13, 2017, to foster interdisciplinary discussion on how to avoid the pitfalls of such an AI race.

Workshops like this are also being used to help prepare the AI Race Avoidance round of the General AI Challenge, which will launch on 18 January 2018.

The worldwide General AI Challenge, founded by GoodAI, aims to tackle this difficult problem via citizen science, promote AI safety research beyond the boundaries of the relatively small AI safety community, and encourage an interdisciplinary approach.

Why are we doing this workshop and challenge?

With race dynamics emerging, we believe we are still at a time when key stakeholders can effectively address the potential pitfalls.

  • Primary objective: find a solution to problems associated with the AI race
  • Secondary objective: develop a better understanding of race dynamics, including issues of cooperation and competition, value propagation, value alignment, and incentivization. This knowledge can be used to shape the future of people, our team (or any team), and our partners. We can also learn to better align the value systems of members of our teams and alliances

It’s possible that through this process we won’t find an optimal solution, but rather a set of proposals that could move us a few steps closer to our goal.

This post follows on from a previous blog post and workshop, Avoiding the Precipice: Race Avoidance in the Development of Artificial General Intelligence [6].

Topics and questions addressed at the workshop

General question: How can we avoid AI research becoming a race between researchers, developers, companies, governments and other stakeholders, where:

  • Safety gets neglected or established agreements are defied
  • The fruits of the technology are not shared by the majority of people to benefit humanity, but only by a selected few

At the workshop, we focused on:

  • Better understanding and mapping the AI race: answering questions (see below) and identifying other relevant questions
  • Designing the AI Race Avoidance round of the General AI Challenge (creating a timeline, discussing potential tasks and success criteria, and identifying possible areas of friction)

We are continually updating the list of AI race-related questions (see appendix), which will be addressed further in the General AI Challenge, future workshops and research.

Below are some of the main topics discussed at the workshop.

1) How can we better understand the race?

  • Create and understand frameworks for discussing and formalizing AI race questions
  • Identify the general principles behind the race. Study meta-patterns from other races in history to help identify areas that will need to be addressed
  • Use first-principle thinking to break down the problem into pieces and stimulate creative solutions
  • Define clear timelines for discussion and clarify the motivation of actors
  • Value propagation is key. Whoever wants to advance needs to develop robust value-propagation strategies
  • Resource allocation is also key to maximizing the likelihood of propagating one’s values
  • Detailed roadmaps with clear targets and open-ended roadmaps (where progress is not measured by how close the state is to the target) are both valuable tools for understanding the race and attempting to solve issues
  • Can simulation games be developed to better understand the race problem? Shahar Avin is in the process of developing a “Superintelligence mod” for the video game Civilization 5, and Frank Lantz of the NYU Game Center came up with a simple game in which the player is an AI making paperclips (a minimal simulation sketch in this spirit follows this list)
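To make the idea of simulating race dynamics concrete, here is a toy Monte Carlo sketch loosely in the spirit of the “racing to the precipice” model of Armstrong et al. [1]. Every function name, payoff, and parameter below is an illustrative assumption of ours, not a model endorsed by the workshop: each team splits effort between capability and safety, the most capable team (plus luck) wins, and a winner that skimped on safety is more likely to produce a catastrophic outcome.

```python
# Toy Monte Carlo sketch of AI race dynamics (illustrative assumptions only,
# loosely inspired by Armstrong et al. [1], not the workshop's model).
import random

def race(safety_levels, noise=0.6):
    """One race: each team splits effort between capability and safety.
    Returns (index of winning team, whether the outcome is catastrophic)."""
    progress = [(1.0 - s) + random.uniform(0.0, noise) for s in safety_levels]
    winner = max(range(len(progress)), key=progress.__getitem__)
    # The less the winner invested in safety, the likelier a bad outcome.
    catastrophe = random.random() < (1.0 - safety_levels[winner])
    return winner, catastrophe

def estimate(safety_levels, trials=100_000):
    """Estimate each team's win rate and the overall disaster probability."""
    wins = [0] * len(safety_levels)
    disasters = 0
    for _ in range(trials):
        w, bad = race(safety_levels)
        wins[w] += 1
        disasters += bad
    return [w / trials for w in wins], disasters / trials

# A cautious team (60% effort on safety) vs. a corner-cutter (30%):
win_rates, p_disaster = estimate([0.6, 0.3])
print(win_rates, p_disaster)  # the riskier team wins most races
```

Even a crude model like this makes the incentive problem visible: the corner-cutting team wins most runs, which is precisely the dynamic that cooperation mechanisms would need to counteract.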

2) Is the AI race really a negative thing?

  • Competition is natural and we find it in almost all areas of life. It can encourage actors to focus, and it lifts up the best solutions
  • The AI race itself could be seen as a useful stimulus
  • It is perhaps not desirable to “avoid” the AI race but rather to manage or guide it
  • Are compromise and consensus good? If actors over-compromise, the end result could be too diluted to make an impact, and not exactly what anyone wanted
  • Unjustified negative escalation in the media around the race could lead to unnecessarily stringent regulations
  • As we see race dynamics emerge, the key question is whether the future will be aligned with most of humanity’s values. We must acknowledge that defining universal human values is challenging, considering that multiple viewpoints exist on even such fundamental values as human rights and privacy. This question should be addressed before attempting to align AI with a set of values

3) Who are the actors and what are their roles?

  • Who is not part of the discussion yet? Who should be?
  • The people who will implement AI race mitigation policies and guidelines will be the people working on them right now
  • The military and big companies will be involved, not necessarily because we want them to shape the future, but because they are key stakeholders
  • Which existing research and development centers, governments, states, intergovernmental organizations, companies and even unknown players will be the most important?
  • What is the role of media in the AI race, how can they help and how can they damage progress?
  • Future generations should also be recognized as stakeholders who will be affected by decisions made today
  • Regulation can be viewed as an attempt to limit future actors who are more intelligent or more powerful. To avoid conflict, it is therefore important to make sure that any necessary regulations are well thought through and beneficial for all actors

4) What are the incentives to cooperate on AI?

One of the exercises at the workshop was to analyze:

  • What are the motivations of key stakeholders?
  • What are the levers they have to promote their goals?
  • What could be their incentives to cooperate with other actors?

One of the prerequisites for effective cooperation is a sufficient level of trust:

  • How do we define and measure trust?
  • How can we develop trust among all stakeholders — inside and outside the AI community?

Predictability is an important factor. Actors who are open about their value system, transparent in their goals and ways of achieving them, and who are consistent in their actions, have better chances of creating functional and lasting alliances.
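As a formal lens on these questions (our illustration, not a framework used at the workshop), repeated interaction is the standard way game theory models how trust and predictability sustain cooperation. The sketch below plays an iterated prisoner’s dilemma with textbook payoffs: a predictable, conditionally cooperative strategy does well against its own kind, while unconditional defection gains little over the long run.

```python
# Illustrative only: an iterated prisoner's dilemma as a toy model of
# repeated interaction building (or eroding) trust between two actors.
# Payoffs and strategies are textbook assumptions, not from the workshop.

PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): defection barely pays
```

The analogy is loose, but it captures the point above: actors who behave consistently and transparently are easier to form lasting alliances with.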

5) How could the race unfold?

Workshop participants put forward multiple viewpoints on the nature of the AI race and a range of scenarios of how it might unfold.

As an example, below are two possible trajectories of the race to general AI:

  • Winner takes all: one dominant actor holds an AGI monopoly and is years ahead of everyone. This is likely to follow a path of transformative AGI (see diagram below).

Example: Similar technological advantages have played an important role in geopolitics in the past. For example, by 1900 Great Britain, with only 40 million people, had managed to capitalize on the advantage of technological innovation, creating an empire covering about one quarter of the Earth’s land and population [7].

  • Co-evolutionary development: many actors at a similar level of R&D racing incrementally towards AGI.

Example: This direction would be similar to the first stage of space exploration, when two actors (the Soviet Union and the United States) were developing and successfully deploying competing technologies.

Other considerations:

  • We could enter a race toward incrementally more capable narrow AI (not a “winner takes all” scenario; rather a race to grab AI talent)
  • We are in multiple races to have incremental leadership on different types of narrow AI. Therefore we need to be aware of different risks accompanying different races
  • The dynamics will be changing as different races evolve

The diagram below explores some of the potential pathways from the perspective of how the AI itself might look. It depicts beliefs about three possible directions in which the development of AI may progress. Roadmaps of assumptions about AI development, like this one, can be used to think about what steps we can take today to achieve a beneficial future, even under adversarial conditions and differing beliefs.

[Diagram: three possible AI development paths; see the legend below]

Legend:

  • Transformative AGI path: any AGI that will lead to dramatic and swift paradigm shifts in society. This is likely to be a “winner takes all” scenario.
  • Swiss Army Knife AGI path: a powerful (possibly decentralized) system made up of individual expert components, a collection of narrow AIs. Such an AGI scenario could mean more balance of power in practice (each stakeholder controlling their own domain of expertise, or component of the “knife”). This is likely to be a co-evolutionary path.
  • Narrow AI path: on this path, progress does not indicate proximity to AGI, and companies are likely to race to create the most powerful possible narrow AIs for various tasks.

Current race assumption in 2017

Assumption: We are in a race to incrementally more capable narrow AI (not a “winner takes all” scenario; rather a race to grab AI talent)

  • Counter-assumption: We are in a race to “incremental” AGI (not a “winner takes all” scenario)
  • Counter-assumption: We are in a race to recursive AGI (winner takes all)
  • Counter-assumption: We are in multiple races to have incremental leadership on different types of “narrow” AI

Foreseeable future assumption

Assumption: At some point (possibly in 15 years) we will enter a widely recognized race to a “winner takes all” scenario of recursive AGI

  • Counter-assumption: In 15 years, we continue an incremental race (not a “winner takes all” scenario) on narrow AI or non-recursive AGI
  • Counter-assumption: In 15 years, we enter a limited “winner takes all” race to certain narrow AI or non-recursive AGI capabilities
  • Counter-assumption: An overwhelming “winner takes all” outcome is avoided because the resources that support intelligence have a total upper limit

Other assumptions and counter-assumptions of race to AGI

Assumption: Developing AGI will take a large, well-funded, infrastructure-heavy project

  • Counter-assumption: A few key insights will be critical, and they could come from small groups. For example, Google Search was not invented inside a well-known established company; it started from scratch and revolutionized the landscape
  • Counter-assumption: Small groups can also layer key insights onto existing work of bigger groups

Assumption: AI/AGI development will require large datasets and face other limiting factors

  • Counter-assumption: AGI will be able to learn from real and virtual environments, and from a small number of examples, the same way humans can

Assumption: AGI and its creators will be easily controlled by limitations on money, political leverage and other factors

  • Counter-assumption: AGI can be used to generate money on the stock market

Assumption: Recursive improvement will proceed linearly or with diminishing returns (e.g., “learning to learn by gradient descent by gradient descent”)

  • Counter-assumption: At a certain point in generality and cognitive capability, recursive self-improvement may begin to improve more quickly than linearly, precipitating an “intelligence explosion”
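The gap between this assumption and its counter-assumption can be stated as a toy growth model (our illustration; the constants and functional forms are assumptions, not claims from the workshop). If recursive improvement adds capability at a fixed rate, growth stays linear; if the improvement rate scales with current capability $I$, growth is exponential:

$$\frac{dI}{dt} = c \;\Rightarrow\; I(t) = I_0 + ct, \qquad \frac{dI}{dt} = kI \;\Rightarrow\; I(t) = I_0 e^{kt}$$

An “intelligence explosion” corresponds to the regime where the effective $k$ itself increases with $I$, yielding faster-than-exponential growth.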

Assumption: Researcher talent will be the key limiting factor in AGI development

  • Counter-assumption: Government involvement, funding, infrastructure, computational resources and leverage are all also potential limiting factors

Assumption: AGI will be a singular broad-intelligence agent

  • Counter-assumption: AGI will be a set of modular components (each limited/narrow) but capable of generality in combination
  • Counter-assumption: AGI will be an even wider set of technological capabilities than the above

6) Why search for an AI race solution publicly?

  • Transparency allows everyone to learn about the topic; nothing is hidden, which leads to more trust
  • Inclusion — all people from across different disciplines are encouraged to get involved because it’s relevant to every person alive
  • If the race is taking place, we won’t achieve anything by not discussing it, especially if the aim is to ensure a beneficial future for everyone

Fear of an immediate threat is a big motivator to get people to act. However, behavioral psychology tells us that in the long term a more positive approach may work best to motivate stakeholders. Positive public discussion can also help avoid fearmongering in the media.

7) What future do we want?

  • Consensus might be hard to find and also might not be practical or desirable
  • AI race mitigation is basically insurance: a way to avoid unhappy futures (which may be easier than maximizing all happy futures)
  • Even those who think they will win may end up second, so it is beneficial for them to consider the race dynamics
  • In the future it is desirable to avoid the “winner takes all” scenario and make it possible for more than one actor to survive and utilize AI (in other words, it needs to be okay to come second in the race, or not to win at all)
  • One way to describe a desired future is one in which the happiness of each generation is greater than that of the previous generation

We are aiming to create a better future and make sure AI is used to improve the lives of as many people as possible [8]. However, it is difficult to envisage exactly what this future will look like.

One way of envisioning this could be to use a “veil of ignorance” thought experiment. If all the stakeholders involved in developing transformative AI assume they will not be the first to create it, or that they will not be involved at all, they are likely to create rules and regulations that are beneficial to humanity as a whole, rather than being blinded by their own self-interest.

AI Race Avoidance challenge

In the workshop we discussed the next steps for Round 2 of the General AI Challenge.

About the AI Race Avoidance round

  • Although this post has used the title AI Race Avoidance, the name is likely to change. As discussed above, we are not proposing to avoid the race, but rather to guide, manage, or mitigate its pitfalls. We will be working on a better title with our partners before the release.
  • The round has been postponed until 18 January 2018. The extra time allows more partners, and the public, to get involved in the design of the round to make it as comprehensive as possible.
  • The aim of the round is to raise awareness, discuss the topic, gather as diverse an idea pool as possible, and hopefully find a solution or set of solutions.

Submissions

  • The round is expected to run for several months, and can be repeated
  • Desired outcome: next steps or essays, proposed solutions, or frameworks for analyzing AI race questions
  • Submissions could be very open-ended
  • Submissions can include meta-solutions, ideas for future rounds, frameworks, and convergent or open-ended roadmaps at various levels of detail
  • Submissions must include a two-page summary and, if needed, a longer submission of unlimited length
  • No limit on number of submissions per participant

Judges and evaluation

  • We are actively trying to ensure diversity on our judging panel. We believe it is important to have people from different cultures, backgrounds, genders and industries representing a diverse range of ideas and values
  • The panel will judge submissions on how well they maximize the chances of a positive future for humanity
  • The specifications of this round are a work in progress

Next steps

  • Prepare for the launch of AI Race Avoidance round of the General AI Challenge in cooperation with our partners on 18 January 2018
  • Continue organizing workshops on AI race topics with participation of various international stakeholders
  • Promote cooperation: focus on establishing and strengthening trust among stakeholders across the globe. Transparency in goals facilitates trust. Just as we would trust an AI system whose decision-making is transparent and predictable, the same applies to humans

Call to action

At GoodAI we are open to new ideas about how the AI Race Avoidance round of the General AI Challenge should look. We would love to hear from you if you have any suggestions on how the round should be structured, or if you think we have missed any important questions in our list below.

In the meantime we would be grateful if you could share the news about this upcoming round of the General AI Challenge with anyone you think might be interested.

Appendix

More questions about the AI race

Below is a list of some more of the key questions we will expect to see tackled in Round 2: AI Race Avoidance of the General AI Challenge. We have split them into three categories: Incentive to cooperate, What to do today, and Safety and security.

Incentive to cooperate:

  • How to incentivize the AI race winner to obey any related previous agreements and/or share the benefits of transformative AI with others?
  • What is the incentive to enter and stay in an alliance?
  • We understand that cooperation is important in moving forward safely. However, what if other actors do not understand its importance, or refuse to cooperate? How can we guarantee a safe future if there are unknown non-cooperators?
  • Looking at the problem across different scales, the pain points are similar even at the level of internal team dynamics. We need to invent robust mechanisms for cooperation between individual team members, teams, companies, corporations, and governments. How do we do this?
  • When considering various incentives for safety-focused development, we need to find a robust incentive (or a combination of such) that would push even unknown actors towards beneficial AGI, or at least an AGI that can be controlled. How?

What to do today:

  • How to reduce the danger of regulation overshooting and unreasonable political control?
  • What role might states have in the future economy and which strategies are they assuming/can assume today, in terms of their involvement in AI or AGI development?
  • With regards to the AI weapons race, is a ban on autonomous weapons a good idea? What if other parties don’t follow the ban?
  • If regulation overshoots by creating unacceptable conditions for regulated actors, the actors may decide to ignore the regulation and bear the risk of potential penalties. For example, total prohibition of alcohol or gambling may displace those activities into illegal channels, while well-designed regulation can actually help reduce the most negative impacts, such as addiction.
  • AI safety research needs to be promoted beyond the boundaries of the small AI safety community and tackled in an interdisciplinary way. There needs to be active cooperation between safety experts, industry leaders, and states to avoid negative scenarios. How?

Safety and security:

  • What level of transparency is optimal and how do we demonstrate transparency?
  • Impact of openness: how open shall we be in publishing “solutions” to the AI race?
  • How do we stop the first developers of AGI from becoming a target?
  • How can we safeguard against malignant use of AI or AGI?

Related questions

  • What is the profile of a developer who can solve general AI?
  • Who is a bigger danger: people or AI?
  • How would the AI race winner use the newly gained power to dominate existing structures? Will they have a reason to interact with them at all?
  • Universal basic income?
  • Is there something beyond intelligence? Intelligence 2.0
  • End-game: convergence or open-ended?
  • What would an AGI creator desire, given the possibility of building an AGI within one month/year?
  • Are there any goods or services that an AGI creator would need immediately after building an AGI system?
  • What might be the goals of AGI creators?
  • What are the possibilities of those that develop AGI first without the world knowing?
  • What are the possibilities of those that develop AGI first while engaged in sharing their research/results?
  • What would make an AGI creator share their results, despite having the capability of mass destruction (e.g., paralyzing the Internet)? (The developer’s intentions might not be evil, but their defense against “nationalization” might logically be a show of force)
  • Are we capable of creating a model of cooperation in which the creator of an AGI would reap the most benefits while at the same time being protected from others? Does a scenario exist in which a software developer benefits monetarily from the free distribution of their software?
  • How do we prevent the usurpation of AGI by governments and armies (i.e., attempts at exclusive ownership)?

References

[1] Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI & Society, 31(2), 201–206.

[2] Baum, S. D. (2016). On the promotion of safe and socially beneficial artificial intelligence. AI & Society, 1–9.

[3] Bostrom, N. (2017). Strategic Implications of Openness in AI Development. Global Policy, 8(2), 135–148.

[4] Geist, E. M. (2016). It’s already too late to stop the AI arms race — We must manage it instead. Bulletin of the Atomic Scientists, 72(5), 318–321.

[5] Conn, A. (2017). Can AI Remain Safe as Companies Race to Develop It?

[6] AI Roadmap Institute (2017). Avoiding the Precipice: Race Avoidance in the Development of Artificial General Intelligence.

[7] Allen, G., & Chan, T. (2017). Artificial Intelligence and National Security. Report, Harvard Kennedy School, Harvard University, Boston, MA.

[8] Future of Life Institute (2017). Asilomar AI Principles, developed in conjunction with the 2017 Asilomar conference.

Report from the AI Race Avoidance Workshop was originally published in AI Roadmap Institute Blog on Medium.
