Roadmapping the AI race to help ensure safe development of AGI

This article accompanies a visual roadmap which you can view and download here.

Why are roadmaps important?

Roadmapping is a useful tool for looking into the future, mapping out different possible pathways, and identifying areas that might present opportunities or problems. The aim is to visualise different scenarios in order to prepare for, and avoid, those that might lead to an undesirable future or, even worse, disaster. It is also an exercise in visualising a desired future and finding the optimal path towards achieving it.

Introducing the roadmap

This roadmap depicts three hypothetical scenarios in the development of an artificial general intelligence (AGI) system, from the perspective of an imaginary company (C1). The main focus is on the AI race, where stakeholders strive to reach powerful AI, and its implications for safety. It maps out possible decisions made by key actors in various “states of affairs”, which lead to diverse outcomes. Traffic-light color coding is used to visualise the potential outcomes: green for positive, red for negative, and orange for outcomes in between.

The aim of this roadmap is not to present the viewer with all possible scenarios, but with a few vivid examples. The roadmap focuses primarily on AGI, which will presumably have transformative potential and would be able to dramatically affect society [1].

This roadmap intentionally ventures into some extreme scenarios to provoke discussion of AGI’s role in paradigm shifts.

Scenario 1 - AI race: dangerous AGI is developed

Assuming that the potential of AGI is so great that being the first to create it could confer an unprecedented advantage [2] [3], there is a possibility that an AGI could be deployed before it is adequately tested. In this scenario C1 creates AGI while others are still racing to complete the technology. This could lead to C1 becoming anxious, deploying the AGI before safety is assured, and losing control of it.

What happens next in this scenario would depend on the nature of the AGI created. If the recursive self-improvement of AGI continues too fast for developers to catch up, the future would be out of humanity’s hands. In this case, depending on the objectives and values of the AGI, it could lead to a doomsday scenario or a kind of coexistence, where some people manage to merge with the AGI and reap its benefits, and others not.

However, if the self-improvement rate of the AGI is not exponential, there may be enough maneuvering time to bring it back under control. The AGI might start to disrupt socio-economic structures [4], pushing affected groups into action. This could lead to some sort of AGI safety consortium, including C1, dedicated to developing and deploying safety measures to bring the technology under control. Such a consortium would be created out of necessity and would likely stay together to ensure AGI remains beneficial in the future. Once the AGI is under control, this could theoretically lead to a scenario where a powerful and safe AGI can be (re)created transparently.

Powerful and safe AGI

The powerful and safe AGI outcome can be reached from both scenarios 1 and 2 (see diagram). It is possible that some sort of powerful AGI prototype will go onto the market, and while it will not pose an existential threat, it will likely cause major societal disruptions and the automation of most jobs. This could create the need for a form of “universal basic income”, or an alternative model which enables the sharing of income and benefits of AGI among the population. For example, the general public could be able to claim their share in the new “AI economy” through mechanisms provided by an inclusive alliance (see below). Note that the role of governments as providers of public support programs might be significantly reduced unless the governments have access to AGI alongside powerful economic players. The traditional levers governments use to obtain resources, such as taxation, might not be sufficient in a new AI economy.

Scenario 2 — AI race: focus on safety

In this scenario AGI is seen as a powerful tool which will give its creator a major economic and societal advantage. It is not primarily considered here (as it is above) as an existential risk, but as a likely cause of many disruptions and shifts in power. Developers keep most research private, and alliances do not grow past superficial PR coalitions; however, a lot of work is done on AI safety. Two possible paths this scenario could take are a collaborative approach to development or a stealth one.

Collaborative approach

With various actors calling for collaboration on AGI development it is likely that some sort of consortium would develop. This could start off as an ad-hoc trust building exercise between a few players collaborating on “low stake” safety issues, but could develop into a larger international AGI co-development structure. Nowadays the way towards a positive scenario is being paved with notable initiatives including the Partnership on AI [5], IEEE working on ethically aligned design [6], the Future of Life Institute [7] and many more. In this roadmap a hypothetical organization of a global scale, where members collaborate on algorithms and safety (titled “United AI” analogous to United Nations), is used as an example. This is more likely to lead to the “Powerful and safe AGI” state described above, as all available global talent would be dedicated, and could contribute, to safety features and testing.

Stealth approach

The opposite could also happen: developers could work in stealth, still doing safety work internally, but trust between organizations would not be strong enough to foster collaborative efforts. This could go down many different paths. The roadmap focuses on what might happen if multiple AGIs with different owners emerge around the same time, or if C1 has a monopoly over the technology.

Multiple AGIs
Multiple AGIs could emerge around the same time. This could be due to a “leak” in the company, other companies getting close at the same time, or if AGI is voluntarily given away by its creators.

This path also has various possible outcomes depending on the creators’ goals. We could reach a “war of AGIs” where the different actors battle it out for absolute control. However, we could also find ourselves in a situation of stability, similar to the post-WW2 world, where a separate AGI economy with multiple actors develops and begins to function. This could lead to two parallel worlds of people who have access to AGI and those who don’t, or even those who merge with AGI, creating a society of AGI “gods”. This again could lead to greater inequality, or to an economy of abundance, depending on the motivations of the AGI “gods” and whether they choose to share the fruits of AGI with the rest of humanity.

AGI monopoly
If C1 manages to keep AGI within its walls through team culture and security measures, events could unfold in a number of ways. If C1 had bad intentions, it could use the AGI to conquer the world, which would be similar to the “war of AGIs” (above), although the competition would be unlikely to stand a chance against such powerful technology. It could also lead to the other two end states above: if C1 decides to share the fruits of the technology with humanity, we could see an economy of abundance, and if it doesn’t, society will likely be very unequal. The roadmap also explores another possibility: C1 has no interest in this world and continues to operate in stealth once AGI is created. With the potential of the technology, C1 could leave Earth and begin to explore the universe without anyone noticing.

Scenario 3 - smooth transition from narrow AI to AGI

This scenario sees a gradual transition from narrow AI to AGI. Along the way, infrastructure is built up and power shifts are slower and more controlled. We are already seeing narrow AI occupy our everyday lives throughout the economy and society, with manual jobs becoming increasingly automated [8] [9]. This progression may give rise to a narrow AI safety consortium which focuses on narrow AI applications. This model of narrow AI safety and regulation could be used as a trust-building space for players who will go on to develop AGI. However, actors who pursue AGI exclusively and choose not to develop narrow AI technologies might be left out of this scheme.

As jobs become increasingly automated, governments will need to secure more resources (through taxation or other means) to support the affected people. This gradual increase in support could lead to a universal basic income, or a similar model (as outlined above). Eventually AGI would be reached and again the end states would depend on the motivation of the creator.

What did we learn?

Although this roadmap is not a comprehensive outline of all possible scenarios, it is useful for demonstrating some possibilities and giving us ideas of what we should be focusing on now.

Collaboration

Looking at the roadmap, it seems evident that one of the keys to avoiding a doomsday scenario, or a war of AGIs, is collaboration between key actors and the creation of some sort of AI safety consortium, or even an international AI co-development structure with stronger ties between actors (“United AI”). In the first scenario we saw the creation of a consortium out of necessity, after C1 lost control of the technology. In the other two scenarios, however, we see examples of how a safety consortium could help control development and avoid undesirable scenarios. A consortium directed towards safety, but also human well-being, could also help avoid large inequalities in the future and promote an economy of abundance. Nevertheless, identifying the right incentives to cooperate at each point in time remains one of the biggest challenges.

Universal basic income, universal basic dividend, or similar

Another theme that seems inevitable in an AI or AGI economy is a shift towards a “jobless society” in which machines do the majority of jobs. A state where, due to automation, most of the world’s population is out of work is something that needs to be planned for. Whether the answer is a universal basic income, a universal basic dividend [10] distributed from a social wealth fund invested in equities and bonds, or a similar model that compensates for the societal changes, the shift needs to be gradual to avoid large-scale disruption and chaos. The above-mentioned consortium could also focus on the societal transition to this new system. Check out this post if you would like to read more on AI and the future of work.
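To make the dividend idea concrete, a back-of-the-envelope calculation might look like the sketch below. All figures (fund size, return rate, payout ratio, population) are purely illustrative assumptions, not a policy proposal or numbers from the cited work.

```python
def basic_dividend_per_person(fund_value, annual_return, payout_ratio, population):
    """Annual universal basic dividend paid out of a social wealth fund.

    fund_value    -- market value of the fund's equity/bond portfolio
    annual_return -- expected yearly return on the portfolio (e.g. 0.05 for 5%)
    payout_ratio  -- fraction of returns distributed rather than reinvested
    population    -- number of eligible recipients
    All inputs here are hypothetical, for illustration only.
    """
    return fund_value * annual_return * payout_ratio / population

# e.g. a $10 trillion fund returning 5%, paying out 80% to 300 million people
dividend = basic_dividend_per_person(10e12, 0.05, 0.8, 300e6)
```

Under these invented numbers the dividend comes to a few thousand dollars per person per year, which illustrates why such a scheme would need to scale up gradually alongside the AI economy.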

Solving the AI race

The roadmap demonstrates the implications of a technological race towards AI, and while competition is known to fuel innovation, we should be aware of the risks associated with the race and seek paths to avoid them (e.g. through increasing trust and collaboration). The topic of the AI race has been explored further in the General AI Challenge set up by GoodAI, where participants with different backgrounds from around the world have submitted their risk mitigation proposals. Proposals varied in their definition of the race as well as in their methods for mitigating its pitfalls. They included methods of self-regulation for organisations, international coordination, risk management frameworks and many more. You can find the six prize-winning entries at https://www.general-ai-challenge.org/ai-race. We encourage readers to give us feedback and build on the ideas developed in the challenge.

References

[1] Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI & SOCIETY, 31(2), 201–206.

[2] Allen, G., & Chan, T. (2017). Artificial Intelligence and National Security, Technical Report. Harvard Kennedy School, Harvard University, Boston, MA.

[3] Bostrom, N. (2017). Strategic Implications of Openness in AI Development. Global Policy 8: 135–148.

[4] Brundage, M., Shahar, A., Clark, J., Allen, G., Flynn, C., Farquhar, S., Crootof, R., & Bryson, J. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

[5] Partnership on AI. (2016). Industry Leaders Establish Partnership on AI Best Practices

[6] IEEE. (2017). IEEE Releases Ethically Aligned Design, Version 2 to show “Ethics in Action” for the Development of Autonomous and Intelligent Systems (A/IS)

[7] Tegmark, M. (2014). The Future of Technology: Benefits and Risks

[8] Havrda, M. & Millership, W. (2018). AI and work — a paradigm shift?. GoodAI blog Medium.

[9] Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Ko, R., & Sanghvi, S. (2017). Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages. Report from McKinsey Global Institute.

[10] Bruenig, M. (2017). Social Welfare Fund for America. People’s Policy Project.


Roadmapping the AI race to help ensure safe development of AGI was originally published in AI Roadmap Institute Blog on Medium.

Report from the AI Race Avoidance Workshop

GoodAI and AI Roadmap Institute
Tokyo, ARAYA headquarters, October 13, 2017

Authors: Marek Rosa, Olga Afanasjeva, Will Millership (GoodAI)

Workshop participants: Olga Afanasjeva (GoodAI), Shahar Avin (CSER), Vlado Bužek (Slovak Academy of Science), Stephen Cave (CFI), Arisa Ema (University of Tokyo), Ayako Fukui (Araya), Danit Gal (Peking University), Nicholas Guttenberg (Araya), Ryota Kanai (Araya), George Musser (Scientific American), Seán Ó hÉigeartaigh (CSER), Marek Rosa (GoodAI), Jaan Tallinn (CSER, FLI), Hiroshi Yamakawa (Dwango AI Laboratory)

Summary

It is important to address the potential pitfalls of a race for transformative AI, where:

  • Key stakeholders, including developers, may ignore or underestimate safety procedures or agreements in favor of faster utilization
  • The fruits of the technology won’t be shared by the majority of people to benefit humanity, but only by a selected few

Race dynamics may develop regardless of the motivations of the actors. For example, actors may be aiming to develop a transformative AI as fast as possible to help humanity, to achieve economic dominance, or even to reduce costs of development.

There is already an interest in mitigating potential risks. We are trying to engage more stakeholders and foster cross-disciplinary global discussion.

We held a workshop in Tokyo where we discussed many questions and came up with new ones which will help facilitate further work.

The General AI Challenge Round 2: Race Avoidance will launch on 18 January 2018, to crowdsource mitigation strategies for risks associated with the AI race.

What we can do today:

  • Study and better understand the dynamics of the AI race
  • Figure out how to incentivize actors to cooperate
  • Build stronger trust in the global community by fostering discussions between diverse stakeholders (including individuals, groups, private and public sector actors) and being as transparent as possible in our own roadmaps and motivations
  • Avoid fearmongering around both AI and AGI which could lead to overregulation
  • Discuss the optimal governance structure for AI development, including the advantages and limitations of various mechanisms such as regulation, self-regulation, and structured incentives
  • Call to action — get involved with the development of the next round of the General AI Challenge

Introduction

Research and development in fundamental and applied artificial intelligence is making encouraging progress. Within the research community, there is a growing effort to make progress towards general artificial intelligence (AGI). AI is being recognized as a strategic priority by a range of actors, including representatives of various businesses, private research groups, companies, and governments. This progress may lead to an apparent AI race, where stakeholders compete to be the first to develop and deploy a sufficiently transformative AI [1,2,3,4,5]. Such a system could be either AGI, able to perform a broad set of intellectual tasks while continually improving itself, or sufficiently powerful specialized AIs.

“Business as usual” progress in narrow AI is unlikely to confer transformative advantages. This means that although we are likely to see an increase in competitive pressures, which may have negative impacts on cooperation around guiding the impacts of AI, such continued progress is unlikely to spark a “winner takes all” race. It is unclear whether AGI will be achieved in the coming decades, or whether specialised AIs would confer sufficient transformative advantages to precipitate a race of this nature. There seems to be less potential for a race among public actors trying to address current societal challenges. However, even in this domain there is a strong business interest, which may in turn lead to race dynamics. Therefore, at present it is prudent not to rule out any of these future possibilities.

The issue has been raised that such a race could create incentives to neglect safety procedures or established agreements between key players for the sake of gaining first-mover advantage and controlling the technology [1]. Unless we find strong incentives for various parties to cooperate, at least to some degree, there is also a risk that the fruits of transformative AI won’t be shared by the majority of people to benefit humanity, but only by a selected few.

We believe that at the moment people present a greater risk than AI itself, and that fearmongering around AI risks in the media can only damage constructive dialogue.

Workshop and the General AI Challenge

GoodAI and the AI Roadmap Institute organized a workshop in the Araya office in Tokyo, on October 13, 2017, to foster interdisciplinary discussion on how to avoid pitfalls of such an AI race.

Workshops like this are also being used to help prepare the AI Race Avoidance round of the General AI Challenge which will launch on 18 January 2018.

The worldwide General AI Challenge, founded by GoodAI, aims to tackle this difficult problem via citizen science, promote AI safety research beyond the boundaries of the relatively small AI safety community, and encourage an interdisciplinary approach.

Why are we doing this workshop and challenge?

With race dynamics emerging, we believe we are still at a time where key stakeholders can effectively address the potential pitfalls.

  • Primary objective: find a solution to problems associated with the AI race
  • Secondary objective: develop a better understanding of race dynamics including issues of cooperation and competition, value propagation, value alignment and incentivisation. This knowledge can be used to shape the future of people, our team (or any team), and our partners. We can also learn to better align the value systems of members of our teams and alliances

It’s possible that through this process we won’t find an optimal solution, but rather a set of proposals that could move us a few steps closer to our goal.

This post follows on from a previous blogpost and workshop Avoiding the Precipice: Race Avoidance in the Development of Artificial General Intelligence [6].

Topics and questions addressed at the workshop

General question: How can we avoid AI research becoming a race between researchers, developers, companies, governments and other stakeholders, where:

  • Safety gets neglected or established agreements are defied
  • The fruits of the technology are not shared by the majority of people to benefit humanity, but only by a selected few

At the workshop, we focused on:

  • Better understanding and mapping the AI race: answering questions (see below) and identifying other relevant questions
  • Designing the AI Race Avoidance round of the General AI Challenge (creating a timeline, discussing potential tasks and success criteria, and identifying possible areas of friction)

We are continually updating the list of AI race-related questions (see appendix), which will be addressed further in the General AI Challenge, future workshops and research.

Below are some of the main topics discussed at the workshop.

1) How can we better understand the race?

  • Create and understand frameworks for discussing and formalizing AI race questions
  • Identify the general principles behind the race. Study meta-patterns from other races in history to help identify areas that will need to be addressed
  • Use first-principle thinking to break down the problem into pieces and stimulate creative solutions
  • Define clear timelines for discussion and clarify the motivation of actors
  • Value propagation is key. Whoever wants to advance, needs to develop robust value propagation strategies
  • Resource allocation is also key to maximizing the likelihood of propagating one’s values
  • Detailed roadmaps with clear targets and open-ended roadmaps (where progress is not measured by how close the state is to the target) are both valuable tools to understanding the race and attempting to solve issues
  • Can simulation games be developed to better understand the race problem? Shahar Avin is in the process of developing a “Superintelligence mod” for the video game Civilization 5, and Frank Lantz of the NYU Game Center came up with a simple game where the user is an AI developing paperclips
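As a hypothetical illustration of how such a simulation might be framed, the sketch below implements a toy Monte Carlo race model loosely in the spirit of the development-race model of Armstrong et al. [1]: each team trades capability for safety, the most capable team deploys first, and disaster risk grows with the winner's skipped precautions. All parameters and the specific functional forms are illustrative assumptions, not figures or formulas discussed at the workshop.

```python
import random

def simulate_race(safety_levels, n_trials=10000, seed=0):
    """Toy Monte Carlo sketch of a development race (assumed model).

    Each team i draws a random underlying capability and has chosen a safety
    level s_i in [0, 1].  Effective capability is capability * (1 - s_i),
    i.e. safety work slows development down.  The team with the highest
    effective capability deploys first, and a disaster then occurs with
    probability (1 - s_winner).  Returns the estimated disaster probability.
    """
    rng = random.Random(seed)
    n_teams = len(safety_levels)
    disasters = 0
    for _ in range(n_trials):
        capabilities = [rng.random() for _ in range(n_teams)]
        effective = [c * (1 - s) for c, s in zip(capabilities, safety_levels)]
        winner = max(range(n_teams), key=lambda i: effective[i])
        if rng.random() < (1 - safety_levels[winner]):
            disasters += 1
    return disasters / n_trials

# Comparing a cautious field of teams with a reckless one: competitive
# pressure that pushes safety levels down should raise estimated risk.
careful = simulate_race([0.9, 0.9, 0.9])
reckless = simulate_race([0.1, 0.1, 0.1])
```

Even a crude model like this reproduces one qualitative point from the discussion: as racing teams shave off safety effort to win, the estimated probability of a bad outcome for everyone rises.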

2) Is the AI race really a negative thing?

  • Competition is natural and we find it in almost all areas of life. It can encourage actors to focus, and it lifts up the best solutions
  • The AI race itself could be seen as a useful stimulus
  • It is perhaps not desirable to “avoid” the AI race but rather to manage or guide it
  • Is compromise and consensus good? If actors over-compromise, the end result could be too diluted to make an impact, and not exactly what anyone wanted
  • Unjustified negative escalation in the media around the race could lead to unnecessarily stringent regulations
  • As we see race dynamics emerge, the key question is if the future will be aligned with most of humanity’s values. We must acknowledge that defining universal human values is challenging, considering that multiple viewpoints exist on even fundamental values such as human rights and privacy. This is a question that should be addressed before attempting to align AI with a set of values

3) Who are the actors and what are their roles?

  • Who is not part of the discussion yet? Who should be?
  • The people who will implement AI race mitigation policies and guidelines will be the people working on them right now
  • Military and big companies will be involved. Not because we necessarily want them to shape the future, but they are key stakeholders
  • Which existing research and development centers, governments, states, intergovernmental organizations, companies and even unknown players will be the most important?
  • What is the role of media in the AI race, how can they help and how can they damage progress?
  • Future generations should also be recognized as stakeholders who will be affected by decisions made today
  • Regulation can be viewed as an attempt to limit the future more intelligent or more powerful actors. Therefore, to avoid conflict, it’s important to make sure that any necessary regulations are well thought-through and beneficial for all actors

4) What are the incentives to cooperate on AI?

One of the exercises at the workshop was to analyze:

  • What are motivations of key stakeholders?
  • What are the levers they have to promote their goals?
  • What could be their incentives to cooperate with other actors?

One of the prerequisites for effective cooperation is a sufficient level of trust:

  • How do we define and measure trust?
  • How can we develop trust among all stakeholders — inside and outside the AI community?

Predictability is an important factor. Actors who are open about their value system, transparent in their goals and ways of achieving them, and who are consistent in their actions, have better chances of creating functional and lasting alliances.

5) How could the race unfold?

Workshop participants put forward multiple viewpoints on the nature of the AI race and a range of scenarios of how it might unfold.

As an example, below are two possible trajectories of the race to general AI:

  • Winner takes all: one dominant actor holds an AGI monopoly and is years ahead of everyone. This is likely to follow a path of transformative AGI (see diagram below).

Example: Similar technological advantages have played an important role in geopolitics in the past. For example, by 1900 Great Britain, with only 40 million people, had managed to capitalise on the advantage of technological innovation, creating an empire covering about one quarter of the Earth’s land and population [7].

  • Co-evolutionary development: many actors on a similar level of R&D racing incrementally towards AGI.

Example: This direction would be similar to the first stage of space exploration, when two actors (the Soviet Union and the United States) were developing, and successfully putting to use, competing technologies.

Other considerations:

  • We could enter a race towards incrementally more capable narrow AI (not a “winner takes all” scenario: grab AI talent)
  • We are in multiple races to have incremental leadership on different types of narrow AI. Therefore we need to be aware of different risks accompanying different races
  • The dynamics will be changing as different races evolve

The diagram below explores some of the potential pathways from the perspective of how the AI itself might look. It depicts beliefs about three possible directions that the development of AI may progress in. Roadmaps of assumptions of AI development, like this one, can be used to think of what steps we can take today to achieve a beneficial future even under adversarial conditions and different beliefs.

Click here for full-size image

Legend:

  • Transformative AGI path: any AGI that will lead to dramatic and swift paradigm shifts in society. This is likely to be a “winner takes all” scenario.
  • Swiss Army Knife AGI path: a powerful (possibly decentralized) system made up of individual expert components, a collection of narrow AIs. Such an AGI scenario could mean a greater balance of power in practice (each stakeholder would control their domain of expertise, or components of the “knife”). This is likely to be a co-evolutionary path.
  • Narrow AI path: in this path, progress does not indicate proximity to AGI, and we are likely to see companies racing to create the most powerful possible narrow AIs for various tasks.

Current race assumption in 2017

Assumption: We are in a race to incrementally more capable narrow AI (not a “winner takes all” scenario: grab AI talent)

  • Counter-assumption: We are in a race to “incremental” AGI (not a “winner takes all” scenario)
  • Counter-assumption: We are in a race to recursive AGI (winner takes all)
  • Counter-assumption: We are in multiple races to have incremental leadership on different types of “narrow” AI

Foreseeable future assumption

Assumption: At some point (possibly 15 years) we will enter a widely-recognised race to a “winner takes all” scenario of recursive AGI

  • Counter-assumption: In 15 years, we continue incremental (not a “winner takes all” scenario) race on narrow AI or non-recursive AGI
  • Counter-assumption: In 15 years, we enter a limited “winner takes all” race to certain narrow AI or non-recursive AGI capabilities
  • Counter-assumption: The overwhelming “winner takes all” is avoided by the total upper limit of available resources that support intelligence

Other assumptions and counter-assumptions of race to AGI

Assumption: Developing AGI will take a large, well-funded, infrastructure-heavy project

  • Counter-assumption: A few key insights will be critical, and they could come from small groups. For example, Google Search was not invented inside a well-known established company but started from scratch and revolutionized the landscape
  • Counter-assumption: Small groups can also layer key insights onto existing work of bigger groups

Assumption: AI/AGI will require large datasets and other limiting factors

  • Counter-assumption: AGI will be able to learn from real and virtual environments and a small number of examples the same way humans can

Assumption: AGI and its creators will be easily controlled by limitations on money, political leverage and other factors

  • Counter-assumption: AGI can be used to generate money on the stock market

Assumption: Recursive improvement will proceed linearly or with diminishing returns (e.g. learning to learn by gradient descent by gradient descent)

  • Counter-assumption: At a certain point in generality and cognitive capability, recursive self-improvement may begin to improve more quickly than linearly, precipitating an “intelligence explosion”

Assumption: Researcher talent will be the key limiting factor in AGI development

  • Counter-assumption: Government involvement, funding, infrastructure, computational resources and leverage are all also potential limiting factors

Assumption: AGI will be a singular broad-intelligence agent

  • Counter-assumption: AGI will be a set of modular components (each limited/narrow) but capable of generality in combination
  • Counter-assumption: AGI will be an even wider set of technological capabilities than the above

6) Why search for AI race solution publicly?

  • Transparency allows everyone to learn about the topic, nothing is hidden. This leads to more trust
  • Inclusion — all people from across different disciplines are encouraged to get involved because it’s relevant to every person alive
  • If the race is taking place, we won’t achieve anything by not discussing it, especially if the aim is to ensure a beneficial future for everyone

Fear of an immediate threat is a big motivator to get people to act. However, behavioral psychology tells us that in the long term a more positive approach may work best to motivate stakeholders. Positive public discussion can also help avoid fearmongering in the media.

7) What future do we want?

  • Consensus might be hard to find and also might not be practical or desirable
  • AI race mitigation is basically insurance: a way to avoid unhappy futures (which may be easier than maximizing all happy futures)
  • Even those who think they will be a winner may end up second, and thus it’s beneficial for them to consider the race dynamics
  • In the future it is desirable to avoid the “winner takes all” scenario and make it possible for more than one actor to survive and utilize AI (or in other words, it needs to be okay to come second in the race or not to win at all)
  • One way to describe a desired future is where the happiness of each next generation is greater than the happiness of a previous generation

We are aiming to create a better future and make sure AI is used to improve the lives of as many people as possible [8]. However, it is difficult to envisage exactly what this future will look like.

One way of envisioning this could be to use a “veil of ignorance” thought experiment. If all the stakeholders involved in developing transformative AI assume they will not be the first to create it, or that they would not be involved at all, they are likely to create rules and regulations which are beneficial to humanity as a whole, rather than be blinded by their own self interest.

AI Race Avoidance challenge

In the workshop we discussed the next steps for Round 2 of the General AI Challenge.

About the AI Race Avoidance round

  • Although this post has used the title AI Race Avoidance, it is likely to change. As discussed above, we are not proposing to avoid the race but rather to guide, manage or mitigate the pitfalls. We will be working on a better title with our partners before the release.
  • The round has been postponed until 18 January 2018. The extra time allows more partners, and the public, to get involved in the design of the round to make it as comprehensive as possible.
  • The aim of the round is to raise awareness, discuss the topic, get as diverse an idea pool as possible and hopefully to find a solution or a set of solutions.

Submissions

  • The round is expected to run for several months, and can be repeated
  • Desired outcome: next steps, essays, proposed solutions, or frameworks for analyzing AI race questions
  • Submissions could be very open-ended
  • Submissions can include meta-solutions, ideas for future rounds, frameworks, convergent or open-ended roadmaps with various level of detail
  • Submissions must include a two-page summary and, if needed, a longer submission of unlimited length
  • No limit on number of submissions per participant

Judges and evaluation

  • We are actively trying to ensure diversity on our judging panel. We believe it is important to have people from different cultures, backgrounds, genders and industries representing a diverse range of ideas and values
  • The panel will judge the submissions on how well they maximize the chances of a positive future for humanity
  • Specifications of this round are work in progress

Next steps

  • Prepare for the launch of AI Race Avoidance round of the General AI Challenge in cooperation with our partners on 18 January 2018
  • Continue organizing workshops on AI race topics with participation of various international stakeholders
  • Promote cooperation: focus on establishing and strengthening trust among stakeholders across the globe. Transparency in goals facilitates trust: just as we would trust an AI system whose decision-making is transparent and predictable, the same applies to humans

Call to action

At GoodAI we are open to new ideas about how the AI Race Avoidance round of the General AI Challenge should look. We would love to hear from you if you have any suggestions on how the round should be structured, or if you think we have missed any important questions on our list below.

In the meantime we would be grateful if you could share the news about this upcoming round of the General AI Challenge with anyone you think might be interested.

Appendix

More questions about the AI race

Below is a list of some more of the key questions we will expect to see tackled in Round 2: AI Race Avoidance of the General AI Challenge. We have split them into three categories: Incentive to cooperate, What to do today, and Safety and security.

Incentive to cooperate:

  • How to incentivise the AI race winner to obey any related previous agreements and/or share the benefits of transformative AI with others?
  • What is the incentive to enter and stay in an alliance?
  • We understand that cooperation is important in moving forward safely. However, what if other actors do not understand its importance, or refuse to cooperate? How can we guarantee a safe future if there are unknown non-cooperators?
  • Looking at the problems across different scales, the pain points are similar even at the level of internal team dynamics. We need to invent robust mechanisms for cooperation between: individual team members, teams, companies, corporations and governments. How do we do this?
  • When considering various incentives for safety-focused development, we need to find a robust incentive (or a combination of such) that would push even unknown actors towards beneficial AGI, or at least an AGI that can be controlled. How?

What to do today:

  • How to reduce the danger of regulation over-shooting and unreasonable political control?
  • What role might states have in the future economy and which strategies are they assuming/can assume today, in terms of their involvement in AI or AGI development?
  • With regards to the AI weapons race, is a ban on autonomous weapons a good idea? What if other parties don’t follow the ban?
  • If regulation overshoots by creating unacceptable conditions for regulated actors, the actors may decide to ignore the regulation and bear the risk of potential penalties. For example, total prohibition of alcohol or gambling may displace these activities into illegal areas, while well-designed regulation can actually help reduce the most negative impacts, such as developing addiction.
  • AI safety research needs to be promoted beyond the boundaries of the small AI safety community and tackled interdisciplinarily. There needs to be active cooperation between safety experts, industry leaders and states to avoid negative scenarios. How?

Safety and security:

  • What level of transparency is optimal and how do we demonstrate transparency?
  • Impact of openness: how open shall we be in publishing “solutions” to the AI race?
  • How do we stop the first developers of AGI becoming a target?
  • How can we safeguard against malignant use of AI or AGI?

Related questions

  • What is the profile of a developer who can solve general AI?
  • Who is a bigger danger: people or AI?
  • How would the AI race winner use the newly gained power to dominate existing structures? Will they have a reason to interact with them at all?
  • Universal basic income?
  • Is there something beyond intelligence? Intelligence 2.0
  • End-game: convergence or open-ended?
  • What would an AGI creator desire, given the possibility of building an AGI within one month/year?
  • Are there any goods or services that an AGI creator would need immediately after building an AGI system?
  • What might be the goals of AGI creators?
  • What are the possibilities of those that develop AGI first without the world knowing?
  • What are the possibilities of those that develop AGI first while engaged in sharing their research/results?
  • What would make an AGI creator share their results, despite having the capability of mass destruction (e.g. Internet paralysis)? (The developer’s intentions might not be evil, but their defense against “nationalization” might logically be a show of force)
  • Are we capable of creating such a model of cooperation in which the creator of an AGI would reap the most benefits, while at the same time be protected from others? Does a scenario exist in which a software developer monetarily benefits from free distribution of their software?
  • How to prevent usurpation of AGI by governments and armies? (i.e. an attempt at exclusive ownership)

References

[1] Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI & SOCIETY, 31(2), 201–206.

[2] Baum, S. D. (2016). On the promotion of safe and socially beneficial artificial intelligence. AI and Society (2011), 1–9.

[3] Bostrom, N. (2017). Strategic Implications of Openness in AI Development. Global Policy, 8(2), 135–148.

[4] Geist, E. M. (2016). It’s already too late to stop the AI arms race — We must manage it instead. Bulletin of the Atomic Scientists, 72(5), 318–321.

[5] Conn, A. (2017). Can AI Remain Safe as Companies Race to Develop It?

[6] AI Roadmap Institute (2017). AVOIDING THE PRECIPICE: Race Avoidance in the Development of Artificial General Intelligence.

[7] Allen, G., & Chan, T. (2017). Artificial Intelligence and National Security. Report, Harvard Kennedy School, Harvard University, Boston, MA.

[8] Future of Life Institute. (2017). ASILOMAR AI PRINCIPLES developed in conjunction with the 2017 Asilomar conference.

Report from the AI Race Avoidance Workshop was originally published in AI Roadmap Institute Blog on Medium.