This article accompanies a visual roadmap which you can view and download here.
Why are roadmaps important?
Roadmapping is a useful tool that allows us to look into the future, predict different possible pathways, and identify areas that might present opportunities or problems. The aim is to visualise different scenarios in order to prepare for them, and to avoid paths that might lead to an undesirable future or, even worse, disaster. It is also an exercise in visualising a desired future and finding the optimal path towards achieving it.
Introducing the roadmap
This roadmap depicts three hypothetical scenarios in the development of an artificial general intelligence (AGI) system, from the perspective of an imaginary company (C1). The main focus is on the AI race, where stakeholders strive to reach powerful AI, and its implications for safety. It maps out possible decisions made by key actors in various “states of affairs”, which lead to diverse outcomes. Traffic-light color coding is used to visualise the potential outcomes: green shows positive outcomes, red negative, and orange those in between.
The aim of this roadmap is not to present the viewer with all possible scenarios, but with a few vivid examples. The roadmap focuses primarily on AGI, which is presumed to have transformative potential and the ability to dramatically affect society [1].
This roadmap intentionally ventures into some extreme scenarios to provoke discussion of AGI’s role in paradigm shifts.
Scenario 1 - AI race: dangerous AGI is developed
Assuming that the potential of AGI is great, and that being the first to create it could give an unprecedented advantage [2] [3], there is a possibility that an AGI could be deployed before it is adequately tested. In this scenario C1 creates AGI while others are still racing to complete the technology. This could lead to C1 becoming anxious, deploying the AGI before safety is assured, and losing control of it.
What happens next in this scenario depends on the nature of the AGI created. If the AGI’s recursive self-improvement proceeds too fast for developers to catch up, the future would be out of humanity’s hands. In this case, depending on the objectives and values of the AGI, it could lead to a doomsday scenario or to a kind of coexistence in which some people manage to merge with the AGI and reap its benefits while others do not.
However, if the self-improvement rate of the AGI is not exponential, there may be enough maneuvering time to bring it back under control. The AGI might start to disrupt socio-economic structures [4], pushing affected groups into action. This could lead to some sort of AGI safety consortium, including C1, dedicated to developing and deploying safety measures to bring the technology under control. Such a consortium would be created out of necessity and would likely stay together to ensure that AGI remains beneficial in the future. Once the AGI is under control, this could theoretically lead to a scenario where a powerful and safe AGI can be (re)created transparently.
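To make the “maneuvering time” argument concrete, here is a minimal toy model (our illustration, not part of the roadmap itself; the growth rates and the threshold are invented) comparing how quickly capability crosses a fixed containment threshold under exponential versus linear self-improvement:

```python
# Toy model: how many improvement cycles until an AGI's capability passes
# a fixed containment threshold? All numbers are invented for illustration.

def cycles_to_threshold(step, capability=1.0, threshold=1000.0, max_cycles=100_000):
    """Count self-improvement cycles until capability exceeds the threshold."""
    cycles = 0
    while capability < threshold and cycles < max_cycles:
        capability = step(capability)
        cycles += 1
    return cycles

# Exponential self-improvement: each cycle multiplies capability by 1.5.
exponential = cycles_to_threshold(lambda c: c * 1.5)

# Linear self-improvement: each cycle adds a fixed increment.
linear = cycles_to_threshold(lambda c: c + 1.5)

print(f"Exponential growth crosses the threshold after {exponential} cycles.")
print(f"Linear growth crosses the threshold after {linear} cycles.")
# With these illustrative numbers: 18 cycles vs. 666 cycles, i.e. roughly
# 37x more maneuvering time when self-improvement is not exponential.
```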
Powerful and safe AGI
The powerful and safe AGI outcome can be reached from both scenarios 1 and 2 (see diagram). It is possible that some sort of powerful AGI prototype will come onto the market; while it would not pose an existential threat, it would likely cause major societal disruptions and the automation of most jobs. This could create the need for a form of “universal basic income”, or an alternative model that enables the sharing of the income and benefits of AGI among the population. For example, the general public could be able to claim their share in the new “AI economy” through mechanisms provided by an inclusive alliance (see below). Note that the role of governments as providers of public support programs might be significantly reduced unless governments have access to AGI alongside powerful economic players. The traditional levers governments use to obtain resources, such as taxation, might not be sufficient in a new AI economy.
Scenario 2 - AI race: focus on safety
In this scenario AGI is seen as a powerful tool that will give its creator a major economic and societal advantage. It is not primarily considered here (as it is above) as an existential risk, but as a likely cause of many disruptions and shifts in power. Developers keep most research private, and alliances do not grow past superficial PR coalitions; however, a lot of work is done on AI safety. Two possible paths this scenario could take are a collaborative approach to development or a stealth one.
Collaborative approach
With various actors calling for collaboration on AGI development, it is likely that some sort of consortium would emerge. This could start off as an ad-hoc trust-building exercise between a few players collaborating on “low-stakes” safety issues, but it could develop into a larger international AGI co-development structure. Today, the way towards a positive scenario is being paved by notable initiatives including the Partnership on AI [5], the IEEE’s work on ethically aligned design [6], the Future of Life Institute [7], and many more. In this roadmap, a hypothetical global organization whose members collaborate on algorithms and safety (titled “United AI”, analogous to the United Nations) is used as an example. This path is more likely to lead to the “Powerful and safe AGI” state described above, as all available global talent would be dedicated, and could contribute, to safety features and testing.
Stealth approach
The opposite could also happen: developers could work in stealth, still doing safety work internally, but trust between organizations would not be strong enough to foster collaborative efforts. This could go down many different paths. The roadmap focuses on what might happen if multiple AGIs with different owners emerge around the same time, or if C1 gains a monopoly over the technology.
Multiple AGIs
Multiple AGIs could emerge around the same time. This could be due to a “leak” in the company, other companies getting close to the technology at the same time, or the AGI being voluntarily given away by its creators.
This path could also unfold in various ways, depending on the creators’ goals. We could reach a “war of AGIs” in which the different actors battle it out for absolute control. However, we could also find ourselves in a situation of stability, similar to the post-WW2 world, where a separate AGI economy with multiple actors develops and begins to function. This could lead to two parallel worlds of people who have access to AGI and those who don’t, or even to those who merge with AGI creating a society of AGI “gods”. This, again, could lead to greater inequality or to an economy of abundance, depending on the motivations of the AGI “gods” and whether they choose to share the fruits of AGI with the rest of humanity.
AGI monopoly
If C1 manages to keep AGI within its walls through team culture and security measures, events could go a number of ways. If C1 had bad intentions, it could use the AGI to try to conquer the world; this would resemble the “war of AGIs” (above), although the competition would be unlikely to stand a chance against such powerful technology. This path could also lead to the other two end states above: if C1 decides to share the fruits of the technology with humanity, we could see an economy of abundance, and if it doesn’t, society will likely become very unequal. There is, however, another possibility: C1 might have no interest in this world and continue to operate in stealth once AGI is created. With the potential of the technology, C1 could leave Earth and begin to explore the universe without anyone noticing.
Scenario 3 - smooth transition from narrow AI to AGI
This scenario sees a gradual transition from narrow AI to AGI. Along the way, infrastructure is built up, and power shifts are slower and more controlled. We are already seeing narrow AI occupy our everyday lives throughout the economy and society, with manual jobs becoming increasingly automated [8] [9]. This progression may give rise to a narrow AI safety consortium, which focuses on narrow AI applications. This model of narrow AI safety and regulation could serve as a trust-building space for players who will go on to develop AGI. However, actors who pursue AGI exclusively and choose not to develop narrow AI technologies might be left out of this scheme.
As jobs become increasingly automated, governments will need to secure more resources (through taxation or other means) to support the affected people. This gradual increase in support could lead to a universal basic income, or a similar model (as outlined above). Eventually AGI would be reached, and again the end states would depend on the motivation of its creator.
What did we learn?
Although this roadmap is not a comprehensive outline of all possible scenarios, it is useful for demonstrating some possibilities and giving us ideas of what we should focus on now.
Collaboration
Looking at the roadmap, it seems evident that one of the keys to avoiding a doomsday scenario, or a war of AGIs, is collaboration between key actors and the creation of some sort of AI safety consortium, or even an international AI co-development structure with stronger ties between actors (“United AI”). In the first scenario we saw the creation of a consortium out of necessity, after C1 lost control of the technology. In the other two scenarios, however, we see examples of how a safety consortium could help control development and avoid undesirable outcomes. A consortium directed towards safety, but also towards human well-being, could also help avoid large inequalities in the future and promote an economy of abundance. Nevertheless, identifying the right incentives to cooperate at each point in time remains one of the biggest challenges.
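One way to see why the right incentives are hard to find is to cast the race as a prisoner’s dilemma, a framing in the spirit of race models such as [1]. The sketch below is a hypothetical illustration with invented payoffs, not a model taken from the roadmap:

```python
# Toy payoff matrix for a two-lab AI race, in the style of a prisoner's
# dilemma. Strategies: "safe" (invest in safety) or "race" (cut corners).
# Payoffs (lab A, lab B) are invented for illustration; higher is better.

payoffs = {
    ("safe", "safe"): (3, 3),   # mutual caution: good shared outcome
    ("safe", "race"): (0, 4),   # the racer gains the advantage
    ("race", "safe"): (4, 0),
    ("race", "race"): (1, 1),   # both cut corners: worst collective outcome
}

def best_response_for_a(b_strategy):
    """Lab A's payoff-maximizing strategy against a fixed choice by lab B."""
    return max(("safe", "race"), key=lambda a: payoffs[(a, b_strategy)][0])

# Whatever lab B does, racing is lab A's individually best response...
for b in ("safe", "race"):
    print(f"If the other lab plays {b!r}, the best response is "
          f"{best_response_for_a(b)!r}")
# ...even though ("race", "race") is collectively worse than ("safe", "safe").
# External coordination (a consortium, "United AI") changes these payoffs.
```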
Universal basic income, universal basic dividend, or similar
Another theme that seems inevitable in an AI or AGI economy is a shift towards a “jobless society” in which machines do the majority of jobs. A state where, due to automation, the predominant part of the world’s population is out of work is something that needs to be planned for. Whether the answer is a shift to a universal basic income, a universal basic dividend [10] distributed from a social wealth fund that invests in equities and bonds, or a similar model that compensates for the societal changes, the transition needs to be gradual to avoid large-scale disruption and chaos. The above-mentioned consortium could also focus on the societal transition to this new system. Check out this post if you would like to read more on AI and the future of work.
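As a purely illustrative sketch of the universal basic dividend arithmetic (all figures below are invented and do not come from [10]), a social wealth fund’s annual return could be divided equally across the population:

```python
# Toy arithmetic for a universal basic dividend paid out of a social
# wealth fund. All figures below are invented for illustration only.

fund_value = 5_000_000_000_000   # hypothetical fund: $5 trillion in equities/bonds
annual_return_rate = 0.04        # assumed 4% annual return
payout_ratio = 0.8               # share of returns distributed (rest reinvested)
population = 250_000_000         # hypothetical number of eligible recipients

annual_returns = fund_value * annual_return_rate
dividend_pool = annual_returns * payout_ratio
dividend_per_person = dividend_pool / population

print(f"Annual fund returns: ${annual_returns:,.0f}")
print(f"Distributed pool:    ${dividend_pool:,.0f}")
print(f"Dividend per person: ${dividend_per_person:,.2f} per year")
# With these numbers: $200bn in returns, $160bn distributed, $640 per person.
```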
Solving the AI race
The roadmap demonstrates the implications of a technological race towards AI, and while competition is known to fuel innovation, we should be aware of the risks associated with such a race and seek paths to avoid them (e.g. by increasing trust and collaboration). The topic of the AI race has been explored further in the General AI Challenge set up by GoodAI, where participants from around the world, with different backgrounds, have submitted their risk-mitigation proposals. Proposals varied in their definition of the race as well as in their methods for mitigating its pitfalls. They included methods of self-regulation for organisations, international coordination, risk-management frameworks, and many more. You can find the six prize-winning entries at https://www.general-ai-challenge.org/ai-race. We encourage readers to give us feedback and build on the ideas developed in the challenge.
References
[1] Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI & SOCIETY, 31(2), 201–206.
[2] Allen, G., & Chan, T. (2017). Artificial Intelligence and National Security, Technical Report. Harvard Kennedy School, Harvard University, Boston, MA.
[3] Bostrom, N. (2017). Strategic Implications of Openness in AI Development. Global Policy, 8, 135–148.
[4] Brundage, M., Avin, S., Clark, J., Allen, G., Flynn, C., Farquhar, S., Crootof, R., & Bryson, J. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
[5] Partnership on AI. (2016). Industry Leaders Establish Partnership on AI Best Practices.
[6] IEEE. (2017). IEEE Releases Ethically Aligned Design, Version 2 to show “Ethics in Action” for the Development of Autonomous and Intelligent Systems (A/IS).
[7] Tegmark, M. (2014). The Future of Technology: Benefits and Risks.
[8] Havrda, M., & Millership, W. (2018). AI and work — a paradigm shift? GoodAI Blog, Medium.
[9] Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Ko, R., & Sanghvi, S. (2017). Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages. Report from McKinsey Global Institute.
[10] Bruenig, M. (2017). Social Wealth Fund for America. People’s Policy Project.