AI for society: creating AI that supports equality, transparency, and democracy

By Jessica Montgomery, Senior Policy Adviser

The Royal Society’s artificial intelligence (AI) programme explores the frontiers of AI technologies, and their implications for individuals, communities, and society.

As part of our programme of international science and policy dialogue about AI, last year we worked with the American Academy of Arts and Sciences to bring together leading researchers from across disciplines to consider the implications of AI for equality, transparency, and democracy.

This blog gives some of the key points from the discussions, which are summarised in more detail in the workshop discussions note (PDF).

AI for society

Today’s AI technologies can help create highly accurate systems, which are able to automate sophisticated tasks. As these technologies progress, researchers and policymakers are grappling with questions about how well these technologies serve society’s needs.

Experience of previous waves of transformative technological change shows that, even when society has a good understanding of the issues new technologies present, it is challenging to create a common vision for the future, to set in place measures that align the present with that desired future, and to engage collective action in ways that help bring it into being.

Where might this common vision for the future come from? International conventions on human rights and sustainable development already point to areas of internationally-agreed need for action, and a growing list of organisations have produced guidelines or principles aiming to shape the development of AI for societal benefit. However, the mechanisms for putting these principles into practice remain unclear. At the same time, there are many other ways in which the things that societies value seem to be contested. In this context, what might AI mean for core values like fairness, transparency, or democracy?

Fairness and equality

Real-world data is messy: it contains missing entries, it can be skewed or subject to sampling errors, and it is often re-purposed for different types of analysis. These sampling errors or other issues in data collection can skew the outputs of a machine learning system, influencing how well it works for different communities of users.
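
As a rough illustration of this point – an illustrative sketch using synthetic data, not an example from the workshop – the short Python snippet below trains a simple classifier on a sample in which one community of users is heavily under-represented, and shows that the resulting model can work noticeably less well for that group. The groups, numbers and feature are all invented assumptions.

    # Illustrative sketch only: synthetic data and hypothetical groups.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # One feature; the feature-label relationship is shifted differently
        # for each group, standing in for real-world differences between them.
        x = rng.normal(size=(n, 1))
        y = (x[:, 0] + shift + rng.normal(scale=0.5, size=n) > 0).astype(int)
        return x, y

    # Group A dominates the training sample; group B is under-represented.
    xa, ya = make_group(n=5000, shift=0.0)
    xb, yb = make_group(n=100, shift=1.0)
    model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

    # Evaluated on fresh, equally sized samples, the model is less accurate for B.
    for name, shift in [("A", 0.0), ("B", 1.0)]:
        x_test, y_test = make_group(2000, shift)
        print(name, round(accuracy_score(y_test, model.predict(x_test)), 3))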

The models created by a machine learning system can also generate unfair outputs, even if trained on accurate data. In recruitment, for example, systems that make predictions about the outcomes of job offers or training can be influenced by biases arising from social structures that are embedded in data at the point of collection. A lack of diversity in the tech community can compound these technical issues, if it reduces the extent to which developers are aware of potential biases when designing machine learning systems.

In seeking to resolve these issues, both technology-enabled and human-led solutions can play a role. For example:

  • Initiatives to address issues of bias in datasets, for example Datasheets for Datasets (PDF), set out recommended usage for datasets that are made available for open use (an illustrative sketch of such a datasheet follows this list).
  • A combination of technical and domain insights can improve the performance of machine learning systems on ‘real world’ problems. This requires teams of people from a range of backgrounds and areas of expertise.
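
To make the first of these more concrete, the sketch below shows one hypothetical form a machine-readable datasheet accompanying a released dataset might take. The field names loosely paraphrase themes from the Datasheets for Datasets proposal rather than reproducing its actual question set, and the dataset described is invented.

    # Hypothetical sketch: the fields paraphrase, not reproduce, the proposal.
    from dataclasses import dataclass, field

    @dataclass
    class Datasheet:
        name: str
        motivation: str              # why and by whom the dataset was created
        collection_process: str      # how, when and from whom data were collected
        known_gaps: list = field(default_factory=list)      # known skews or missing groups
        recommended_uses: list = field(default_factory=list)
        discouraged_uses: list = field(default_factory=list)

    sheet = Datasheet(
        name="example-recruitment-outcomes",   # invented dataset name
        motivation="Research on predicting training outcomes",
        collection_process="Historical HR records from a single firm, 2010-2015",
        known_gaps=["Under-represents part-time applicants"],
        recommended_uses=["Methods research, with bias auditing"],
        discouraged_uses=["Automated screening of live job applications"],
    )
    print(sheet.recommended_uses)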

Interpretability and transparency

The terms ‘interpretability’ and ‘transparency’ mean different things to different people (PDF), and words such as interpretable, explainable, intelligible, transparent or understandable are often used interchangeably, or inconsistently, in debates about AI. But is this variability in meaning problematic?

There are many reasons why users or developers might want to understand why an AI system reached a decision: interpretability can help developers improve system design; it can help users assess risk or understand how a system might fail; and it might be necessary for regulatory or legal standards.

And there are different approaches to creating interpretable systems. Some AI is interpretable by design, while in other cases researchers can create tools to interrogate complex AI systems.
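
As a loose illustration of that distinction – our own sketch, not an example from the discussions – the Python snippet below first fits a shallow decision tree, whose rules can be read directly, and then applies a post-hoc tool (permutation importance) to interrogate a more opaque model trained on the same data.

    # Illustrative sketch: 'interpretable by design' versus post-hoc interrogation.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)

    # Interpretable by design: a shallow tree whose decision rules are readable.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=list(X.columns)))

    # Post-hoc interrogation: permutation importance applied to a random forest.
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    result = permutation_importance(forest, X, y, n_repeats=5, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:3]:
        print(X.columns[i], round(result.importances_mean[i], 3))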

The significance attached to interpretability, and the type of interpretability that is desirable, will likely depend on the context and user. So the absence of a clear definition of interpretability might ultimately be helpful, if it encourages researchers to reach out to a range of stakeholders to understand what different communities need from AI systems.

Democracy and civil society

Early in the development of digital technologies, a great hope had been that they would enable people to connect and build communities in new ways, strengthening society and promoting new forms of citizen engagement. To some extent, this goal has been achieved: people have an opportunity to communicate with much broader – or much narrower – groups in ways that were not previously possible.

Many countries today are grappling with the unintended consequences of these networks. The information echo chambers that have existed in the physical world have found new manifestations in algorithmically-enabled filter bubbles, and the anonymity afforded by digital interactions has raised new questions about the trustworthiness of online information.

Public and policy debates about AI and democracy have tended to concentrate on how changing patterns of news consumption might shape people’s political opinions. In response, could AI be used to improve the circulation of information, providing people with trustworthy information necessary to inform political debate? Perhaps, but insights from behavioural and social sciences show that the process of making political choices is influenced by emotional and social forces, as well as information.

Democracy is more than the exchange of information in campaigns and elections. It draws from a collection of institutions and civic interactions. Democracy persists because institutions preserve it: in the press and the electoral process, but also in courts, in schools, in hospitals, and more. If democracy resides in institutions, then how can AI support them? Beyond interacting online, there is a need for spaces where people can develop civic networks or new civic institutions that allow people from different backgrounds to engage as citizens on common endeavours.

To read more about the key questions raised by this discussion, check out the meeting note (PDF), and you can also read more about the Society’s AI programme.

 

The impact of AI on work: implications for individuals, communities, and societies


By Jessica Montgomery, Senior Policy Adviser

Advances in AI technologies are contributing to new products and services across industries – from robotic surgery to debt collection – and offer many potential benefits for economies, societies, and individuals.

With this potential come questions about the impact of AI technologies on work and working life, and renewed public and policy debates about automation and the future of work.

Building on the insights from the Royal Society’s Machine Learning study, a new evidence synthesis by the Royal Society and the British Academy draws on research across disciplines to consider how AI might affect work. It brings together key insights from current research and policy debates – from economists, historians, sociologists, data scientists, law and management specialists, and others – about the impact of AI on work, with the aim of helping policymakers prepare for these changes.

Current understandings about the impact of AI on work

While much of the public and policy debate about AI and work has tended to oscillate between fears of the ‘end of work’ and reassurances that little will change in terms of overall employment, evidence suggests that neither of these extremes is likely. There is, however, consensus that AI will have a disruptive effect on work, with some jobs being lost, others created, and others changed.

Over the last five years, there have been many projections of the numbers of jobs likely to be lost, gained, or changed by AI technologies, with varying outcomes and using various timescales for analysis.

Most recently, a consensus has begun to emerge from such studies that 10-30% of jobs in the UK are highly automatable. Many new jobs will also be created. However, there remain large uncertainties about the likely new technologies and their precise relationship to tasks. Consequently, it is difficult to make predictions about which jobs will see a fall in demand and the scale of new job creation.

Implications for individuals, communities, and societies

Despite this uncertainty, previous waves of technological change – including the Industrial Revolution and the advent of computing – can provide evidence and insights to inform policy debates today.

Studies of the history of technological change demonstrate that, in the longer term, technologies contribute to increases in population-level productivity, employment, and economic wealth. However, such studies also show that these population-level benefits take time to emerge, and there can be periods in the interim where parts of the population experience significant disbenefits. In the context of the British Industrial Revolution, for example, studies show that wages stagnated for a period despite output per worker increasing. In the same period, technological changes enabled or interacted with large population movements from land to cities, ways of working at home and in factories changed, and there were changes to the distribution of income and wealth across demographics.

Evidence from historical and contemporary studies indicates that technology-enabled changes to work tend to affect lower-paid and lower-qualified workers more than others. For example, in recent years, technology has contributed to a form of job polarisation that has favoured higher-educated workers, while reducing the number of middle-income jobs, and increasing competition for non-routine manual labour.

This type of evidence suggests there are likely to be significant transitional effects as AI technologies begin to play a bigger role in the workplace, causing disruption for some people or places. One of the greatest challenges raised by AI is therefore a potential widening of inequality.

The role of technology in changing patterns of work and employment

The extent to which technological advances are – overall – a substitute for human workers depends on a balance of forces. Productivity growth, the number of jobs created as a result of growing demand, movement of workers to different roles, and emergence of new jobs linked to the new technological landscape all influence the overall economic impact of automation by AI technologies. Concentration of market power can also play a role in shaping labour’s income share, competition, and productivity.

So, while technology is often the catalyst for revisiting concerns about automation and work, and may play a leading role in framing public and policy debates, it is not a unique or overwhelming force. Non-technological factors – including political, economic, and cultural elements – will contribute to shaping the impact of AI on work and working life.

Policy responses and ‘no regrets’ steps

In the face of significant uncertainties about the future of work, what role can policymakers play in contributing to the careful stewardship of AI technologies?

At workshops held by the Royal Society and British Academy, participants offered various suggestions for policy responses to explore, focused around:

  • Ensuring that the workers of the future are equipped with the education and skills they will need to be ‘digital citizens’ (for example, through teaching key concepts in AI at primary school level, as recommended in the Society’s Machine Learning report);
  • Addressing concerns over the changing nature of working life, for example with respect to income security and the gig economy, and in tackling potential biases from algorithmic systems at work;
  • Meeting the likely demand for re-training for displaced workers through new approaches to training and development; and
  • Introducing measures to share the benefits of AI across communities, including by supporting local economic growth.

While it is not yet clear what changes to the world of work might look like, active consideration is needed now about how society can ensure that the increased use of AI is not accompanied by increased inequality. At this stage, it will be important to take ‘no regrets’ steps, which allow policy responses to adapt as new implications emerge, and which offer benefits in a range of future scenarios. One example of such a measure would be building a skills base that is prepared to make use of new AI technologies.

Through the varying estimates of jobs lost or created, tasks automated, or productivity increases, there remains a clear message: AI technologies will have a significant impact on work, and their effects will be felt across the economy. Who benefits from AI-enabled changes to the world of work will be influenced by the policies, structures, and institutions in place. Understanding who will be most affected, how the benefits are likely to be distributed, and where the opportunities for growth lie will be key to designing the most effective interventions to ensure that the benefits of this technology are broadly shared.

 

Machine learning and AI for social good: views from NIPS 2017


By Jessica Montgomery, Senior Policy Adviser

In early December, 8000 machine learning researchers gathered in Long Beach for 2017’s Neural Information Processing Systems conference. In the margins of the conference, the Royal Society and Foreign and Commonwealth Office Science and Innovation Network brought together some of the leading figures in this community to explore how the advances in machine learning and AI being showcased at the conference could be harnessed in a way that supports broad societal benefits. The discussions highlighted some emerging themes, at both the meeting and the wider conference, on the use of AI for social good.

The question is not ‘is AI good or bad?’ but ‘how will we use it?’

Behind (or beyond) the headlines proclaiming that AI will save the world or destroy our jobs, there lie significant questions about how, where, and why society will make use of AI technologies. These questions are not about whether the technology itself is inherently productive or destructive, but about how society will choose to use it, and how the benefits of its use can be shared across society.

In healthcare, machine learning offers the prospect of improved diagnostic tools, new approaches to healthcare delivery, and new treatments based on personalised medicine. In transport, machine learning can support the development of autonomous driving systems, as well as enabling intelligent traffic management and improving safety on the roads. And socially-assistive robotics technologies are being developed to provide assistance that can improve quality of life for their users. Teams in the AI Xprize competition are developing applications across these areas, and more, including education, drug discovery, and scientific research.

Alongside these new applications and opportunities come questions about how individuals, communities, and societies will interact with AI technologies. How can we support research into areas of interest to society? Can we create inclusive systems that are able to navigate questions about societal biases? And how can the research community develop machine learning in an inclusive way?

Creating the conditions that support applications of AI for social good

Applying AI to public policy challenges often requires access to complex, multi-modal data about people and public services. While many national or local government administrations, or non-governmental actors, hold significant amounts of data that could be of value in applications of AI for social good, this data can be difficult to put to use. Institutional, cultural, administrative, or financial barriers can make accessing the data difficult in the first instance. Even where it is accessible in principle, this type of data is often difficult to use in practice: it might be held in outdated systems, be organised to different standards, suffer from compatibility issues with other datasets, or be subject to differing levels of protection. Enabling access to data through new frameworks and supporting data management based on open standards could help ease these issues; both were key recommendations in the Society’s report on machine learning. Our report on data governance sets out high-level principles to support public confidence in data management and use.

In addition to requiring access to data, successful research in areas of social good often requires interdisciplinary teams that combine machine learning expertise with domain expertise. Creating these teams can be challenging, particularly in an environment where funding structures or a pressure to publish certain types of research may contribute to an incentive structure that favours problems with ‘clean’ solutions.

Supporting the application of AI for social good therefore requires a policy environment that enables access to appropriate data, supports skills development in both the machine learning community and in areas of potential application, and that recognises the role of interdisciplinary research in addressing areas of societal importance.

The Royal Society’s machine learning report comments on the steps needed to create an environment of careful stewardship of machine learning, which supports the application of machine learning, while helping share its benefits across society. The key areas for action identified in the report – in creating an amenable data environment, building skills at all levels, supporting businesses, enabling public engagement, and advancing research – aim to create conditions that support the application of AI for social good.

Research in areas of societal interest

In addition to these application-focused issues, there are broader challenges for machine learning research in addressing some of the ethical questions raised by the use of machine learning.

Many of these areas were explored by workshops and talks at the conference. For example, a tutorial on fairness explored the tools available for researchers to examine the ways in which questions about inequality might affect their work.  A symposium on interpretability explored the different ways in which research can give insights into the sometimes complex operation of machine learning systems.  Meanwhile, a talk on ‘the trouble with bias’ considered new strategies to address bias.

The Royal Society has set out how a new wave of research in key areas – including privacy, fairness, interpretability, and human-machine interaction – could support the development of machine learning in a way that addresses areas of societal interest. As research and policy discussions around machine learning and AI progress, the Society will be continuing to play an active role in catalysing discussions about these challenges.

For more information about the Society’s work on machine learning and AI, please visit our website at: royalsociety.org/machine-learning

The stories we tell about technology: AI Narratives

By Susannah Odell and Natasha McCarthy

Technology narratives

The nature, promise and risks of new technologies enter into our shared thinking through narrative – explicit or implicit stories about the technologies and their place in our lives. These narratives can determine what is salient about the technologies, influencing how they are represented in media, culture and everyday discussion. The narratives can influence the dynamics of concern and aspiration across society; the ways and the contexts in which different groups and individuals become aware of and respond to mainstream, new and emerging technologies. The narratives available at a particular point in time, and who tells them, can affect the course of technology development and uptake in subtle ways.

Whilst stories about artificial intelligence have been around for centuries, the way we think about AI is evolving. The Royal Society and Leverhulme Centre for the Future of Intelligence are exploring the ways that narratives about AI might be influencing the development of the technology.

The longevity and influence of narratives

Exploring different technology areas can show how explicit framings of a technology – how it is presented to the wider world – can be influential and long-lasting in this respect. In nuclear energy, for example, Lewis Strauss, Chairman of the US Atomic Energy Commission, stated in 1954 that nuclear power would create energy “too cheap to meter”. This over-promising shaped the arguments of sceptics, and the image continues to be used by those who critique the technology. Early scientific optimism can create inadvertent and unexpected milestones that may – rightly or wrongly – influence how technology is perceived when those milestones are not met. Such framings can be hard to shake off and can dominate more complex and subtle considerations.

Diverse visions: aspirations and concerns

Instead of promoting single framings for technologies and their applications, sowing the seeds early on for multiple voices to be heard can promote diverse narratives and ensure that the technology develops in line with genuine societal needs. A greater diversity of both actors in the development of the technology and diversity in the stories we tell about AI may elucidate new uses and governance needs. This requires extensive and continued public dialogue; the Royal Society’s public dialogue on machine learning recently explored how these views can be context specific.

Encouraging credible, trustworthy and independent communicators who do not stand to benefit personally from the technology can create more realistic narratives around new technologies and science, especially when combined with greater scientific transparency and self-correction. Comprehensive scenario planning can build trustworthy narratives, helping to analyse possible worst case accident scenarios and substantially reduce future risk.

Widening the narratives on AI

Diversifying today’s stories about AI, to ensure that they reflect current technological developments, will give us a better idea of how AI can be used to transform our lives. Dominant narratives focus on anthropomorphised AI, but the reality of AI includes systems that are distributed, embedded in complex systems, and found in varied applications such as helping doctors detect breast cancer or increasing the responsiveness of emergency services to flooding incidents. Adding to existing narratives with stories from underrepresented voices can also help us, as citizens, policy-makers and scientists, to imagine new opportunities and expand our assessment of how AI should be regarded, regulated and harnessed for the best possible economic and societal outcomes.

This is why the Leverhulme Centre for the Future of Intelligence and the Royal Society are exploring how visions and narratives are shaping perceptions, the development of intelligent technology and trust in its use. Find more information.