A draft open standard for an Ethical Black Box

About 5 years ago we proposed that all robots should be fitted with the robot equivalent of an aircraft Flight Data Recorder to continuously record sensor and relevant internal status data. We call this an ethical black box (EBB). We argued that an ethical black box will play a key role in the processes of discovering why and how a robot caused an accident, and thus an essential part of establishing accountability and responsibility.

Since then, within the RoboTIPS project, we have developed and tested several model EBBs, including one for an e-puck robot that I wrote about in this blog, and another for the MIRO robot. With some experience under our belts, we have now drafted an Open Standard for the EBB for social robots – initially as a paper submitted to the International Conference on Robot Ethics and Standards. Let me now explain first why we need a standard, and second why it should be an open standard.

Why do we need a standard specification for an EBB? As we outline in our new paper, there are four reasons:

  1. A standard approach to EBB implementation in social robots will greatly benefit accident and incident (near miss) investigations. 
  2. An EBB will provide social robot designers and operators with data on robot use that can support both debugging and functional improvements to the robot. 
  3. An EBB can be used to support robot ‘explainability’ functions to allow, for instance, the robot to answer ‘Why did you just do that?’ questions from its user.
  4. A standard allows EBB implementations to be readily shared and adapted for different robots and, we hope, will encourage manufacturers to develop and market general purpose robot EBBs.

And why should it be an Open Standard? Bruce Perens, author of The Open Source Definition, outlines a number of criteria an open standard must satisfy, including:

  • Availability: Open standards are available for all to read and implement.
  • Maximize End-User Choice: Open Standards create a fair, competitive market for implementations of the standard.
  • No Royalty: Open standards are free for all to implement, with no royalty or fee.
  • No Discrimination: Open standards and the organizations that administer them do not favor one implementor over another for any reason other than the technical standards compliance of a vendor’s implementation.
  • Extension or Subset: Implementations of open standards may be extended, or offered in subset form.

These are *good* reasons.

The most famous and undoubtedly the most impactful Open Standards are those that were written for the Internet. They were, and still are, called Requests for Comments (RFCs) to reflect the fact that they were – especially in the early years – drafts for revision. As a mark of respect we also regard our draft 0.1 Open Standard for an EBB for Social Robots as an RFC. You can find draft 0.1 in Annex A of the paper here.

Not only is this a first draft, it is also incomplete, covering only the specification of the data, and its format, that should be saved in an EBB for social robots. Given that the EBB data specification is at the heart of the EBB standard, we feel that this is sufficient to be opened up for comments and feedback. We will continue to extend the specification, with subsequent versions also published on arXiv.

Let me now encourage comments and feedback. Please feel free either to submit comments to this blog post – this way everyone can see them – or to contact me directly via email. All constructive comments that result in revisions to the standard will be acknowledged in the standard.


Ethics is the new Quality

I took part in the first panel at the BSI conference The Digital World: Artificial Intelligence.  The subject of the panel was AI Governance and Ethics. My co-panelist was Emma Carmel, and we were expertly chaired by Katherine Holden.

Emma and I each gave short opening presentations prior to the Q&A. The title of my talk was Why is Ethical Governance in AI so hard? Something I’ve thought about a lot in recent months.

Here are the slides exploring that question.


And here are my words.

Early in 2018 I wrote a short blog post with the title Ethical Governance: what is it and who’s doing it? Good ethical governance is important because, in order for people to have confidence in their AI, they need to know that it has been developed responsibly. I concluded my piece by asking for examples of good ethical governance. I had several replies, but none nominated AI companies.

So why is it that 3 years on we see some of the largest AI companies on the planet shooting themselves in the foot, ethically speaking? I’m not at all sure I can offer an answer but, in the next few minutes, I would like to explore the question: why is ethical governance in AI so hard?

But from a new perspective. 

Slide 2

In the early 1970s I spent a few months labouring in a machine shop. The shop was chaotic and disorganised. It stank of machine oil and cigarette smoke, and the air was heavy with the coolant spray used to keep the lathe bits cool. It was dirty and dangerous, with piles of metal swarf cluttering the walkways. There seemed to be a minor injury every day.

Skip forward 40 years and machine shops look very different. 

Slide 3

So what happened? Those of you old enough will recall that while British design was world class – think of the British Leyland Mini, or the Jaguar XJ6 – our manufacturing fell far short. “By the mid 1970s British cars were shunned in Europe because of bad workmanship, unreliability, poor delivery dates and difficulties with spares. Japanese car manufacturers had been selling cars here since the mid 60s but it was in the 1970s that they began to make real headway. Japanese cars lacked the style and heritage of the average British car. What they did have was superb build quality and reliability” [1].

What happened was Total Quality Management. The order and cleanliness of modern machine shops like this one is a strong reflection of TQM practices. 

Slide 4

In the late 1970s manufacturing companies in the UK learned – many the hard way – that ‘quality’ is not something that can be introduced by appointing a quality inspector. Quality is not something that can be hired in.

This word cloud reflects the influence from Japan. The words Japan, Japanese and Kaizen – which roughly translates as continuous improvement – appear here. In TQM everyone shares the responsibility for quality. People at all levels of an organization participate in kaizen, from the CEO to assembly line workers and janitorial staff. Importantly, suggestions from anyone, no matter who, are valued and taken equally seriously.

Slide 5

In 2018 my colleague Marina Jirotka and I published a paper on ethical governance in robotics and AI. In that paper we proposed 5 pillars of good ethical governance. The top four are:

  • have an ethical code of conduct, 
  • train everyone on ethics and responsible innovation,
  • practice responsible innovation, and
  • publish transparency reports.

The 5th pillar underpins these four and is perhaps the hardest: really believe in ethics.

Now a couple of months ago I looked again at these 5 pillars and realised that they parallel good practice in Total Quality Management: something I became very familiar with when I founded and ran a company in the mid 1980s.

Slide 6 

So, if we replace ethics with quality management, we see a set of key processes which exactly parallel our 5 pillars of good ethical governance, including the underpinning pillar: believe in total quality management.

I believe that good ethical governance needs the kind of corporate paradigm shift that was forced on UK manufacturing industry in the 1970s.

Slide 7

In a nutshell, I think ethics is the new quality.

Yes, setting up an ethics board or appointing an AI ethics officer can help, but on their own these are not enough. As with quality, everyone needs to understand and contribute to ethics. Those contributions should be encouraged, valued and acted upon. Nobody should be fired for calling out unethical practices.

Until corporate AI understands this we will, I think, struggle to find companies that practice good ethical governance. 

Quality cannot be ‘inspected in’, and nor can ethics.

Thank you.


Notes.

[1] I’m quoting here from the excellent history of British Leyland by Ian Nicholls.

[2] My company did a huge amount of work for Motorola and – as a subcontractor – we became certified software suppliers within their six sigma quality management programme.

[3] It was competitive pressure that forced manufacturing companies in the 1970s to up their game by embracing TQM. Depressingly the biggest AI companies face no such competitive pressures, which is why regulation is both necessary and inevitable.

On sustainable robotics

The climate emergency brooks no compromise: every human activity or artefact is either part of the solution or it is part of the problem.

I’ve worried about the sustainability of consumer electronics for some time, and, more recently, the shocking energy costs of big AI. But the climate emergency has also caused me to think hard about the sustainability of robots. In recent papers we have defined responsible robotics as

the application of Responsible Innovation in the design, manufacture, operation, repair and end-of-life recycling of robots, that seeks the most benefit to society and the least harm to the environment.

I will wager that few robotics manufacturers – even the most responsible – pay much attention to the repairability and recyclability of their robots. And, I’m ashamed to say, very little robotics research is focused on the development of sustainable robots. A search on Google Scholar throws up a handful of excellent papers detailing work on upcycled and sustainable robots (2018), sustainable robotics for smart cities (2018), green marketing of sustainable robots (2019), and sustainable soft robots (2020).

I was therefore delighted when, a few weeks ago, my friend and colleague Michael Fisher, drafted a proposal for a new standard on Sustainable Robotics. The proposal received strong support from the BSI robotics committee. Here is the formal notice requesting comments on Michael’s proposal: BS XXXX Guide to the Sustainable Design and Application of Robotic Systems.

So what would make a robot sustainable? In my view it would have to be:

  • Made from sustainable materials. This means the robot should, as far as possible, use recycled materials (plastics or metals), or biodegradable materials like wood. Any new materials should be ethically sourced.
  • Low energy. The robot should be designed to use as little energy as possible. It should have energy saving modes. If it is an outdoor robot then it should use solar cells and/or hydrogen cells when they become small enough for mobile robots. Battery powered robots should always be rechargeable.
  • Repairable. The robot should be designed for ease of repair using modular, replaceable parts as much as possible – especially the battery. Additionally, the manufacturer should provide a repair manual so that local workshops can fix most faults.
  • Recyclable. Robots will eventually come to the end of their useful life, and if they cannot be repaired or recycled we risk them being dumped in landfill. To reduce this risk the robot should be designed to make it easy to re-use parts, such as electronics and motors, and to recycle batteries, metals and plastics.

These are, for me, the four fundamental requirements, but there are others. The BSI proposal also adds the environmental effects of deployment (it is unlikely we would consider a robot designed to spray pesticides as truly sustainable) or of failure in the field, as well as the environmental effects of maintenance – cleaning materials, for instance. The proposal also looks toward sustainable, upcyclable robots as part of a circular economy.

This is Ecobot III, developed some years ago by colleagues in the Bristol Robotics Lab’s Bio-energy group. The robot runs on electricity extracted from biomass by 48 microbial fuel cells (the two concentric brick coloured rings). The robot is 90% 3D printed, and the plastic is recyclable.

I would love to see, in the near term, not only a new standard on Sustainable Robotics as a guide (and spur) for manufacturers, but the emergence of Sustainable Robotics as a thriving new sub-discipline in robotics.

Back to Robot Coding part 3: testing the EBB

In part 2 a few weeks ago I outlined a Python implementation of the ethical black box. I described the key data structure – a dictionary which serves as both specification for the type of robot, and the data structure used to deliver live data to the EBB. I also mentioned the other key robot specific code:

# Get data from the robot and store it in data structure spec
def getRobotData(spec):

Having reached this point I needed a robot – and a way of communicating with it – so that I could both write getRobotData(spec)  and test the EBB. But how to do this? I’m working from home during lockdown, and my e-puck robots are all in the lab. Then I remembered that the excellent robot simulator V-REP (now called CoppeliaSim) has a pretty good e-puck model and some nice demo scenes. V-REP also offers multiple ways of communicating between simulated robots and external programs (see here). One of them – TCP/IP sockets – appeals to me as I’ve written sockets code many times, for both real-world and research applications. Then a stroke of luck: I found that a team at Ensta-Bretagne had written a simple demo which does more or less what I need – just not for the e-puck. So, first I got that demo running and figured out how it works, then used the same approach for a simulated e-puck and the EBB. Here is a video capture of the working demo.

So, what’s going on in the demo? The visible simulation views in the V-REP window show an e-puck robot following a black line which is blocked by both a potted plant and an obstacle constructed from 3 cylinders. The robot has two behaviours: line following and wall following. The EBB requests data from the e-puck robot once per second, and you can see those data in the Python shell window. Reading from left to right you will see first the EBB date and time stamp, then robot time botT, then the 3 line following sensors lfSe, followed by the 8 infra red proximity sensors irSe. The final two fields show the joint (i.e. wheel) angles jntA, in degrees, then the motor commands jntD. By watching these values as the robot follows its line and negotiates the two obstacles you can see how the line and infra red sensor values change, resulting in updated motor commands.
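To make the demo’s control flow concrete, here is a minimal sketch of the once-per-second polling loop that produces that output. This is my own illustration rather than code from the demo: it assumes the getRobotData() function and ROBOT_DATA_OK result code listed below, and the spec dictionary described in part 2.

import time
from datetime import datetime

# Sketch of the EBB polling loop: request one data record per second
# and print it in the order described above
def run_ebb_demo(spec, period=1.0):
    while True:
        if getRobotData(spec) == ROBOT_DATA_OK:   # getRobotData() is listed below
            # EBB timestamp, robot time, line sensors, IR sensors,
            # joint angles and motor commands
            print(datetime.now().isoformat(), spec["botTime"],
                  spec["lfSensors"], spec["irSensors"],
                  spec["jntAngles"], spec["jntDemands"])
        time.sleep(period)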

Here is the code – which is custom written both for this robot and the means of communicating with it – for requesting data from the robot.


import socket
import struct
import math

# Note: server_address_port – the (host, port) of the simulated robot's
# TCP server – is assumed to be defined elsewhere in the module

# Get data from the robot and store it in spec[]
# while returning one of the following result codes
ROBOT_DATA_OK = 0
CANNOT_CONNECT = 1
SOCKET_ERROR = 2
BAD_DATA = 3

def getRobotData(spec):
    # This function connects, via TCP/IP, to an ePuck robot running in V-REP

    # create a TCP/IP socket and connect it to the simulated robot
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.connect(server_address_port)
    except OSError:
        return CANNOT_CONNECT
    sock.settimeout(0.1) # set connection timeout

    # pack a dummy packet that will provoke data in response
    #   this is, in effect, a 'ping' to ask for a data record
    strSend = struct.pack('fff', 1.0, 1.0, 1.0)
    sock.sendall(strSend) # and send it to V-REP
    # wait for data back from V-REP
    #   expect a packet with 1 time, 2 joints, 2 motors, 3 line sensors, 8 irSensors
    #   all floats because V-REP
    #   total packet size = 16 x 4 = 64 bytes
    data = b''
    nch_rx = 64 # expect this many bytes from V-REP
    try:
        while len(data) < nch_rx:
            data += sock.recv(nch_rx)
    except OSError:
        sock.close()
        return SOCKET_ERROR
    # unpack the received data
    if len(data) == nch_rx:
        # V-REP packs and unpacks in floats only so
        vrx = struct.unpack('ffffffffffffffff', data)
        # now move data from vrx[] into spec[], while rounding the floats
        spec["botTime"] = [ round(vrx[0],2) ]
        spec["jntDemands"] = [ round(vrx[1],2), round(vrx[2],2) ]
        spec["jntAngles"] = [ round(vrx[3]*180.0/math.pi,2),
                              round(vrx[4]*180.0/math.pi,2) ]
        spec["lfSensors"] = [ round(vrx[5],2), round(vrx[6],2), round(vrx[7],2) ]
        for i in range(8):
            spec["irSensors"][i] = round(vrx[8+i],3)
        result = ROBOT_DATA_OK
    else:
        result = BAD_DATA
    sock.close()
    return result

The structure of this function is very simple: first create a socket and connect it to the simulated robot, then make a dummy packet and send it to V-REP to request EBB data from the robot. Then, when a data packet arrives, unpack it into spec. The most complex part of the code is data wrangling.

Would a real EBB collect data in this way? Well if the EBB is embedded in the robot then probably not. Communication between the robot controller and the EBB might be via ROS messages, or even more directly, by – for instance – allowing the EBB code to access a shared memory space which contains the robot’s sensor inputs, command outputs and decisions. But an external EBB, either running on a local server or in the cloud, would most likely use TCP/IP to communicate with the robot, so getRobotData() would look very much like the example here.
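For completeness, here is a sketch of what the other end of such a TCP/IP link might look like: a small server, running alongside the robot or simulator, that waits for the EBB’s 12-byte ‘ping’ and replies with the 16 packed floats getRobotData() expects. This is not part of the project code – the function name, host, port and get_state() callback are all my own illustrative assumptions.

import socket
import struct

def serve_ebb_requests(get_state, host="127.0.0.1", port=50007):
    # get_state() should return the 16 floats getRobotData() expects:
    # time, 2 joint demands, 2 joint angles, 3 line sensors, 8 IR sensors
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        try:
            ping = conn.recv(12)   # the 3-float dummy packet sent by the EBB
            if len(ping) == 12:
                conn.sendall(struct.pack("16f", *get_state()))
        finally:
            conn.close()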

Back to Robot Coding part 2: the ethical black box

In the last few days I started some serious coding. The first for 20 years, in fact, since I built the software for the BRL LinuxBots. (The coding I did six months ago doesn’t really count as I was only writing or modifying small fragments of Python).

My coding project is to start building an ethical black box (EBB), or to be more accurate, a module that will allow a software EBB to be incorporated into a robot. Conceptually the EBB is very simple, it is a data logger – the robot equivalent of an aircraft Flight Data Recorder, or an automotive Event Data Recorder. Nearly five years ago I made the case, with Marina Jirotka, that all robots (and AIs) should be fitted with an EBB as standard. Our argument is very simple: without an EBB, it will be more or less impossible to investigate robot accidents, or near-misses, and in a recent paper on Robot Accident Investigation we argue that with the increasing use of social robots accidents are inevitable and will need to be investigated.

Developing and demonstrating the EBB is a foundational part of our 5-year EPSRC funded project RoboTIPS, so it’s great to be doing some hands-on practical research. Something I’ve not done for a while.

Here is a block diagram showing the EBB and its relationship with a robot controller.

Box diagram of sensor, embedded artificial intelligence and actuation data being logged by the ethical black box

As shown here the data flows from the robot controller to the EBB are strictly one way. The EBB cannot and must not interfere with the operation of the robot. Coding an EBB for a particular robot would be straightforward, but I have set myself a tougher goal: a generic EBB module (i.e. library of functions) that would – with some inevitable customisation – apply to any robot. And I set myself the additional challenge of coding in Python, making use of skills learned from the excellent online Codecademy Python 2 course.

There are two elements of the EBB that must be customised for a particular robot. The first is the data structure used to fetch and save the sensor, actuator and decision data in the diagram above. Here is an example from my first stab at an EBB framework, using the Python dictionary structure:

# This dictionary structure serves as both 
# 1 specification of the type of robot, and each data field that
#   will be logged for this robot, &
# 2 the data structure we use to deliver live data to the EBB

# for this model let us create a minimal spec for an ePuck robot
epuckSpec = {
    # the first field *always* identifies the type of robot
    # plus version and serial nos
    "robot" : ["ePuck", "v1", "SN123456"],
    # the remaining fields are data we will log, 
    # starting with the motors
    # ..of which the ePuck has just 2: left and right
    "motors" : [0,0],
    # then 8 infra red sensors
    "irSensors" : [0,0,0,0,0,0,0,0],
    # ..note the ePuck has more sensors: accelerometer, camera etc, 
    # but this will do for now
    # ePuck battery level
    "batteryLevel" : [0],
    # then 1 decision code – i.e. what the robot is doing now
    # what these codes mean will be specific to both the robot 
    # and the application
    "decisionCode" : [0]
    }

Whether a dictionary is the best way of doing this I’m not 100% sure, being new to Python (any thoughts from experienced Pythonistas welcome).

The idea is that all robot EBBs will need to define a data structure like this. All must contain the first field “robot”, which names the robot’s type, its version number and serial number. Then the following fields must use keywords from a standard menu, as needed. As shown in this example each keyword is followed by a list of placeholder values – in which the number of values in the list reflects the specification of the actual robot. The ePuck robot, for instance, has 2 motors and 8 infra-red sensors.
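To illustrate the idea, here is a purely hypothetical spec for an imaginary four-wheeled robot with six infra-red sensors (the robot name, version and serial number are invented): the same standard keywords are reused and only the list lengths change.

# a hypothetical spec for an imaginary 4-motor, 6-IR-sensor robot,
# reusing the same standard keywords – only the list lengths change
rover4Spec = {
    "robot" : ["Rover4", "v1", "SN000042"],
    "motors" : [0,0,0,0],
    "irSensors" : [0,0,0,0,0,0],
    "batteryLevel" : [0],
    "decisionCode" : [0]
    }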

The final field in the data structure is “decisionCode”. The values stored in this field would be both robot and application specific; for the ePuck robot these might be 1 = ‘stop’, 2 = ‘turn left’, 3 = ‘turn right’ and so on. We could add another value for a parameter, so the robot might decide for instance to turn left 40 degrees, so “decisionCode” : [2,40]. We could also add a ‘reason’ field, which would save the high-level reason for the decision, as in “decisionCode” : [2,40,”avoid obstacle right”], noting that the decision field could be a string as shown here, or a numeric code.

As I hope I have shown here, the design of this data structure and its fields is at the heart of the EBB.

The second element of the EBB library that must be written for the particular robot and application is the function which fetches data from the robot:

# Get data from the robot and store it in data structure spec
def getRobotData(spec):

How this function is implemented will vary hugely between robots and robot applications. For our Linux enhanced ePucks with WiFi connections this is likely to be via a TCP/IP client-server, with the server running on the robot, sending data following a request from the client getRobotData(epuckSpec). For simpler setups in which the EBB module is folded into the robot controller, accessing the required data within getRobotData() should be very straightforward.

The generic part of the EBB module will define the class EBB, with methods for both initialising the EBB and saving a new data record to the EBB. I will cover that in another blog post.
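To give a flavour of the generic part, here is a minimal sketch of what such an EBB class might look like. This is my own guess, not the code I will describe in that later post: it assumes the spec dictionary above and simply appends timestamped records, as JSON lines, to a log file whose name (ebb_log.jsonl) is invented for illustration.

import datetime
import json

class EBB:
    def __init__(self, spec, logfile="ebb_log.jsonl"):
        # the spec dictionary both identifies the robot and
        # defines the fields that will be logged
        self.spec = spec
        self.logfile = logfile

    def saveRecord(self, spec):
        # append one timestamped data record as a JSON line
        record = {"timestamp": datetime.datetime.now().isoformat()}
        record.update(spec)
        with open(self.logfile, "a") as f:
            f.write(json.dumps(record) + "\n")

In use, the robot-specific and generic parts would then fit together along the lines of ebb = EBB(epuckSpec), followed by a loop that calls getRobotData(epuckSpec) and, on success, ebb.saveRecord(epuckSpec).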

Before closing let me add that it is our intention to publish the specification of the EBB, together with the model EBB code, once it has been fully tested, as open source.

Any comments or feedback would be much appreciated.


Link to the original post here.

RoboTED: a case study in Ethical Risk Assessment

A few weeks ago I gave a short paper at the excellent International Conference on Robot Ethics and Standards (ICRES 2020), outlining a case study in Ethical Risk Assessment – see our paper here. Our chosen case study is a robot teddy bear, inspired by one of my favourite movie robots: Teddy, in A. I. Artificial Intelligence.

Although Ethical Risk Assessment (ERA) is not new – it is after all what research ethics committees do – the idea of extending traditional risk assessment, as practised by safety engineers, to cover ethical risks is new. ERA is, I believe, one of the most powerful tools available to the responsible roboticist, and happily we already have a published standard setting out a guideline on ERA for robotics: BS 8611, published in 2016.

Before looking at the ERA, we need to summarise the specification of our fictional robot teddy bear: RoboTed. First, RoboTed is based on the following technology:

  • RoboTed is an Internet (WiFi) connected device, 
  • RoboTed has cloud-based speech recognition and conversational AI (chatbot) and local speech synthesis,
  • RoboTed’s eyes are functional cameras allowing RoboTed to recognise faces,
  • RoboTed has motorised arms and legs to provide it with limited baby-like movement and locomotion.

And second RoboTed is designed to:

  • Recognise its owner, learning their face and name and turning its face toward the child.
  • Respond to physical play such as hugs and tickles.
  • Tell stories, while allowing a child to interrupt the story to ask questions or ask for sections to be repeated.
  • Sing songs, while encouraging the child to sing along and learn the song.
  • Act as a child minder, allowing parents to remotely listen, watch and speak via RoboTed.

The tables below summarise the ERA of RoboTED for (1) psychological, (2) privacy & transparency and (3) environmental risks. Each table has 4 columns, for the hazard, risk, level of risk (high, medium or low) and actions to mitigate the risk. BS8611 defines an ethical risk as the “probability of ethical harm occurring from the frequency and severity of exposure to a hazard”; an ethical hazard as “a potential source of ethical harm”, and an ethical harm as “anything likely to compromise psychological and/or societal and environmental well-being”.


(1) Psychological Risks



(2) Security and Transparency Risks


(3) Environmental Risks


For a more detailed commentary on each of these tables see our full paper – which also, for completeness, covers physical (safety) risks. And here are the slides from my short ICRES 2020 presentation:

Through this fictional case study we argue that we have demonstrated the value of ethical risk assessment. Our RoboTed ERA has shown that attention to ethical risks can

  • suggest new functions, such as “RoboTed needs to sleep now”,
  • draw attention to how designs can be modified to mitigate some risks, 
  • highlight the need for user engagement, and
  • reject some product functionality as too risky.

But ERA is not guaranteed to expose all ethical risks. It is a subjective process which will only be successful if the risk assessment team are prepared to think both critically and creatively about the question: what could go wrong? As Shannon Vallor and her colleagues write in their excellent Ethics in Tech Practice toolkit, design teams must develop the “habit of exercising the skill of moral imagination to see how an ethical failure of the project might easily happen, and to understand the preventable causes so that they can be mitigated or avoided”.

My top three policy and governance issues in AI/ML

In preparation for a recent meeting of the WEF global AI council, we were asked the question:

What do you think are the top three policy and governance issues that face AI/ML currently?

Here are my answers.

1. For me the biggest governance issue facing AI/ML ethics is the gap between principles and practice. The hard problem the industry faces is turning good intentions into demonstrably good behaviour. In the last 2.5 years there has been a gold rush of new ethical principles in AI. Since Jan 2017 at least 22 sets of ethical principles have been published, including principles from Google, IBM, Microsoft and Intel. Yet any evidence that these principles are making a difference within those companies is hard to find – leading to a justifiable accusation of ethics-washing – and if anything the reputations of some leading AI companies are looking increasingly tarnished.

2. Like others I am deeply concerned by the acute gender imbalance in AI (estimates of the proportion of women in AI vary between ~12% and ~22%). This is not just unfair; I believe it to be positively dangerous, since it is resulting in AI products and services that reflect the values and ambitions of (young, predominantly white) men. This makes it a governance issue. I cannot help wondering if the deeply troubling rise of surveillance capitalism is not, at least in part, a consequence of male values.

3. A major policy concern is the apparently very poor quality of many of the jobs created by the large AI/ML companies. Of course the AI/ML engineers are paid exceptionally well, but it seems that there is a very large number of very poorly paid workers who, in effect, compensate for the fact that AI is not (yet) capable of identifying offensive content, nor is it able to learn without training data generated from large quantities of manually tagged objects in images, nor can conversational AI manage all queries that might be presented to it. This hidden army of piece workers, employed in developing countries by third-party subcontractors and paid very poorly, are undertaking work that is at best extremely tedious (you might say robotic) and at worst psychologically very harmful; this has been called AI’s dirty little secret and should not – in my view – go unaddressed.

An updated round up of ethical principles of robotics and AI

This blogpost is an updated round up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication.

I previously listed principles published before December 2017 here; this blogpost appends those principles drafted since January 2018 (plus one in October 2017 I had missed). The principles are listed here (in full or abridged) with links, notes and references but without critique.

Scroll down to the next horizontal line for the updates.

If there are any (prominent) ones I’ve missed please let me know.

Asimov’s three laws of Robotics (1950)

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly that many subsequent principles have been drafted as a direct response. The three laws first appeared in Asimov’s short story Runaround [1]. This wikipedia article provides a very good account of the three laws and their many (fictional) extensions.

Murphy and Woods’ three laws of Responsible Robotics (2009)

  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. 
  2. A robot must respond to humans as appropriate for their roles. 
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. 

These were proposed in Robin Murphy and David Woods’ paper Beyond Asimov: The Three Laws of Responsible Robotics [2].

EPSRC Principles of Robotics (2010)

  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. 
  2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. 
  3. Robots are products. They should be designed using processes which assure their safety and security. 
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. 
  5. The person with legal responsibility for a robot should be attributed. 

These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.

Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)

I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:

6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

An account of the development of the Asilomar principles can be found here.

The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)

  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. 

See the ACM announcement of these principles here. The principles form part of the ACM’s updated code of ethics.

Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)

  1. Contribution to humanity Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. 
  2. Abidance of laws and regulations Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
  3. Respect for the privacy of others Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
  4. Fairness Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. 
  5. Security As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. 
  6. Act with integrity Members of the JSAI are to acknowledge the significant impact which AI can have on society. 
  7. Accountability and Social Responsibility Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. 
  8. Communication with society and self-development Members of the JSAI must aim to improve and enhance society’s understanding of AI.
  9. Abidance of ethics guidelines by AI AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.

An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).

Draft principles of The Future Society’s Science, Law and Society Initiative (Oct 2017)

  1. AI should advance the well-being of humanity, its societies, and its natural environment. 
  2. AI should be transparent
  3. Manufacturers and operators of AI should be accountable
  4. AI’s effectiveness should be measurable in the real-world applications for which it is intended. 
  5. Operators of AI systems should have appropriate competencies
  6. The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.

This article by Nicolas Economou explains the 6 principles with a full commentary on each one.

Montréal Declaration for Responsible AI draft principles (Nov 2017)

  1. Well-being The development of AI should ultimately promote the well-being of all sentient creatures.
  2. Autonomy The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
  3. Justice The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental / physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
  4. Privacy The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
  5. Knowledge The development of AI should promote critical thinking and protect us from propaganda and manipulation.
  6. Democracy The development of AI should promote informed participation in public life, cooperation and democratic debate.
  7. Responsibility The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.

The Montréal Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).

IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)

  1. How can we ensure that A/IS do not infringe human rights
  2. Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being
  3. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable
  4. How can we ensure that A/IS are transparent
  5. How can we extend the benefits and minimize the risks of AI/AS technology being misused

These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.

A short article co-authored with IEEE general principles co-chair Mark Halverson Why Principles Matter explains the link between principles and standards, together with further commentary and references.

Note that these principles have been revised and extended, in March 2019 (see below).

UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)

  1. Demand That AI Systems Are Transparent
  2. Equip AI Systems With an Ethical Black Box
  3. Make AI Serve People and Planet 
  4. Adopt a Human-In-Command Approach
  5. Ensure a Genderless, Unbiased AI
  6. Share the Benefits of AI Systems
  7. Secure a Just Transition and Ensuring Support for Fundamental Freedoms and Rights
  8. Establish Global Governance Mechanisms
  9. Ban the Attribution of Responsibility to Robots
  10. Ban AI Arms Race

Drafted by UNI Global Union‘s Future World of Work these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency, and application of AI”.


Updated principles

Intel’s recommendation for Public Policy Principles on AI (October 2017)

  1. Foster Innovation and Open Development – To better understand the impact of AI and explore the broad diversity of AI implementations, public policy should encourage investment in AI R&D. Governments should support the controlled testing of AI systems to help industry, academia, and other stakeholders improve the technology.
  2. Create New Human Employment Opportunities and Protect People’s Welfare – AI will change the way people work. Public policy in support of adding skills to the workforce and promoting employment across different sectors should enhance employment opportunities while also protecting people’s welfare.
  3. Liberate Data Responsibly – AI is powered by access to data. Machine learning algorithms improve by analyzing more data over time; data access is imperative to achieve more enhanced AI model development and training. Removing barriers to the access of data will help machine learning and deep learning reach their full potential.
  4. Rethink Privacy – Privacy approaches like The Fair Information Practice Principles and Privacy by Design have withstood the test of time and the evolution of new technology. But with innovation, we have had to “rethink” how we apply these models to new technology.
  5. Require Accountability for Ethical Design and Implementation – The social implications of computing have grown and will continue to expand as more people have access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.

These principles were announced in a blog post by Naveen Rao (Intel VP AI) here.

Lords Select Committee 5 core principles to keep AI ethical (April 2018)

  1. Artificial intelligence should be developed for the common good and benefit of humanity. 
  2. Artificial intelligence should operate on principles of intelligibility and fairness. 
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities. 
  4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence. 
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

These principles appear in the UK House of Lords Select Committee on Artificial Intelligence report AI in the UK: ready, willing and able? published in April 2018. The WEF published a summary and commentary here.

AI UX: 7 Principles of Designing Good AI Products (April 2018)

  1. Differentiate AI content visually – let people know if an algorithm has generated a piece of content so they can decide for themselves whether to trust it or not.
  2. Explain how machines think – helping people understand how machines work so they can use them better
  3. Set the right expectations – especially in a world full of sensational, superficial news about new AI technologies.
  4. Find and handle weird edge cases – spend more time testing and finding weird, funny, or even disturbing or unpleasant edge cases.
  5. User testing for AI products (default methods won’t work here).
  6. Provide an opportunity to give feedback.

These principles, focussed on the design of the User Interface (UI) and User Experience (UX), are from the Budapest-based company UX Studio.

The Toronto Declaration on equality and non-discrimination in machine learning systems (May 2018)

The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems does not succinctly articulate ethical principles but instead presents arguments under the following headings to address concerns “about the capability of [machine learning] systems to facilitate intentional or inadvertent discrimination against certain individuals or groups of people”.

  1. Using the framework of international human rights law The right to equality and non-discrimination; Preventing discrimination, and Protecting the rights of all individuals and groups: promoting diversity and inclusion
  2. Duties of states: human rights obligations State use of machine learning systems; Promoting equality, and Holding private sector actors to account
  3. Responsibilities of private sector actors human rights due diligence
  4. The right to an effective remedy

Google AI Principles (June 2018) 

  1. Be socially beneficial
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles. 

These principles were launched with a blog post and commentary by Google CEO Sundar Pichai here.

IBM’s 5 ethical AI principles (September 2018)

  1. Accountability: AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes.
  2. Value alignment: AI should be designed to align with the norms and values of your user group in mind.
  3. Explainability: AI should be designed for humans to easily perceive, detect, and understand its decision process, and the predictions/recommendations. This is also, at times, referred to as interpretability of AI. Simply speaking, users have all rights to ask the details on the predictions made by AI models such as which features contributed to the predictions by what extent. Each of the predictions made by AI models should be able to be reviewed.
  4. Fairness: AI must be designed to minimize bias and promote inclusive representation.
  5. User data rights: AI must be designed to protect user data and preserve the user’s power over access and uses

For a full account read IBM’s Everyday Ethics for Artificial Intelligence here.

Microsoft Responsible bots: 10 guidelines for developers of conversational AI (November 2018)

  1. Articulate the purpose of your bot and take special care if your bot will support consequential use cases.
  2. Be transparent about the fact that you use bots as part of your product or service.
  3. Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.
  4. Design your bot so that it respects relevant cultural norms and guards against misuse.
  5. Ensure your bot is reliable.
  6. Ensure your bot treats people fairly.
  7. Ensure your bot respects user privacy.
  8. Ensure your bot handles data securely.
  9. Ensure your bot is accessible.
  10. Accept responsibility.

Microsoft’s guidelines for the ethical design of ‘bots’ (chatbots or conversational AIs) are fully described here.

CEPEJ European Ethical Charter on the use of artificial intelligence (AI) in judicial systems and their environment, 5 principles (February 2019)

  1. Principle of respect of fundamental rights: ensuring that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights.
  2. Principle of non-discrimination: specifically preventing the development or intensification of any discrimination between individuals or groups of individuals.
  3. Principle of quality and security: with regard to the processing of judicial decisions and data, using certified sources and intangible data with models conceived in a multi-disciplinary manner, in a secure technological environment.
  4. Principle of transparency, impartiality and fairness: making data processing methods accessible and understandable, authorising external audits.
  5. Principle “under user control”: precluding a prescriptive approach and ensuring that users are informed actors and in control of their choices.

The Council of Europe ethical charter principles are outlined here, with a link to the ethical charter itself.

Women Leading in AI (WLinAI) 10 recommendations (February 2019)

  1. Introduce a regulatory approach governing the deployment of AI which mirrors that used for the pharmaceutical sector.
  2. Establish an AI regulatory function working alongside the Information Commissioner’s Office and Centre for Data Ethics – to audit algorithms, investigate complaints by individuals, issue notices and fines for breaches of GDPR and equality and human rights law, give wider guidance, spread best practice and ensure algorithms must be fully explained to users and open to public scrutiny.
  3. Introduce a new Certificate of Fairness for AI systems alongside a ‘kite mark’ type scheme to display it. Criteria to be defined at industry level, similarly to food labelling regulations.
  4. Introduce mandatory AIAs (Algorithm Impact Assessments) for organisations employing AI systems that have a significant effect on individuals.
  5. Introduce a mandatory requirement for public sector organisations using AI for particular purposes to inform citizens that decisions are made by machines, explain how the decision is reached and what would need to change for individuals to get a different outcome.
  6. Introduce a ‘reduced liability’ incentive for companies that have obtained a Certificate of Fairness to foster innovation and competitiveness.
  7. To compel companies and other organisations to bring their workforce with them – by publishing the impact of AI on their workforce and offering retraining programmes for employees whose jobs are being automated.
  8. Where no redeployment is possible, to compel companies to make a contribution towards a digital skills fund for those employees.
  9. To carry out a skills audit to identify the wide range of skills required to embrace the AI revolution.
  10. To establish an education and training programme to meet the needs identified by the skills audit, including content on data ethics and social responsibility. As part of that, we recommend the set up of a solid, courageous and rigorous programme to encourage young women and other underrepresented groups into technology.

Presented by the Women Leading in AI group at a meeting in parliament in February 2019, this report in Forbes by Noel Sharkey outlines both the group, their recommendations, and the meeting.

The NHS’s 10 Principles for AI + Data (February 2019)

  1. Understand users, their needs and the context
  2. Define the outcome and how the technology will contribute to it
  3. Use data that is in line with appropriate guidelines for the purpose for which it is being used
  4. Be fair, transparent and accountable about what data is being used
  5. Make use of open standards
  6. Be transparent about the limitations of the data used and algorithms deployed
  7. Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision
  8. Generate evidence of effectiveness for the intended use and value for money
  9. Make security integral to the design
  10. Define the commercial strategy

These principles are set out with full commentary and elaboration on Artificial Lawyer here.

IEEE General Principles of Ethical Autonomous and Intelligent Systems (A/IS) (March 2019)

  1. Human Rights: A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
  2. Well-being: A/IS creators shall adopt increased human well-being as a primary success criterion for development.
  3. Data Agency: A/IS creators shall empower individuals with the ability to access and securely share their data to maintain people’s capacity to have control over their identity.
  4. Effectiveness: A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
  5. Transparency: the basis of a particular A/IS decision should always be discoverable.
  6. Accountability: A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
  7. Awareness of Misuse: A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
  8. Competence: A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation.

These amended and extended general principles form part of Ethical Aligned Design 1st edition, published in March 2019. For an overview see pdf here.


Ethical issues arising from the police use of live facial recognition technology (March 2019)

9 ethical principles relate to: public interest, effectiveness, the avoidance of bias and algorithmic justice, impartiality and deployment, necessity, proportionality, impartiality, accountability, oversight, and the construction of watchlists, public trust, and cost effectiveness.

Reported here the UK government’s independent Biometrics and Forensics Ethics Group (BFEG) published an interim report outlining nine ethical principles forming a framework to guide policy on police facial recognition systems.

Floridi and Clement-Jones’ five principles key to any ethical framework for AI (March 2019)

  1. AI must be beneficial to humanity.
  2. AI must also not infringe on privacy or undermine security
  3. AI must protect and enhance our autonomy and ability to take decisions and choose between alternatives. 
  4. AI must promote prosperity and solidarity, in a fight against inequality, discrimination, and unfairness
  5. We cannot achieve all this unless we have AI systems that are understandable in terms of how they work (transparency) and explainable in terms of how and why they reach the conclusions they do (accountability).

Luciano Floridi and Lord Tim Clement-Jones set out, here in the New Statesman, these 5 general ethical principles for AI, with additional commentary.

The European Commission’s High Level Expert Group on AI Ethics Guidelines for Trustworthy AI (April 2019)

  1. Human agency and oversight: AI systems should support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy.
  2. Technical robustness and safety: A crucial component of achieving Trustworthy AI is technical robustness, which is closely linked to the principle of prevention of harm.
  3. Privacy and data governance: Closely linked to the principle of prevention of harm is privacy, a fundamental right particularly affected by AI systems.
  4. Transparency: This requirement is closely linked with the principle of explicability and encompasses transparency of the elements relevant to an AI system: the data, the system and the business models.
  5. Diversity, non-discrimination and fairness: In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system’s life cycle.
  6. Societal and environmental well-being: In line with the principles of fairness and prevention of harm, the broader society, other sentient beings and the environment should also be considered as stakeholders throughout the AI system’s life cycle.
  7. Accountability: The requirement of accountability complements the above requirements, and is closely linked to the principle of fairness.

For more detail on each of these principles follow the links above.

Published on 8 April 2019, the EU HLEG AI ethics guidelines for trustworthy AI are detailed in full here.


Draft core principles of Australia’s Ethics Framework for AI (April 2019)

  1. Generates net-benefits. The AI system must generate benefits for people that are greater than the costs.
  2. Do no harm. Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes. 
  3. Regulatory and legal compliance. The AI system must comply with all relevant international, Australian Local, State/Territory and Federal government obligations, regulations and laws.
  4. Privacy protection. Any system, including AI systems, must ensure people’s private data is protected and kept confidential plus prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm.
  5. Fairness. The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly.
  6. Transparency & Explainability. People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.
  7. Contestability. When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.
  8. Accountability. People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended.

These draft principles are detailed in Artificial Intelligence: Australia’s Ethics Framework (A Discussion Paper). This comprehensive paper includes detailed summaries of many of the frameworks and initiatives listed above, together with some very useful case studies.


References
[1] Asimov, Isaac (1950): Runaround, in I, Robot (The Isaac Asimov Collection ed.), Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin and Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics, IEEE Intelligent Systems, 24 (4): 14-20.
[3] Boden, Margaret et al. (2017): Principles of Robotics: Regulating Robots in the Real World, Connection Science, 29 (2): 124-129.
[4] Prescott, Tony and Szollosy, Michael (eds.) (2017): Ethical Principles of Robotics, Connection Science, 29 (2) and 29 (3).

The pedestrian experiment

Followers of this blog will know that I have been working for some years on simulation-based internal models – demonstrating their potential for ethical robots, safer robots and imitating robots. But pretty much all of our experiments so far have involved only one robot with a simulation-based internal model, while the other robots it interacts with have no internal model at all.

But some time ago we wondered what would happen if two robots, each with a simulation-based internal model, interacted with each other. Imagine two such robots approaching each other in the same way that two pedestrians approach each other on the sidewalk. Is it possible that these ‘pedestrian’ robots might, from time to time, engage in the kind of ‘dance’ that human pedestrians do when one steps to their left and the other to their right only to compound the problem of avoiding a collision with a stranger? The answer, it turns out, is yes!

The idea was taken up by Mathias Schmerling at the Humboldt University of Berlin, adapting the code developed by Christian Blum for the Corridor experiment. Chen Yang, one of my masters students, has now updated Mathias’ code and has produced some very nice new results.

Most of the time the pedestrian robots pass each other without fuss, but in something between 1 in 5 and 1 in 10 trials we do indeed see an interesting dance. Here are a couple of examples from the majority of trials, when the robots pass each other normally, showing the robots’ trajectories. In each trial blue starts from the left and green from the right. Note that there is an element of randomness in the initial directions of each robot (which almost certainly explains the relative occurrence of normal and dance behaviours).

And here is a gif animation showing what’s going on in a normal trial. The faint straight lines from each robot show the target directions for each next possible action modelled in each robot’s simulation-based internal model (consequence engine); the various dotted lines show the predicted paths (and possible collisions) and the solid blue and green lines show which next action is actually selected following the internal modelling.
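For readers who like to see the mechanism spelled out, here is a minimal Python sketch of the kind of consequence-engine loop described above. The candidate headings, the straight-line path prediction and the simple scoring rule are my own illustrative assumptions, not the code used in these experiments; the point is only to show the shape of the select-by-simulating-consequences step.

```python
import math

# Candidate next headings (radians) relative to the robot's goal direction.
# The real experiments model a set of next possible actions; this fixed fan
# of headings is an illustrative assumption.
CANDIDATE_HEADINGS = [-0.6, -0.3, 0.0, 0.3, 0.6]

def predict_path(pos, heading, speed=0.1, steps=20):
    """Dead-reckon a straight-line path for one candidate action."""
    x, y = pos
    return [(x + speed * t * math.cos(heading),
             y + speed * t * math.sin(heading)) for t in range(steps)]

def min_separation(path_a, path_b):
    """Smallest predicted distance between the two robots over the horizon."""
    return min(math.dist(a, b) for a, b in zip(path_a, path_b))

def select_action(my_pos, my_goal_bearing, other_pos, other_bearing,
                  safety_radius=0.25):
    """Pick the heading that best trades progress towards the goal against
    predicted collision risk with the other robot."""
    other_path = predict_path(other_pos, other_bearing)
    best_heading, best_score = None, -math.inf
    for delta in CANDIDATE_HEADINGS:
        heading = my_goal_bearing + delta
        my_path = predict_path(my_pos, heading)
        clearance = min_separation(my_path, other_path)
        # Large penalty for a predicted collision, small penalty for
        # deviating from the goal direction.
        score = (-100.0 if clearance < safety_radius else 0.0) - abs(delta)
        if score > best_score:
            best_heading, best_score = heading, score
    return best_heading

# Example: two robots approaching head-on, as in the pedestrian experiment.
print(select_action((0.0, 0.0), 0.0, (2.0, 0.0), math.pi))
```

In the real experiments each robot runs a loop of this kind continuously while the other robot does the same, so each robot’s prediction about the other can be invalidated by the other’s own avoiding action.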

Here is a beautiful example of a ‘dance’, again showing the robot trajectories. Note that the impasse resolves itself after a while. We’re still trying to figure out exactly what mechanism enables this resolution.

And here is the gif animation of the same trial:

Notice that the impasse is not resolved until the fifth turn of each robot.

Is this the first time that pedestrians passing each other – and in particular the occasional dance that ensues – has been computationally modelled?

All of the results above were obtained in simulation (yes there really are simulations within a simulation going on here), but within the past week Chen Yang has got this experiment working with real e-puck robots. Videos will follow shortly.


Acknowledgements.

I am indebted to the brilliant experimental work of first Christian Blum (supported by Wenguo Liu), then Mathias Schmerling who adapted Christian’s code for this experiment, and now Chen Yang who has developed the code further and obtained these results.

What is artificial intelligence? (Or, can machines think?)

Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils.

I start the keynote with Alan Turing’s famous question: Can a Machine Think? and explain that thinking is not just the conscious reflection of Rodin’s Thinker but also the largely unconscious thinking required to make a pot of tea. I note that at the dawn of AI 60 years ago we believed the former kind of thinking would be really difficult to emulate artificially and the latter easy. In fact it has turned out to be the other way round: we’ve had computers that can expertly play chess for over 20 years, but we can’t yet build a robot that could go into your kitchen and make you a cup of tea (see also the Wozniak coffee test).

In slides 5 and 6 I suggest that we all assume a cat is smarter than a crocodile, which is smarter than a cockroach, on a linear scale of intelligence from not very intelligent to human intelligence. I ask where a robot vacuum cleaner would be on this scale and propose that such a robot is about as smart as E. coli (a single-celled organism). I then illustrate the difficulty of placing the Actroid robot on this scale because, although it may look convincingly human (from a distance), in reality the robot is not very much smarter than a washing machine (and I hint that this is an ethical problem).

In slide 7 I show how apparently intelligent behaviour doesn’t require a brain, with the Solarbot. This robot is an example of a Braitenberg machine. It has two solar panels (which look a bit like wings) acting as both sensors and power sources; the left hand panel is connected to the right hand wheel and vice versa. These direct connections mean that Solarbot can move towards the light and even navigate its way through obstacles, thus showing that intelligent behaviour is an emergent property of the interactions between body and environment.
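To underline just how little machinery this needs, here is a small, hypothetical Python sketch of a crossed-connection Braitenberg vehicle of the kind Solarbot exemplifies. The sensor model, gains and geometry are my own simplifying assumptions rather than a model of the actual robot.

```python
import math

def braitenberg_step(x, y, theta, light, gain=1.0, dt=0.1,
                     wheel_base=0.1, sensor_offset=0.3):
    """One update of a crossed-wire Braitenberg vehicle: the left light
    sensor drives the right wheel and vice versa, which steers the
    vehicle towards the light source."""
    lx, ly = light
    # Sensor positions, offset to the left and right of the heading.
    left = (x + 0.05 * math.cos(theta + sensor_offset),
            y + 0.05 * math.sin(theta + sensor_offset))
    right = (x + 0.05 * math.cos(theta - sensor_offset),
             y + 0.05 * math.sin(theta - sensor_offset))

    def brightness(px, py):
        # Simple inverse-square brightness at a sensor position.
        return 1.0 / (0.01 + (px - lx) ** 2 + (py - ly) ** 2)

    # Crossed connections: left sensor -> right wheel, right sensor -> left wheel.
    v_right = gain * brightness(*left)
    v_left = gain * brightness(*right)

    # Differential-drive kinematics.
    v = (v_left + v_right) / 2.0
    omega = (v_right - v_left) / wheel_base
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# The vehicle turns towards a light at (1, 1) starting from the origin.
state = (0.0, 0.0, 0.0)
for _ in range(5):
    state = braitenberg_step(*state, light=(1.0, 1.0))
print(state)
```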

In slide 8 I ask the question: What is the most advanced AI in the world today? (A question I am often asked.) Is it, for example, David Hanson’s robot Sophia (which some press reports have claimed is the world’s most advanced)? I argue it is not, since it is a chatbot AI – with a limited conversational repertoire – with a physical body (imagine Alexa with a humanoid head). Is it the DeepMind AI AlphaGo, which famously beat the world’s best Go player in 2016? Although very impressive, I again argue no, since AlphaGo cannot do anything other than play Go. Instead I suggest that everyday Google search might well be the world’s most advanced AI (on this I agree with my friend Joanna Bryson). Google is in effect a librarian able to find a book from an immense library for you – on the basis of your ill-formed query – more or less instantly! (And this librarian is polylingual too.)

In slide 9 I make the point that intelligence is not one thing that animals, robots and AIs have more or less of (in other words, the linear scale shown on slides 5 and 6 is wrong). Then in slides 10 – 13 I propose four distinct categories of intelligence: morphological, swarm, individual and social intelligence. I suggest in slides 14 – 16 that if we express these as four axes of a graph then we can (very approximately) compare the intelligence of different organisms, including humans. In slide 17 I show some robots and argue that this graph shows why robots are so unintelligent: robots generally have only two of the four kinds of intelligence, whereas animals typically have three or sometimes all four. A detailed account of these ideas can be found in my paper How intelligent is your intelligent robot?

In the next segment, slides 18 – 20, I ask: how do we make Artificial General Intelligence (AGI)? I suggest that the key difference between current narrow AI and AGI is the ability – which comes very naturally to humans – to generalise knowledge learned in one context to a completely different context. This, I think, is the basis of human creativity. Using Data from Star Trek: The Next Generation as a science fiction example of an AGI with human-equivalent intelligence (the kind of thing we might be aiming for in the quest for AGI), I explain that there are three approaches to getting there: by design, using artificial evolution, or by reverse engineering animals. I offer the opinion that the gap between where we are now and Data-like AGI is about the same as the gap between current spacecraft engine technology and warp drive technology. In other words, not any time soon.

In the fourth segment of the talk (slides 21 – 24) I give a very brief account of evolutionary robotics – a method for breeding robots in much the same way farmers have artificially selected new varieties of plants and animals for thousands of years. I illustrate this with the wonderful Golem project which, for the first time, evolved simple creatures then 3D printed the most successful ones. I then introduce our new four-year EPSRC-funded project Autonomous Robot Evolution: from cradle to grave. In a radical new approach we aim to co-evolve robot bodies and brains in real time and real space. Using 3D printing, new robot designs will literally be printed, before being trained in a nursery, then fitness tested in a target environment. With this approach we hope to be able to evolve robots for extreme environments; however, because the energy costs are so high, I do not think evolution is a route to truly thinking machines.
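For readers unfamiliar with the method, the sketch below shows the bare bones of the evolutionary loop that underlies this kind of work: evaluate fitness, keep the fittest, mutate, repeat. The genome encoding and the toy fitness function are purely illustrative stand-ins for real robot body and controller parameters and real fitness tests; they are not from the Autonomous Robot Evolution project.

```python
import random

# A genome is just a list of parameters standing in for a robot's body and
# controller; a real system would decode these into a morphology and a
# controller, then measure fitness on a physical or simulated task.
GENOME_LENGTH = 8
POPULATION_SIZE = 20
GENERATIONS = 50

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def mutate(genome, rate=0.1, scale=0.2):
    return [g + random.gauss(0.0, scale) if random.random() < rate else g
            for g in genome]

def fitness(genome):
    # Toy stand-in for a fitness test: prefer genomes close to an arbitrary
    # target vector. A real evaluation would test the robot in its target
    # environment.
    target = [0.5] * GENOME_LENGTH
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POPULATION_SIZE // 4]   # keep the fittest quarter
    offspring = [mutate(random.choice(parents))
                 for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + offspring

print(round(fitness(population[0]), 4))
```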

In the final segment (slides 25 – 35) I return to the approach of trying to design rather than evolve thinking machines. I introduce the idea of embedding a simulation of a robot in that robot, so that it has the ability to internally model itself. The first example I give is the amazing anthropomimetic robot invented by my old friend Owen Holland, called ECCEROBOT. Eccerobot is able to learn how to control its own very complicated and hard-to-control body by trying out possible movement sequences in its internal model (Owen calls this a ‘functional imagination’). I then outline our own work using the same principle – a simulation-based internal model – to demonstrate simple ethical behaviours, first with e-puck robots, then with NAO robots. These experiments are described in detail here and here. I suggest that these robots – with their ability to model and predict the consequences of their own and others’ actions, in other words to anticipate the future – may represent the first small steps toward thinking machines.


Related blog posts:
60 years of asking: can robots think?

How intelligent are intelligent robots?

Robot bodies and how to evolve them


Why ethical robots might not be such a good idea after all

Last week my colleague Dieter Vanderelst presented our paper: The Dark Side of Ethical Robots at AIES 2018 in New Orleans.

I blogged about Dieter’s very elegant experiment here, but let me summarise. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that with a very simple alteration of the ethical robot’s logic it is transformed into a distinctly unethical robot – behaving either competitively or aggressively toward the proxy human.

Here are our paper’s key conclusions:

The ease of transformation from ethical to unethical robot is hardly surprising. It is a straightforward consequence of the fact that both ethical and unethical behaviours require the same cognitive machinery with – in our implementation – only a subtle difference in the way a single value is calculated. In fact, the difference between an ethical (i.e. seeking the most desirable outcomes for the human) robot and an aggressive (i.e. seeking the least desirable outcomes for the human) robot is a simple negation of this value.
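To make the point concrete, here is a hypothetical Python sketch of this kind of selection rule. The toy outcome model and the function names are invented for illustration and are not the code from our paper, but they show how negating a single term flips a robot that seeks the most desirable outcome for the human into one that seeks the least desirable outcome.

```python
def select_action(candidate_actions, predict_human_outcome, ethical=True):
    """Choose the action whose predicted outcome for the human scores best.

    predict_human_outcome(action) returns a desirability score for the
    human (higher is better). Negating that single term turns a robot
    that seeks the most desirable outcome for the human into one that
    seeks the least desirable outcome."""
    sign = 1.0 if ethical else -1.0
    return max(candidate_actions,
               key=lambda action: sign * predict_human_outcome(action))

# Toy outcome model: invented desirability scores, for the human, of each
# candidate robot action.
outcomes = {"warn_human": 3.0, "do_nothing": 1.0, "block_path": 0.2}

print(select_action(outcomes, outcomes.get, ethical=True))    # warn_human
print(select_action(outcomes, outcomes.get, ethical=False))   # block_path
```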

On the face of it, given that we can (at least in principle) build explicitly ethical machines then it would seem that we have a moral imperative to do so; it would appear to be unethical not to build ethical machines when we have that option. But the findings of our paper call this assumption into serious doubt. Let us examine the risks associated with ethical robots and if, and how, they might be mitigated. There are three.

  1. First there is the risk that an unscrupulous manufacturer might insert some unethical behaviours into their robots in order to exploit naive or vulnerable users for financial gain, or perhaps to gain some market advantage (here the VW diesel emissions scandal of 2015 comes to mind). There are no technical steps that would mitigate this risk, but the reputational damage from being found out is undoubtedly a significant disincentive. Compliance with ethical standards such as BS 8611 Guide to the ethical design and application of robots and robotic systems, or the emerging new IEEE P700X ‘human’ standards, would also support manufacturers in the ethical application of ethical robots.
  2. Perhaps more serious is the risk arising from robots that have user-adjustable ethics settings. Here the danger arises from the possibility that either the user or a technical support engineer mistakenly, or deliberately, chooses settings that move the robot’s behaviours outside an ‘ethical envelope’. Much depends, of course, on how the robot’s ethics are coded, but one can imagine the robot’s ethical rules expressed in a user-accessible format, for example an XML-like script. No doubt the best way to guard against this risk is for robots to have no user-adjustable ethics settings, so that the robot’s ethics are hard-coded and not accessible to either users or support engineers.
  3. But even hard-coded ethics would not guard against undoubtedly the most serious risk of all, which arises when those ethical rules are vulnerable to malicious hacking. Given that cases of white-hat hacking of cars have already been reported, it’s not difficult to envisage a nightmare scenario in which the ethics settings for an entire fleet of driverless cars are hacked, transforming those vehicles into lethal weapons. Of course, driverless cars (or robots in general) without explicit ethics are also vulnerable to hacking, but weaponising such robots is far more challenging for the attacker. Explicitly ethical robots focus the robot’s behaviours to a small number of rules which make them, we think, uniquely vulnerable to cyber-attack.

Taking the most serious of these risks, hacking, we can envisage several technical approaches to mitigating the risk of malicious hacking of a robot’s ethical rules. One would be to place those ethical rules behind strong encryption. Another would require a robot to authenticate its ethical rules by first connecting to a secure server. An authentication failure would disable those ethics, so that the robot defaults to operating without explicit ethical behaviours. Although feasible, these approaches would be unlikely to deter the most determined hackers, especially those who are prepared to resort to stealing encryption or authentication keys.
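As a purely illustrative example of the authentication idea, the sketch below verifies a keyed hash over the robot’s ethics rules file before loading it, and falls back to running without explicit ethical behaviours if verification fails. The file names and key handling are my own assumptions and, as noted above, this only helps for as long as the verification key itself is not stolen.

```python
import hmac
import hashlib
import json

def load_ethics_rules(rules_path, sig_path, key):
    """Load the ethics rules only if their HMAC-SHA256 tag verifies.
    Returns None (no explicit ethics) on any verification failure."""
    try:
        with open(rules_path, "rb") as f:
            rules_bytes = f.read()
        with open(sig_path, "r") as f:
            expected_tag = f.read().strip()
    except OSError:
        return None
    tag = hmac.new(key, rules_bytes, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected_tag):
        return None   # tampering suspected: disable explicit ethical behaviours
    return json.loads(rules_bytes)

# Hypothetical usage: in practice the key would come from secure storage or a
# secure server, not be hard-coded like this.
SECRET_KEY = b"replace-with-provisioned-key"
rules = load_ethics_rules("ethics_rules.json", "ethics_rules.sig", SECRET_KEY)
if rules is None:
    print("Ethics rules failed verification; running without explicit ethics.")
```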

It is very clear that guaranteeing the security of ethical robots is beyond the scope of engineering and will need regulatory and legislative efforts. Considering the ethical, legal and societal implications of robots, it becomes obvious that robots themselves are not where responsibility lies. Robots are simply smart machines of various kinds and the responsibility to ensure they behave well must always lie with human beings. In other words, we require ethical governance, and this is equally true for robots with or without explicit ethical behaviours.

Two years ago I thought the benefits of ethical robots outweighed the risks. Now I’m not so sure. I now believe that – even with strong ethical governance – the risks that a robot’s ethics might be compromised by unscrupulous actors are so great as to raise very serious doubts over the wisdom of embedding ethical decision making in real-world safety critical robots, such as driverless cars. Ethical robots might not be such a good idea after all.

Here is the full reference to our paper:

Vanderelst D and Winfield AFT (2018), The Dark Side of Ethical Robots, AAAI/ACM Conf. on AI Ethics and Society (AIES 2018), New Orleans.

A round-up of robotics and AI ethics, part 1: principles


This blog post is a round-up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. The principles are presented here (in full or abridged) with notes and references, but without commentary. If there are any (prominent) ones I’ve missed please let me know.

Asimov’s three laws of Robotics (1950)

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly, that many subsequent principles have been drafted as a direct response to the three laws. The three laws first appeared in Asimov’s short story Runaround [1]. This Wikipedia article provides a very good account of the three laws and their many (fictional) extensions.


Murphy and Wood’s three laws of Responsible Robotics (2009)

  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. 
  2. A robot must respond to humans as appropriate for their roles. 
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. 

These were proposed in Robin Murphy and David Woods’ paper Beyond Asimov: The Three Laws of Responsible Robotics [2].

EPSRC Principles of Robotics (2010)

  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. 
  2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. 
  3. Robots are products. They should be designed using processes which assure their safety and security. 
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. 
  5. The person with legal responsibility for a robot should be attributed. 

These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.

Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)

I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:

    6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

    7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

    8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

    9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

    10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

    11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

    12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

    13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

    14. Shared Benefit: AI technologies should benefit and empower as many people as possible.

    15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

An account of the development of the Asilomar principles can be found here.

The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)

  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. 

See the ACM announcement of these principles here. The principles form part of the ACM’s updated code of ethics.

Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)

  1. Contribution to humanity: Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity.
  2. Abidance of laws and regulations: Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
  3. Respect for the privacy of others: Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
  4. Fairness: Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI.
  5. Security: As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control.
  6. Act with integrity: Members of the JSAI are to acknowledge the significant impact which AI can have on society.
  7. Accountability and Social Responsibility: Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed.
  8. Communication with society and self-development: Members of the JSAI must aim to improve and enhance society’s understanding of AI.
  9. Abidance of ethics guidelines by AI: AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.

An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).

Draft principles of The Future Society’s Science, Law and Society Initiative (Oct 2017)

  1. AI should advance the well-being of humanity, its societies, and its natural environment.
  2. AI should be transparent.
  3. Manufacturers and operators of AI should be accountable.
  4. AI’s effectiveness should be measurable in the real-world applications for which it is intended.
  5. Operators of AI systems should have appropriate competencies.
  6. The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.

This article by Nicolas Economou explains the 6 principles with a full commentary on each one.

Montréal Declaration for Responsible AI draft principles (Nov 2017)

  1. Well-being: The development of AI should ultimately promote the well-being of all sentient creatures.
  2. Autonomy: The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
  3. Justice: The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental/physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
  4. Privacy: The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
  5. Knowledge: The development of AI should promote critical thinking and protect us from propaganda and manipulation.
  6. Democracy: The development of AI should promote informed participation in public life, cooperation and democratic debate.
  7. Responsibility: The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.

The Montréal Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).

IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)

  1. How can we ensure that A/IS do not infringe human rights?
  2. Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
  3. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable?
  4. How can we ensure that A/IS are transparent?
  5. How can we extend the benefits and minimize the risks of AI/AS technology being misused?

These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.

A short article, Why Principles Matter, co-authored with IEEE general principles co-chair Mark Halverson, explains the link between principles and standards, together with further commentary and references.

UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)

  1. Demand That AI Systems Are Transparent
  2. Equip AI Systems With an “Ethical Black Box”
  3. Make AI Serve People and Planet 
  4. Adopt a Human-In-Command Approach
  5. Ensure a Genderless, Unbiased AI
  6. Share the Benefits of AI Systems
  7. Secure a Just Transition and Ensuring Support for Fundamental Freedoms and Rights
  8. Establish Global Governance Mechanisms
  9. Ban the Attribution of Responsibility to Robots
  10. Ban AI Arms Race

Drafted by UNI Global Union‘s Future World of Work these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency, and application of AI”.


References
[1] Asimov, Isaac (1950): Runaround, in I, Robot (The Isaac Asimov Collection ed.), Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin and Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics, IEEE Intelligent Systems, 24 (4): 14-20.
[3] Boden, Margaret et al. (2017): Principles of Robotics: Regulating Robots in the Real World, Connection Science, 29 (2): 124-129.
[4] Prescott, Tony and Szollosy, Michael (eds.) (2017): Ethical Principles of Robotics, Connection Science, 29 (2) and 29 (3).

Sci-Fi Dreams: How visions of the future are shaping the development of intelligent technology

Here are the slides I gave recently as member of panel Sci-Fi Dreams: How visions of the future are shaping the development of intelligent technology, at the Centre for the Future of Intelligence 2017 conference. I presented three short stories about robot stories.

Slide 2:

The FP7 TRUCE Project invited a number of scientists – mostly within the field of Artificial Life – to suggest ideas for short stories. Those ideas were then sent to a panel of writers, each of whom chose one to develop. I submitted an idea called The feeling of what it is like to be a robot and was delighted when Lucy Caldwell contacted me. Following a visit to the lab Lucy drafted a beautiful story called The Familiar which – following some iteration – appeared in the collected volume Beta Life.

Slide 3:

More recently the EU Human Brain Project Foresight Lab brought three Sci Fi writers – Allen Ashley, Jule Owen and Stephen Oram – to visit the lab. Inspired by what they saw they then wrote three wonderful short stories, which were read at the 2016 Bristol Literature Festival. The readings were followed by a panel discussion which included myself and BRL colleagues Antonia Tzemanaki and Marta Palau Franco. The three stories are published in the volume Versions of the Future. Stephen Oram went on to publish a collection called Eating Robots.

Slide 4:

My first two stories were about people telling stories about robots. Now I turn to the possibility of robots themselves telling stories. Some years ago I speculated on the idea of robots telling each other stories (directly inspired by a conversation with Richard Gregory). That idea has now turned into a current project, with the aim of building an embodied computational model of storytelling. For a full description see this paper, currently in press.