Many thanks to the twenty-four people who braved “the Beast from the East” to attend our third meetup. The contributions everyone made were truly inspirational.

When we decided that the “Ethics of Artificial Intelligence” would be an ideal topic for our third meetup on 27th February, little did we appreciate the complexity and challenges facing society. To many, the ethics of AI is about questions like “how many jobs will be lost?”, “will there be massive unemployment?”, “will the working week be shorter?” or “what will we do with all the spare time?”. But in reality, it is far more complex than that, although those questions do, of course, need to be resolved.

Through a contact of my fellow Ethical Reading Director, Gill Ringland, we were introduced to Ian Harris, a director of Z/Yen, one of the City of London’s leading commercial think-tanks.

During my initial conversations with Ian about the subject matter and the format of the evening, it became clear that he wanted to focus on the positive ethical implications of AI and its beneficial aspects to society, rather than just the negative aspects such as job losses and a potential increase in inequality.

The way he did this was very clever – he is a pro at this, after all! After a 10–15 minute introduction we broke out into small groups to discuss two topics: Autonomous Vehicles and Care Robots. Each group was asked to imagine that the year is 2028 (not that far away) and that we are part of a Citizens’ Panel considering the ethical issues raised by these technologies as their widespread adoption looms. In Ian’s scenario, governments, regulators, industry lobby groups and consumer groups have suddenly woken up to this reality, recognising that far too little thought has been given to the ethical and societal issues arising from widespread use.

Autonomous Vehicles

Two major players have emerged in the marketplace for autonomous vehicles: Tuba and Flugel.  In truth, the competitors don’t get on very well with each other, nor with the regulators.  It is only fair to say, though, that the number of accidents caused by or involving autonomous vehicles is proportionally tiny, which has caused public opinion to shift hugely in favour of autonomous vehicles on safety grounds.  Your Citizens’ Panel has been asked to grapple with the following ethical questions:

  • Should some roads (or eventually all public highways) be designated for the use of autonomous vehicles only?  In other words, should human driver-controlled vehicles eventually be prohibited in public, on safety grounds?
  • The autonomous vehicle suppliers are very cagey about revealing the artificial intelligence algorithms that deal with danger responses.  For example, if the autonomous car has only two options – either to hit and kill a child in the road, or to swerve into a wall to avoid the child, endangering the passenger’s life and the vehicle itself…
    • how should the vehicle respond in such circumstances?
    • what details should be mandatory for the suppliers to provide towards regulatory approval and in actual accident reporting?

Care Robots

Regulation in this area is almost non-existent – in marked contrast to the regulation around drones and military-use robots.  As with autonomous vehicles, public opinion has been won round by heart-warming case studies showing Alzheimer’s clients enjoying improved quality of life through regular interaction with their companion-bots, and autistic clients making educational and behavioural progress beyond that of their peers who do not have robotic tutors. But there are also reports that a minority of clients are very uneasy with robotic carers; some clients exhibit abusive behaviours towards the robotic carers and then continue those difficult behaviours with human carers.  Your Citizens’ Panel has been asked to grapple with the following ethical questions:

  • Should all care clients have a choice between robotic and human carers?  If so, how should that choice be determined, monitored and potentially revisited, especially in cases where the client is unable to express a clear preference for themselves? To what extent should objective measures of care-plan progress take precedence over the subjective judgements of clients and/or their attorneys/relatives?
  • Does society have an ethical duty to try to minimise moral injury to clients arising from their abusive behaviour towards robotic carers?  If so, does that duty exist for the sake of the client, for the sake of the robotic carer and/or for the sake of a potential human carer?
  • Is it ethical to programme a robotic carer to trigger emotions in, for example, Alzheimer’s or autistic clients, in ways that we might consider manipulative in other situations?
  • Are your thoughts on any of these questions different depending on whether the client receiving the care is very young, very old and/or very cognitively impaired?

My group of six “bright and intelligent people” (that’s how we described ourselves, anyway) chose the autonomous vehicles scenario, with the debate quickly developing into a discussion balancing liberty, human rights and freedom against safety, i.e. people’s lives. We decided that once driverless vehicles made up a certain percentage of vehicles on the road, a ban on human-controlled vehicles would be introduced. However, we could not agree on what that threshold should be!

The general conclusion from the two groups looking at the autonomous vehicles scenario was that the vehicle algorithms should be programmed to avoid injury to people outside the vehicle as, by then (2028), the safety measures in autonomous vehicles would be so advanced that no occupants would be injured. It was generally felt that, of course, autonomous vehicle suppliers must report not only all accidents but near misses as well.

There was also an interesting discussion of the ethics of care robots versus human carers, and the responsibilities of each, which identified a number of thorny issues in providing a service for vulnerable people.

So, it’s clear the ethics of AI is far more nuanced than any of us imagined. Our group barely scratched the surface of the topic in our discussion, and society as a whole needs to start learning now how to make such decisions in a more compassionate, respectful and responsible way.

The Directors of Ethical Reading would like to thank Ian Harris for his talk and contribution. Incidentally, he scored just shy of 100% in the event survey. This marginal shortfall in perfection will hopefully encourage Ian to visit us again in Reading. We certainly hope so! The session was, as all previous meetups have been, rated “Excellent” by those attending.

The next meetup is on Tuesday 27th March on the subject of “Bullying in the workplace” at the London Road Campus of the University of Reading.

The event was kindly hosted by the Oracle Shopping Centre and we wish to convey our thanks to them.