The Worshipful Company of Information Technologists (WCIT) held a debate at the IT Hall on 2 April, which drew an audience of about 50.

The event aimed to cut through the hype on artificial intelligence (AI) ethics and provide thought leadership to the six charities that have so far joined the AI/ML Learning Exchange (launched late last year as a result of the WCIT Charity Award). In a battle of the titans, WCIT Professor Richard Harvey, representing academia, debated the motion “There is no such thing as AI Ethics – just Ethics” with Chris Rees (Chairman of the WCIT Ethics Panel and BCS President), representing commerce.

Richard Harvey and Chris Rees jointly commented:
“This debate is not a battle between blow-hard politicians. We both hope to present cases that are reasonable; our genuine desire is that, at the end of the debate, the audience will have heard some powerful and convincing truths.”

Professor Richard Harvey argued that no new ethical regulation is needed for AI, and that imposing new AI regulatory burdens on countries or sectors would be counter-productive. He cited several examples in which issues that had prompted calls for AI regulation were in fact already covered by existing systems.

The use of data by Facebook and Google has prompted calls for regulation, and that data analysis does involve AI. However, Professor Harvey argued that the problem was sharp (unethical) commercial practice at some companies, rather than anything specific to AI.

He thought that in the case of the introduction of autonomous vehicles, Jeremy Bentham's principle of the “greatest good over the least pain” applies, and that the real problems lie in attributing responsibility for failures rather than in the ethics of the artificial intelligence itself. Ethical issues do arise in some circumstances, such as the use of AI in drug trials, but these should be covered by the existing regulation of drug trials, for instance the rules governing trials involving vulnerable people.

He concluded by pointing out that countries have different rules in many areas, such as the age of consent, and that seeking a global code of AI ethics would therefore be counter-productive.

Chris Rees started by emphasising that he was not tackling human-level AI, known as artificial general intelligence (AGI), but rather the current level of AI, which is confined to specific roles or tasks. The distinction he drew was: does AI pose genuinely new ethical issues?

He suggested that in six different domains it does.

The first was bias in AI-driven face recognition and legal systems, where the persistence and replication of bias introduces a qualitative difference from human systems. The second was the impact on jobs, both destroying jobs and creating new ones; the House of Lords had proposed a code to cover the effects of introducing AI into the workplace.

The third was harmlessness: one instance was the use of AI-guided drones without human intervention, where an ethical problem arises when a drone kills innocent civilians. The fourth was responsibility: he suggested that AI-specific ethics and regulation are needed to cover assigning responsibility when an AI system goes wrong.

The fifth was the use of AI avatars impersonating human beings; a test case was an avatar booking a hair appointment or a restaurant table, which an audience posed with this scenario judged unethical. And finally, explainability: since deep learning creates systems that make decisions which cannot be explained, AI-specific ethical codes are needed to protect those affected, for instance in the case of a refused mortgage application.

The audience discussion focused on the hard case of the unintended consequences of AI: were these qualitatively different from the unintended consequences of the use, or failure, of other types of engineering or software systems? It was felt that codes of ethics are often sector-based, as for lawyers, accountants, medical trials and so on, and that these should be reviewed and, if necessary, made more detailed to cover AI applications.