
Ethics and liability - mistakes by machines

The increasing prevalence of AI tools designed to target and influence consumer behaviour and assist organisations with decision-making has sparked widespread international debate and calls for responsible development and integration of AI technologies, write Liz Blythe, Michael Taylor and Zoe Sims of Russell McVeagh

The increasing prevalence of AI tools designed to target and influence consumer behaviour and assist organisations with decision-making has sparked widespread international debate and calls for responsible development and integration of AI technologies, particularly in applications that can directly impact individuals. It has also raised questions about the attribution of legal responsibility when things go wrong.

Why are ethics relevant?   

AI technology is increasingly being used to make decisions which directly affect human behaviour and, in some cases, human rights. While you may not care why an algorithm has chosen to send you a particular shopping recommendation in a pop-up while you shop online, you might be more inclined to worry if new insurance technology uses previously unobtainable insights about your health and genetic make-up to increase your health or life insurance premiums, or to restrict your access to these products entirely.

One application that has sparked particularly fierce debate is the use of AI technology to make decisions that affect people's rights and freedoms. AI systems that purport to be able to determine a person's risk profile and propensity to reoffend are already being used in some jurisdictions to inform judicial decisions regarding bail, parole and sentencing. However, there has been significant controversy around whether these algorithms are acting in a biased fashion and, more fundamentally, whether this is an appropriate context in which to employ AI-based decision-making.  

Many have called for clear guidelines restricting the use of AI tools that influence behaviour or make automated decisions affecting people's access to credit, insurance and other products, or their fundamental rights and freedoms, and requiring a higher standard of transparency, accountability and reliability where any such implementations are permitted.

These calls for change have not gone unnoticed and legislators, regulators and commentators globally are increasingly giving serious consideration to these issues. In 2018, the UK House of Lords Select Committee on AI recommended a voluntary AI code of conduct, and proposed five core principles, emphasising the importance of human rights in the design and development of AI.  

Europe's General Data Protection Regulation addresses automated decision-making that produces legal effects concerning, or otherwise significantly affects, data subjects, and imposes restrictions on this type of processing. We expect this to be just the start of regulation in this area.

But what happens when things go wrong?

Even if we do set boundaries requiring responsible design, development and operation of AI technologies, inevitably mistakes will be made. When things go wrong, people will look to recover their loss, but can we rely on our traditional legal concepts regarding attribution of liability to do so?

The concept of causation is at the heart of attributing legal responsibility. In contract and tort law, to recover compensation, a plaintiff must show that the defendant's breach caused a loss. This concept is also enshrined in legislation. Take self-driving cars, for example: in the event of an accident, who is at fault if no human was controlling the vehicles involved? The answer is not straightforward.

AI presents a new challenge when using traditional legal concepts to determine fault (and, in turn, compensation). The decisions made by AI-powered machines may not be a direct consequence of, or attributable to, whatever human involvement there has been (for example, the initial programming or the application of software updates).

For example, AI technology learns from the data it has access to, which may be wholly or partly outside the programmer's control, and, where something goes wrong, it may be unclear whether the programmer who initially developed the algorithm is legally responsible. In fact, once an algorithm powered by machine learning has been running for some time, even the programmer who initially developed it may not be able to explain why it makes the decisions it does. Microsoft's AI Twitterbot "Tay" is an example of this.

Tay was shut down just 16 hours after its launch after making offensive remarks it had learned from its interactions with human Twitter users. Who should be responsible in these circumstances? The Twitter users who trolled the algorithm? Microsoft, whose name was attached to the project? Or perhaps one of its suppliers, who assisted in building it and did not employ controls to stop this sort of thing happening?

This is a tough question to answer and demonstrates the difficulty of using a human-centred concept of responsibility to attribute liability in the case of a self-teaching, ever-learning machine.

It is not clear how legislators and regulators will deal with this problem and New Zealand courts have not yet had to grapple with the issue. Possible legal solutions could include broadening statutory no-fault systems, such as those adopted under the Accident Compensation Act 2001, or broadening the scope of vicarious liability to find one of the human actors involved (however remote their involvement may have been in the relevant incident) liable for the purposes of compensation.  

It is only a matter of time before concerns about the ethical use of AI and the attribution of legal liability come to a head. The law is not currently designed to deal with these issues in a clear-cut way, but changes in this area are likely to be on the horizon. Early adopters should keep this in mind when implementing new AI technology: it will be important to ensure that unwanted liability does not attach in unexpected ways, and that implementations err on the right side of the ethical debate, so that your technology does not later become prohibited or require costly re-engineering to comply with those inevitable changes in law.

Liz Blythe is technology partner, Michael Taylor is senior associate and Zoe Sims is solicitor at Russell McVeagh