CIO upfront: She’ll be right? Government review of algorithms shows need for caution

A recent report on the widespread use of algorithms across New Zealand government agencies identifies the need for more transparency, accountability and consistency. But does it downplay privacy, bias and ethical risks? Frith Tweedie of EY writes about the challenges ahead.

The “Algorithm Assessment Report” describes algorithms as having an “essential role” in supporting the provision of government services.

While it recognises the “fresh challenges” introduced by new and evolving technology - cheerfully described as “opportunities” - the report seems keen to avoid the current media fascination with stories of killer robot armies taking our jobs and ruling the world. But does it end up downplaying the risks? And does it provide a sufficiently strong base for improvement as the New Zealand government moves towards the use of artificial intelligence (AI)?

The review was conducted by Statistics NZ and the Department of Internal Affairs to assess existing algorithms and their uses across 14 government agencies. It’s the first review of its kind, in New Zealand or internationally, with a stated objective of ensuring New Zealanders are informed about, and have confidence in, how the government uses algorithms.

Growing awareness of risks

Risks associated with algorithms and AI are increasingly well known. Stories of racist robots, privacy breaches and opaque proprietary algorithms are now widely reported. Closer to home, recent high-profile headlines involving the Ministry of Social Development, ACC and Immigration New Zealand are likely to have helped focus attention on these issues.

Many of the algorithm case studies discussed in the report carry real privacy, bias and discrimination risks. For example, the Police uses two algorithms to support decision-making by frontline staff assessing the risk of future domestic violence offending. The combined output is used to identify an "overall level of concern" for the safety of the people involved. But it appears that ethnicity may be a data variable in those tools, increasing the risk of bias and discrimination.

Inferences – such as assumptions and predictions about future behaviour – may also be privacy-invasive, counterintuitive and impossible to verify at the time of decision-making. The Department of Corrections uses an algorithm to calculate the probability that an individual offender will be re-convicted and re-imprisoned for new offending within five years - immediately bringing to mind the controversial COMPAS tool used in the US.

Ethnicity is not a variable in the Corrections algorithm – unlike the COMPAS tool – but age, gender, age at first offence, and number of court appearances and convictions all are.
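
The report doesn’t publish the Corrections model itself, so any reconstruction is guesswork. But risk tools of this kind are typically logistic regressions over a handful of variables, and a minimal sketch makes the mechanics - and the sensitivity to the chosen inputs - easy to see. Only the variable names below come from the report; the coefficients, intercept and functional form are invented for illustration.

```python
import math

# Illustrative weights only -- NOT the real Corrections model, whose
# coefficients and functional form are not published in the report.
WEIGHTS = {
    "age": -0.03,                  # assumed: predicted risk declines with age
    "is_male": 0.40,
    "age_at_first_offence": -0.05,
    "court_appearances": 0.08,
    "prior_convictions": 0.10,
}
INTERCEPT = -1.5                   # assumed baseline log-odds

def reimprisonment_probability(offender: dict) -> float:
    """Logistic-regression-style estimate of P(re-imprisoned within 5 years)."""
    log_odds = INTERCEPT + sum(w * offender[k] for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-log_odds))  # sigmoid maps log-odds to [0, 1]

print(reimprisonment_probability({
    "age": 24, "is_male": 1, "age_at_first_offence": 16,
    "court_appearances": 7, "prior_convictions": 5,
}))  # ~0.17 with these invented weights
```

Even in this toy form, the point about inferences stands: the output is a prediction about future behaviour that the person affected has no way of verifying at the time the decision is made.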

Inconsistent assurance efforts

The report found an inconsistent approach to assurance around the development of algorithms, with the type and extent of such assurance varying between agencies. It makes several recommendations but is light on detail about how to achieve them.

The report says government agencies should:

  • Develop formal policies to balance automated and human decision-making

  • Implement processes to ensure privacy, ethics, and human rights are considered during algorithm development and procurement

  • Explain in clear and simple terms how algorithms may affect people

  • Implement processes to regularly review algorithms for unfair, biased or discriminatory outcomes

  • Share best practice with support from external privacy, ethics, and data experts.

What’s missing?

Data quality and governance

The report found government agencies could do more to understand data limitations. It acknowledges that “without high quality data, and appropriate data management, the accuracy and predictive ability of any algorithm can be compromised”.

But the report doesn’t include any recommendations about what agencies should do to manage data quality issues – the problem commonly referred to as “garbage in, garbage out” (GIGO).

As Amazon recently found out, an algorithm is only as good as the data it learns from. The company scrapped its AI recruitment tool after the tool taught itself that male candidates were preferable. The problem arose because the system was trained on data submitted by applicants over a 10-year period – much of which came from men.

Comprehensive data governance strategies and processes that focus on data quality, labelling and visibility of potentially biased data will help manage GIGO and data quality risks. In addition, a set of tailored data and AI ethical principles can help organisations be clear on key privacy, data ethics and other considerations relevant to their work.
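
What might such a process look like in practice? A basic “garbage in” check is to profile the training data itself before any model is built - for example, comparing outcome rates across a sensitive attribute. The sketch below uses hypothetical records and column names (gender, hired), loosely echoing the Amazon example above; it is a starting point for a data governance checklist, not a complete fairness audit.

```python
from collections import Counter

# Hypothetical training records; the column names are assumptions made
# for illustration, loosely echoing the Amazon recruitment example.
records = [
    {"gender": "male", "hired": 1}, {"gender": "male", "hired": 1},
    {"gender": "male", "hired": 0}, {"gender": "male", "hired": 1},
    {"gender": "female", "hired": 1}, {"gender": "female", "hired": 0},
]

def positive_rate_by_group(rows, group_key, label_key):
    """Positive-label rate per group: a basic 'garbage in' screen that can
    surface historical skew before a model is trained on it."""
    totals, positives = Counter(), Counter()
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[label_key]
    return {group: positives[group] / totals[group] for group in totals}

print(positive_rate_by_group(records, "gender", "hired"))
# {'male': 0.75, 'female': 0.5} -- a skew a model would happily learn
```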

Black box risks

The report recommends greater transparency around algorithms. In an interview on Radio New Zealand, the Government Chief Data Steward advocated that agencies share their code with the public.

But how can such transparency be achieved when confidential commercial solutions have been deployed? The report found the most common approach to the development and procurement of algorithms by agencies involved contracting external expertise into an internal development process.

Commercial partners may understandably be reluctant to share the internal workings of their algorithms for fear of disclosing trade secrets and sources of competitive advantage. But that makes it hard to monitor and challenge decisions and other outputs.

Unfortunately, the report doesn’t include any suggestions on how to manage these challenges. Government agencies will need to approach their procurement processes carefully, building transparency requirements into contracts with third-party providers and thinking through issues like ownership of intellectual property. Algorithmic Impact Assessments and monitoring for unintended consequences become even more important.

What should be done?

The report acknowledges the review has been necessarily limited in scope. Subsequent phases are proposed, including a review of wider and different government uses of algorithms. But the report recognises that additional resources would be required - a future decision for the Government.

Overall, the report is a largely comprehensive look at government agencies’ use of algorithms in New Zealand. But its perky approach risks downplaying some of the ethical, privacy and bias challenges, and its recommendations lack detail.

Government decision makers would be well advised to explore the growing wealth of global insights on how to guide innovation in a way that builds and sustains trust. Leading tactics observed internationally to build algorithmic and AI ecosystems include:

  • Ethics board: Provide independent guidance on ethical considerations and capture perspectives that go beyond a purely technological focus. Advisors should be drawn from ethics, law, philosophy, privacy and science to provide diverse perspectives and insights on issues that may have been overlooked by the development team.

  • Design standards: Help define governance and accountability mechanisms and enable management to identify what is and is not acceptable.

  • Validation tools: Help ensure algorithms are performing as intended and producing accurate, fair and unbiased outcomes (see the sketch after this list).

  • Internal awareness training: Educate executives and developers on legal and ethical considerations and responsibilities, building trust in the use of algorithms and AI.

  • Independent audits: Test and validate algorithms and AI systems for ethical and design issues, and evaluate the governance model and controls across the entire data life cycle. Given that AI in particular is still in its infancy, this rigorous approach to testing is key to safeguard against unintended outcomes.
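
On the validation tools point above: one widely used check is demographic parity - comparing the rate at which a model produces a given output across groups. A minimal sketch, with entirely hypothetical predictions and group labels:

```python
def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates across
    groups. 0.0 means parity on this (single, limited) fairness metric."""
    counts = {}
    for pred, group in zip(predictions, groups):
        n, pos = counts.get(group, (0, 0))
        counts[group] = (n + 1, pos + pred)
    rates = {g: pos / n for g, (n, pos) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit data: 1 = "high concern" flag, with a group label per case.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "b", "b", "a", "b", "b", "a"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates, gap)  # {'a': 0.25, 'b': 0.75} 0.5 -- large enough to warrant review
```

Demographic parity is only one of several competing fairness definitions, so a real validation suite would report multiple metrics and leave the trade-offs to the ethics and governance layers described above.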

China, France, Canada, Germany and, most recently, Estonia have already taken this step, developing national AI strategies. EY's Keith Strier is a senior advisor on Estonia’s recently announced AI strategy and action plan. He says guidelines for how to deploy AI, and to what extent, will be critical for governments that want to stay relevant in the new global economy.

“Not having a national AI action plan will mean a failure to really coordinate focused resources, which, in a smaller country is more essential because there’s more limited resources.” 

New Zealand would do well to follow suit.

Frith Tweedie is Digital Law Leader at EY
