OK Computer? Ensuring safe and fair use of algorithms

The algorithms fundamental to artificial intelligence have huge potential to benefit business, government and society. But how can we ensure transparency and accountability when algorithms are used for important decision-making? Frith Tweedie of EY writes about steps to help manage algorithmic risk in both the public and private sectors.

Google search. Facebook’s News Feed. “You might also enjoy” suggestions on Amazon.

These are all driven by powerful computer algorithms. And those algorithms – sets of rules that tell a computer what to do and how to do it – are increasingly applied to many different aspects of our lives.

The potential benefits are vast. Algorithms are central to artificial intelligence (AI) and especially machine learning, a form of AI where algorithms learn through their experience of using data. AI can identify patterns in data, analyse and solve complex problems, automate tedious or dangerous work and increase productivity. New Zealand’s AI Forum estimates AI has the potential to increase GDP by up to $54 billion by 2035. Global AI-derived business value is forecast to reach $3.9 trillion in 2022.

But algorithms can also introduce significant risks. If not identified and properly managed, those risks can result in bias and discrimination, privacy breaches, operational risk, financial loss, reputation damage and overall loss of trust.

Algorithms are often used to process large amounts of personal information and make key decisions about credit, education, employment and health care. Around the world, stories of racist robots, gender-biased display ads, inadequate training leading to unsafe cancer diagnoses, killer robot armies, privacy breaches and opaque proprietary algorithms being used in the criminal justice system are increasingly common.

These risks are especially potent for machine learning and deep learning, a subset of machine learning that is much harder for humans to decipher. Even engineers who build systems that seem relatively simple - such as apps and websites that use deep learning to serve ads - cannot always fully explain their behaviour.

Government review

Those risks, combined with high-profile headlines involving the Ministry of Social Development, ACC and Immigration New Zealand, have prompted a review of government use of algorithms.

The review aims to ensure New Zealanders are informed and have confidence in how the government uses algorithms. It will initially focus on operational algorithms used in decisions directly impacting individuals or groups. Underpinning the analysis are the Principles for Safe and Effective use of Data and Analytics, a joint effort by Stats NZ and the Office of the Privacy Commissioner.

Representatives from Stats NZ report widespread but varied use of algorithms across government. Since the NZ government’s use of deep learning-style techniques is yet to approach US or Chinese levels, Stats NZ sees this as a “good time to get things right”. And while there are no immediate plans to extend the focus beyond government, they note that a “good tool is useful beyond government” and that such work may ultimately interest the private sector.

Private sector challenges

Commercial, operational and reputation risks can arise from bad data and poorly trained algorithms, often aggravated by inadequate data governance. Poor credit risk decisions, lost revenue, failed marketing campaigns and customer churn are all real challenges in a range of industries.

Questions persist as to whether New Zealand boards are equipped to understand and oversee the use of AI technologies. Both boards and senior management need to be ready to ask how to optimise the potential benefits of AI while identifying and managing its risks.

What can be done?

The following steps can help manage algorithmic risk in both the public and private sectors.

Clear objectives and principles: Tailored internal AI ethical principles and policies will help organisations be clear about what they want to achieve, whether they have the right data for those purposes and what the key data privacy, ethics and bias considerations might be.

Data quality and governance: Implementing data strategies focused on data availability, acquisition, labelling and governance will help manage data quality risks (see the first sketch after this list).

Monitor, test and audit: Processes for assessing and overseeing algorithmic data inputs, workings and outputs (such as adapting Privacy by Design principles or implementing Algorithmic Impact Assessments) enable transparency and risk identification. Independent internal and/or external third-party reviews should be used to provide objective audits (see the second sketch after this list).

Regulation: Emerging AI frameworks, policies and legislation not only set rules but also reflect societal expectations. Europe’s General Data Protection Regulation requires individuals to be given “meaningful information” about the logic used in automated decision-making. New Zealand legislators have chosen not to follow that lead in the Privacy Bill, with the omission of mandatory Privacy Impact Assessments and automated decision-making transparency arguably a missed opportunity.

Consider using AI to identify bias: AI itself can be used to identify bias and other risk factors. Microsoft is one of several organisations building tools to automatically identify bias in a range of different AI algorithms (see the third sketch below).
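
To make these steps concrete, here are three minimal Python sketches. First, data quality: the kind of basic pre-training checks a data strategy might mandate. The dataset and column names here are hypothetical, and real pipelines would check far more.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Basic checks before training: missing values, duplicates, label balance."""
    return {
        # Share of missing values per column
        "missing_rate": df.isna().mean().to_dict(),
        # Exact duplicate rows can silently inflate apparent accuracy
        "duplicate_rows": int(df.duplicated().sum()),
        # A heavily skewed label distribution is a common source of bias
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }

# Hypothetical loan-application dataset with an 'approved' label
df = pd.DataFrame({
    "income": [52000, 61000, None, 48000, 75000],
    "age": [34, 45, 29, 51, 38],
    "approved": [1, 1, 0, 1, 1],
})
print(data_quality_report(df, label_col="approved"))
```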
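
Second, monitoring: one simple and widely used approach is to compare the distribution of a model’s live scores against a baseline captured at deployment, flagging statistically significant drift for human review. This sketch assumes scores in the 0-1 range and uses a two-sample Kolmogorov-Smirnov test; the data and threshold are illustrative only.

```python
import numpy as np
from scipy.stats import ks_2samp

def score_drift(baseline_scores, live_scores, alpha: float = 0.01) -> bool:
    """Flag drift when live model scores no longer match the baseline
    distribution, using a two-sample Kolmogorov-Smirnov test."""
    stat, p_value = ks_2samp(baseline_scores, live_scores)
    return p_value < alpha  # True means "investigate"

rng = np.random.default_rng(42)
baseline = rng.beta(2, 5, size=5000)  # scores captured at deployment
live = rng.beta(2, 3, size=5000)      # this month's production scores
if score_drift(baseline, live):
    print("Score distribution has shifted - trigger a review")
```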
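
Third, bias detection: tools such as Microsoft’s open-source Fairlearn library automate checks of this kind. The sketch below hand-rolls the simplest version, computing per-group selection rates and a disparate impact ratio for a hypothetical binary credit model.

```python
import pandas as pd

def selection_rates(preds: pd.Series, group: pd.Series) -> pd.Series:
    """Approval rate per demographic group for a binary classifier."""
    return preds.groupby(group).mean()

# Hypothetical predictions from a credit model, with a sensitive attribute
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})
rates = selection_rates(df["approved"], df["group"])
print(rates)

# Disparate impact ratio: the US EEOC's "four-fifths rule" treats a ratio
# below 0.8 as a red flag worth investigating
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
```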

Overall, it’s clear the proliferation of powerful algorithms impacting all aspects of our lives will only increase. Organisations need to spot and manage potential harm using a risk-based approach – a movie recommendation model will have a vastly different risk profile to one used in a medical environment or to determine access to social support, for example.

We also need to recognise that AI is still a nascent and immature field. Algorithms, machine learning and the rest are still only tools programmed by humans. It is our responsibility to develop and use them to enhance humanity and not to amplify existing problems.

Frith Tweedie is Digital Law Leader at EY
