AI and other emerging technologies have the potential to create efficiencies, unlock valuable insights from data, transform customer experience and increase profitability. These opportunities are enticing, yet before diving in, it is essential that organisations fully understand the technology, how it will work in the context of their business and where the key risk areas lie. Three key factors should be considered first:
AI learns by analysing patterns and features in data. AI technology is therefore only as good as the data it has been trained on and it is important to ensure that the right data has been used. What constitutes the "right data" will depend in part on the context that the algorithm will be deployed in. Data sets used to train algorithms must be sufficient, fit for purpose and free from bias.
Sufficient: A data set will be "sufficient" if it contains enough examples to enable the algorithm to recognise, and appropriately deal with, exceptions, such that it does not react in unexpected ways and produce unreliable results.
Fit for purpose: A data set exhibiting consumer preferences in France may not be able to teach an algorithm how to accurately predict consumer preferences in New Zealand. In this sense, the data may not be "fit for purpose" and may cause an algorithm trained on it to produce unreliable results if applied outside of France, unless augmented with additional context-specific data.
Free from bias: Data sets can contain inherent biases. If a data set exhibits biases, algorithms trained on that data may learn, and perpetuate, that bias. It is therefore important to check, and normalise, data prior to exposing an algorithm to it. For example, a data set of historic employment data could be used to train a recruitment algorithm, but if the industry has historically suffered from a gender imbalance then the algorithm may learn to prefer the characteristics of the dominant gender, and perpetuate the imbalance.
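The recruitment example above can be made concrete. A minimal sketch of the kind of pre-training check involved, assuming hypothetical employment records with a "gender" field and an illustrative dominance threshold (neither is prescribed by any particular standard):

```python
from collections import Counter

def check_balance(records, attribute, threshold=0.8):
    """Flag a protected attribute whose most common value dominates the data set.

    `records` is a list of dicts; `attribute` names a field such as "gender".
    Returns (is_balanced, share_of_dominant_value).
    """
    counts = Counter(r[attribute] for r in records)
    dominant_share = max(counts.values()) / sum(counts.values())
    return dominant_share <= threshold, dominant_share

# Hypothetical historic employment records: 85 of one gender, 15 of another
historic = [{"gender": "male"}] * 85 + [{"gender": "female"}] * 15

balanced, share = check_balance(historic, "gender")
# balanced is False: 85% of examples share one value, so a recruitment
# algorithm trained on this data risks learning, and repeating, the imbalance
```

A real audit would go well beyond headline counts (for example, examining how outcomes in the data correlate with the protected attribute), but even a simple screen like this can surface a data set that needs rebalancing before training begins.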
These factors are particularly important when using AI outputs for decision-making purposes, as inaccurate or incomplete data sets could cause organisations to make ill-informed decisions. This could carry with it the risk of significant legal liability, including under the Human Rights Act 1993, where relying on the outputs of an algorithm leads to discriminatory decision-making.
An additional point to consider is that, if an organisation plans to use any data concerning individuals (particularly if it constitutes personal information), it is important to obtain those individuals' consent to process their information in this way. This has become increasingly important in the context of the European Union's new privacy law, the General Data Protection Regulation, which has raised the bar for obtaining consent to use the personal data of customers who ordinarily reside in the EU.
Prior to implementing AI technology into your business, it is important to diligence the algorithm itself to ensure that it is acting reliably, transparently and not in a biased fashion. Inaccurate or discriminatory outputs are key risks associated with AI technology and those issues can result not just from issues in the data sets that the algorithm was trained on, but also from the programming of the algorithm itself.
A popular theory, commonly termed AI's "white guy" problem, suggests that if a population of coders lacks diversity, the algorithms they write will inherit that group's unconscious biases, resulting in discriminatory outcomes. Whether or not that theory is correct, it is essential that organisations diligence algorithms thoroughly prior to implementation to ensure that they are producing reliable results, in a transparent way and without bias.
To assist with this, many businesses are now engaging external advisors to conduct algorithmic audits. Although algorithmic auditing is still a relatively new practice, companies such as Deloitte and Accenture do already provide services in this area.
Whether using consultants or in-house expertise, it is worth diligencing algorithms prior to implementation rather than looking to rely solely on contractual remedies later if things go wrong. This is particularly important when you are looking to procure technology from startups or growth stage organisations who may not have deep pockets.
Although AI technology can create vast efficiencies through automation and reducing the amount of human involvement required in a given process, it is important that we do not underestimate the continuing importance of the role of humans.
Having appropriate human checkpoints to scrutinise the technology, and its outputs, is important to ensure that the algorithm is performing as intended at all times, not acting in a biased fashion and still meeting organisational needs. This approach allows issues to be addressed at the earliest opportunity, before they result in significant cost to the business. Human checkpoints need to be built into the technology, so it is important to consider the transparency of the tools you are looking to employ and whether they give you the ability to understand and assess their ongoing operation.
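One common way such a checkpoint can be built in is to route an algorithm's output to a person whenever the system is not confident in it. The sketch below is illustrative only, and assumes a tool that exposes a confidence score alongside each output; the function name and threshold are hypothetical:

```python
def route_decision(prediction, confidence, threshold=0.9):
    """Act on an algorithm's output automatically only when its confidence
    is high; otherwise queue the case for human review.

    Returns a tuple of (route, prediction).
    """
    if confidence >= threshold:
        return ("automated", prediction)
    return ("human_review", prediction)

route_a, _ = route_decision("approve", 0.95)  # handled automatically
route_b, _ = route_decision("approve", 0.60)  # escalated to a person
```

The threshold becomes a governance lever: lowering it increases automation, raising it sends more decisions past human eyes, and reviewing the queue of escalated cases gives the organisation an ongoing window into how the algorithm is behaving.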
Each of the factors above is essential to mitigating the risks of implementing new AI technology and ensuring that the implementation effort really is worth the investment. These initiatives also help an organisation to develop, and maintain, a sound understanding of the technology, how it is operating and any issues associated with it.
Contributed by Russell McVeagh Technology Partner Liz Blythe and Solicitor Zoe Sims.