CIO upfront: Tackling one of the biggest issues today - the ethics of AI

We all have a role to play in this and need to be aware of what is happening in the wider ecosystem of technologies, writes Russell Berg of Quanton

If we want to understand the potential risk, we only have to look at the example of Cambridge Analytica, which is accused of using data to influence political outcomes in the United Kingdom and the United States.

But despite its importance, the ethics of AI is a conversation that hasn't evolved much at all in the past two years. We're still hearing the same discussions, with little action or forward movement. And that's a big issue.

The significant challenge we have is that technological capability, and the adoption and application of these technologies, are moving faster than we as a society can gain awareness, educate ourselves and have the necessary conversations.

The bigger issue behind that is that New Zealand doesn't have government policy to protect people, or frameworks for businesses to work within.

I recently travelled to San Francisco for the AI Summit, where ethics was a key topic. AI ethics is an important conversation in our work. 

We deal with automation, and if algorithms are producing probabilistic outcomes, then we're actioning those outcomes – we have a role to play in that, and need to be aware of what is happening in the wider ecosystem of technologies.

So, what were some of my key takeaways – for Kiwi business and government – on the topic of ethics and AI?

There are three core issues: Bias, privacy and transparency.

Bias: Getting prejudice out of the equation

One of the fundamental pieces of AI is the ability to use algorithms for predictive analytics and probability-based decision making. The risk is prejudice against a target group within the total population due to erroneous assumptions.

Bias isn’t a new concept. It’s existed forever – it’s a factor that has affected the validity and integrity of research results which inform human decision making.

The problem in the case of AI is that processes based on algorithmic decision making are infinitely scalable, and therefore have the potential to negatively impact far greater numbers of people in society.

There are five core types of bias: 

  1. Sample bias: When collected data doesn’t accurately represent the environment the programme is expected to run in. 

  2. Exclusion bias: Happens as a result of excluding some feature(s) from our dataset, usually under the umbrella of cleaning data. 

  3. Observer bias: The tendency for people annotating the data to see what they expect to see, instead of what is actually in front of them. 

  4. Prejudice bias: Happens as a result of cultural influences or stereotypes. 

  5. Measurement bias: Systematic value distortion that happens when there's an issue with the device used to observe or measure the data. 

Broadly speaking, all five come down to data quality issues, or to people issues arising from those responsible for the curation and/or annotation of the data.
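
To make the first of these concrete, here is a minimal Python sketch of a sample bias check, comparing group shares in a training set against known population shares. The function name, dataset and population figures are illustrative assumptions, not a production test:

    from collections import Counter

    def sample_bias_report(training_labels, population_shares):
        # Compare group shares in the training data against known
        # population shares to flag potential sample bias.
        counts = Counter(training_labels)
        total = sum(counts.values())
        return {
            group: {
                "observed": round(counts.get(group, 0) / total, 3),
                "expected": expected,
                "gap": round(counts.get(group, 0) / total - expected, 3),
            }
            for group, expected in population_shares.items()
        }

    # Illustrative: a dataset that over-represents urban applicants
    training = ["urban"] * 900 + ["rural"] * 100
    print(sample_bias_report(training, {"urban": 0.65, "rural": 0.35}))

A large gap between observed and expected shares is a signal to revisit how the data was collected before the model's outcomes are actioned.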

Three things for companies to consider

  1. Diversity. Diversity in the teams designing, developing and testing algorithms is widely regarded as a key step. And not just diversity in the demographics, cultural and religious backgrounds, but also the geographic backgrounds (that is, people from different countries) – especially for algorithms with a high ethical impact such as recruitment screening or healthcare provision. 

  2. Test, test, test. The old methods of testing aren’t enough with machine learning models and algorithms, which are continually evolving. Instead there has to be continuous validation. So, test, test, test.

  3. Have a safe place for when bias is discovered. Companies need a plan in place – a safe response which can be implemented if unintentional bias (with negative impact) is discovered. You have to be able to turn the algorithm off and revert to business rules and safe heuristic methods, without stopping the business (see the sketch below). 
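
As a rough illustration of that third point, here is a minimal Python sketch of a feature flag that switches decision making from a model back to simple, auditable business rules without stopping the business. The flag, functions and rules are hypothetical:

    USE_MODEL = True  # flip to False when unintended bias is discovered

    def score_with_model(applicant):
        # Placeholder for a probabilistic model's output
        return 0.72

    def score_with_rules(applicant):
        # Conservative, auditable business rules as the safe fallback
        score = 0.5
        if applicant.get("years_experience", 0) >= 3:
            score += 0.2
        if applicant.get("qualified", False):
            score += 0.2
        return min(score, 1.0)

    def score(applicant):
        return score_with_model(applicant) if USE_MODEL else score_with_rules(applicant)

    print(score({"years_experience": 5, "qualified": True}))

The point is that the fallback path exists, is tested in advance, and can be enabled instantly.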

Privacy: An exchange where the consumer must be considered

Just like bias, privacy isn’t a new issue. The same questions exist today that have existed for decades. 

  • What data is being collected?

  • Who has access to it? 

  • How will it be stored? 

  • How is it being protected?

  • What will it be used for?

Put simply, data is the fuel for AI and smart technologies. The more we use these technologies, and the more we scale them, the greater the demand for data will be – more will be consumed, and more will be created. 

In the traditional sense, one of the biggest issues is data breaches – personal data being released into public forums through mistake or malice. The potential magnitude of data breaches is now greater than ever. 

Another view is that the privacy of individuals is gradually being eroded over time through the ongoing utilisation of smart technology and the information being collected and aggregated.

Think about the technology you use daily. 

When using Google Maps, your location is constantly being collected. When you give permission for your microphone to be used in apps like Messenger or Alexa, information is collected constantly. The collection of data, however, is not necessarily linked to the moment of use; the access and collection are often carte blanche.

Within this conversation I have an issue, from a personal perspective: the inability of an individual to control the level of data they provide, relative to the minimum viable requirement for using a technology or accessing a service. 

Technology and service providers are overextending their power to gain the greatest possible access to our data, relative to the utilisation of their products. 

There’s the popularly held belief that Facebook is ‘listening’ to you if you have provided access to your microphone, which you must, to use video calling in Facebook Messenger. 

Not just while you’re using the app, mind, but always. Each piece, in and of itself, might not be significant. But they’re being aggregated, creating a much bigger, and more complete, picture of us than many of us realise. Likewise, with Google, you provide your location details everywhere you go, if you have given Google access to your location data. Which you must, to use Google Maps. Again, it’s not just while you’re using the app, but always. 

Sometimes that collection of data is for our benefit and the shared benefit of other users. But many of those reasons are capitalistic in nature, to the benefit of the service provider, and some are to the detriment of the user and other users – Cambridge Analytica being an example. 

So, what’s the solution?

There’s no simple solution. But this article in Forbes presented some basic principles: 

  • AI systems must be transparent.

  • An AI must have a ‘deeply rooted’ right to the information it is collecting.

  • Consumers must be able to opt out of the system.

  • The data collected and the purpose of the AI must be limited by design.

  • Data must be deleted upon consumer request.
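
As a loose illustration of the last three principles – opt-out, purpose limitation by design and deletion on request – here is a minimal Python sketch. The class, in-memory store and purpose names are assumptions for illustration; a real system would persist consent records and propagate deletion to every downstream copy of the data:

    class UserDataStore:
        def __init__(self):
            self._data = {}      # user_id -> list of (purpose, record)
            self._consent = {}   # user_id -> set of permitted purposes

        def grant_consent(self, user_id, purpose):
            self._consent.setdefault(user_id, set()).add(purpose)

        def collect(self, user_id, record, purpose):
            # Purpose limitation by design: refuse collection outside consent
            if purpose not in self._consent.get(user_id, set()):
                raise PermissionError(f"No consent for purpose: {purpose}")
            self._data.setdefault(user_id, []).append((purpose, record))

        def opt_out(self, user_id):
            # Deletion upon consumer request; consent is revoked with it
            self._data.pop(user_id, None)
            self._consent.pop(user_id, None)

    store = UserDataStore()
    store.grant_consent("u1", "navigation")
    store.collect("u1", {"lat": -36.85, "lon": 174.76}, "navigation")
    store.opt_out("u1")  # all records and consent for "u1" are gone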

I’m not saying they’re complete, but I think those are a good starting point and from them come two possibilities: The regional or global adoption of a set of principles enabling the technology sector to be self-governing, or legislation. 

While possible, the global adoption of principles by the technology and service sectors seems unlikely – there really isn’t any incentive for service providers to do this.  

Which leaves us with the second option, that of legislation. 

The trends we have talked about have not gone unnoticed, and we're already seeing the first stages of legislation in GDPR, with the United States following suit. We are yet to see regulation of this nature fully flow through to New Zealand. 

Ultimately, it's something legislation will be required to solve. One of the big challenges for legislation will be how we balance the demand for applications and utilisation with the protection of citizens and their rights. We have to acknowledge that it's an exchange: you get to use an app like Maps, but in return you surrender your data. It has to be balanced.

And one of the significant challenges we have is the vastly different view members of society take from a generational perspective. 

Gen Z don’t think twice about sharing their data and don’t necessarily understand the consequences of the information they are sharing. It’s a different story for Gen X. That makes privacy a hard concept to define and, in turn, to safeguard.

Personally, I would like to see legislation which mandates that data can only be collected based on the direct utilisation of a product, service or application, with the user providing additional consent for any further data collection. 

The concerning issue is that the conversation is not being given precedence in New Zealand

The most important thing the New Zealand government can do, short of moving fast with legislation, is to drive the importance of the conversation and ensure it is front of mind for New Zealanders – and that we fully understand the issues and are educated. 

The greatest issue is that this conversation is not front and centre for the general population. There is no pressing trigger to bring it to the surface and address it; it will continue to grow beneath the surface until something happens that compels society to engage in the conversation. 

If the events linked to Cambridge Analytica are not compelling enough to give the ethics and privacy debate some precedence, what kind of event will it take? 

Every government has a responsibility to step into this domain to protect citizens and should be proactively participating in this conversation.

Transparency – and your chance to be an industry leader

The final piece is transparency, and the openness, communication and accountability that it implies. 

For AI to be adopted it has to be trusted. While a level of trust exists implicitly, it will be further built – or destroyed – based on the level of transparency. 

Ensuring transparency can take many forms, from advising people how they will be measured or assessed by an algorithm, or how factors are weighted for the algorithm, to communicating how decisions have been made and the factors involved in determining an outcome. 
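
As a small sketch of what that could look like in practice, the Python below reports each factor's weight and contribution alongside the score, so the person affected can see how the outcome was reached. The factors and weights are illustrative assumptions:

    WEIGHTS = {"years_experience": 0.5, "relevant_degree": 0.3, "referral": 0.2}

    def explain_decision(applicant):
        # Report each factor's contribution, not just the final score
        contributions = {
            factor: weight * applicant.get(factor, 0)
            for factor, weight in WEIGHTS.items()
        }
        return {
            "score": round(sum(contributions.values()), 2),
            "weights": WEIGHTS,
            "contributions": contributions,
        }

    print(explain_decision({"years_experience": 1, "relevant_degree": 1, "referral": 0}))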

Companies shouldn’t see this as a burden. Instead, those that behave ethically will be doing themselves a great favour if they increase their transparency. 

Remember that algorithms may not be static; especially when combined with machine learning, they can be ever evolving and self-learning. Transparency is going to become one of the greatest tools with respect to bias – identifying where bias comes into models, as people alert companies to bias when it appears. 

Businesses which are prepared to really own that space are going to start coming to the forefront and setting the standard – something that can only be of benefit to them.

Until we get to the point where we have clear legislation and frameworks, the only thing we really have is transparency. I think transparency is one of the greatest areas where legislation has a role to play in relation to AI and smart technologies. 

There are three action points for New Zealand businesses:

  1. Ethics and privacy committee. If you don’t have one, get one. It’s more important than ever before. 

  2. Educate them. Once you have an ethics and privacy committee, invest in educating its members about AI and smart technologies – particularly the ones you're using, or planning to use, in your business. 

  3. Empower them. Give them a greater mandate and greater scope. Build transparency into your products and services. And consider dedicating full-time capability to the issue of ethics and privacy. In most cases, members of current ethics and privacy committees have other jobs in the business as well, but we're at the point where the demands of ethics mean businesses need to dedicate full-time capability to it. 

The issue of ethics in AI is complex, rapidly evolving – and one we can’t shy away from. It’s time to push the topic front and centre not just for a select few, but for all of New Zealand society in order to make real progress.

Russell Berg is general manager for customer engagement at Quanton
