Artificial intelligence and the challenge of trust

As with any technology, AI can have great benefits, but we need to be thoughtful about how it is deployed, writes Tony Stewart of Xero.

We’re entering a stage that anthropologist and futurist Professor Genevieve Bell described at Xerocon as ‘the fourth wave of industrialisation’, an era of cyber-physical systems – drones, robots and self-driving cars powered by artificial intelligence (AI).

As with any technology, artificial intelligence can have great benefits, but we need to be thoughtful about how it is deployed. On one hand, AI can automate tedious processes and remove the legwork from manual tasks. On the other, it can be used in invasive and even manipulative ways.

When AI is clearly positioned as serving your interests, it can be really positive – I think we’d all agree that recognising fraudulent behaviour on your bank account is a good thing. But it can also be used to manipulate you – such as recognising your political affiliation and swaying you to vote, or not to vote, or even to vote for a particular person or party.

The context of what we’re using technology for is more important than ever before. This is particularly true for AI.

AI is great at processing data in ways that humans can’t: it works at speed and at enormous scale. For example, it can recognise patterns and anomalies across data, which is very useful for predicting the cash flow of a business or automating aspects of accounting.
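
To make that concrete, here’s a minimal sketch of one way anomalous transactions could be flagged – a robust median-based outlier test in Python. The function name and the sample data are invented for illustration; this is not how Xero’s systems actually work.

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts that sit far from the median, using the median
    absolute deviation (MAD) - a robust outlier test that one extreme
    value can't distort the way it distorts a mean/stdev test."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    # 1.4826 scales the MAD to be comparable with a standard
    # deviation for normally distributed data
    return [a for a in amounts if abs(a - med) / (1.4826 * mad) > threshold]

# A month of payments to one supplier, with one suspicious outlier
payments = [110.0, 120.0, 128.0, 131.0, 135.5, 140.0, 9600.0]
print(flag_anomalies(payments))  # -> [9600.0]
```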

AI has many different applications – a simple one is taking events and information in the real world and turning them into data. In a social context, an example is image recognition: you take a photo of your friends, upload it to social media, and your friends are recognised and tagged so you can find them again easily.

In a business context, we can use image recognition to capture the information on a receipt and turn it into data we can use to automate a task – such as auto-completing online expense claims.
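
As a rough sketch of that idea – with a made-up receipt and hypothetical field names, not Xero’s actual pipeline – once an OCR engine has turned the photo into text, even simple pattern matching can lift out the fields an expense claim needs:

```python
import re

# Hypothetical OCR output; in practice this text would come from an
# OCR engine run over the photo of the receipt.
ocr_text = """CITY CAFE
12 Queen Street
Date: 14/09/2018
Flat white        4.50
Bagel             6.00
TOTAL            10.50
"""

def extract_expense_fields(text):
    """Pull the fields an expense claim needs out of raw receipt text."""
    date = re.search(r"Date:\s*(\d{2}/\d{2}/\d{4})", text)
    total = re.search(r"TOTAL\s+(\d+\.\d{2})", text)
    return {
        "merchant": text.strip().splitlines()[0],  # assume the first line names the merchant
        "date": date.group(1) if date else None,
        "total": float(total.group(1)) if total else None,
    }

print(extract_expense_fields(ocr_text))
# -> {'merchant': 'CITY CAFE', 'date': '14/09/2018', 'total': 10.5}
```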

People tend to find AI either really useful – when it’s in a fairly safe context, like recognising a receipt – or quite creepy – when you take a picture of 100 people and the machine can identify everyone.

That’s where this idea of context and trust comes through, and these are concepts we should keep front of mind when developing AI. What is the context that we’re building this technology for, and will it make people feel good, or is it a bit too invasive?

AI can be a great servant in the small business arena. The big challenge is building trust in your system, and that trust has to be earned. So, how does a platform that uses AI go about earning the trust of potential users?

The system must leave me in control

Imagine if, instead of making suggestions about what I might like, Amazon simply bought them for me. For me to use that service, it would have to be incredibly accurate in predicting what I wanted, and I would have to believe that Amazon had my interests in mind. I don’t think I’d ever get to that level of trust. However, I am okay with the site making suggestions and leaving me to decide whether I’m going to follow them.

In the small business platform sense, AI functions often replace manual data entry of some sort. Information that a person enters is often subject to error – they’re in a hurry, they’re overworked, or they don’t think the accuracy of their input is overly important. AI can play a very useful role here, removing a burden from the person while still leaving them in the driving seat. To be effective, the AI has to be at least as accurate as manual entry (and in our experience, due to the factors mentioned earlier, it often is more accurate than a person) and give the person the chance to correct anything.
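
A minimal sketch of that suggest-then-confirm pattern (all names are hypothetical – this is not Xero’s API): the model pre-fills its best guess, but the person always gets the final say.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    field: str        # e.g. the account code on a bank transaction
    value: str        # what the model proposes
    confidence: float

def apply_with_review(suggestion, ask_user=input):
    """Offer the model's suggestion, but always let the person confirm
    or override it - the AI removes the legwork, the user stays in control."""
    answer = ask_user(
        f"{suggestion.field}: suggest '{suggestion.value}' "
        f"({suggestion.confidence:.0%} confident). "
        f"Press Enter to accept or type a correction: "
    )
    return answer.strip() or suggestion.value

# Accept by pressing Enter, or type the right value to override
final = apply_with_review(Suggestion("account_code", "420 - Office Expenses", 0.91))
print(final)
```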

Clarity and transparency

Another key way to gain and retain trust is to be transparent about how the technology works and how data is used. I talked about social licence in August, and this is a key element of transparency – ensuring users are aware of how their data will be used and have given consent for that use.

Similarly, transparency is required when it comes to the platform itself. It’s important that users have a level of understanding of how the platform, and the AI technology behind it, works. If a mistake is made, it’s essential that users can see the reasoning behind it.
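
One way to picture that – a sketch with invented names, not a description of Xero’s internals – is to have every suggestion carry the evidence it was based on, so a wrong suggestion can be inspected rather than just silently corrected:

```python
from dataclasses import dataclass

@dataclass
class ExplainedSuggestion:
    value: str
    reasons: list  # human-readable evidence behind the suggestion

def suggest_account_code(payee, history):
    """Suggest an account code from past coding behaviour, and keep
    the reasoning so the user can see why the suggestion was made."""
    matches = [h for h in history if h["payee"] == payee]
    if matches:
        code = matches[-1]["account_code"]
        return ExplainedSuggestion(
            code,
            [f"'{payee}' was coded to {code} in {len(matches)} earlier transactions"],
        )
    return ExplainedSuggestion("unknown", [f"no earlier transactions from '{payee}'"])

history = [{"payee": "City Cafe", "account_code": "420"},
           {"payee": "City Cafe", "account_code": "420"}]
s = suggest_account_code("City Cafe", history)
print(s.value, "because", s.reasons[0])
# -> 420 because 'City Cafe' was coded to 420 in 2 earlier transactions
```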

The interface

Like many others, I have had a few amusing experiences with autocorrect when texting. Usually the stakes are pretty low and it just makes for a funny story, but it does illustrate the challenge around the user interface when a system intervenes to make a suggestion. Autocorrect is by and large a very useful feature, but we tend to remember the mistakes a lot more – and we blame autocorrect rather than ourselves for accepting the correction.

Adopting usability standards and an attractive, easy-to-use interface plays a huge part in whether users will trust the platform. The online world is big, complicated and messy, and that’s why the user interface is so important. Anything that is difficult to use or understand will put people off very quickly. Conversely, anything that is simple to use and attractive to look at is more likely to be tried and eventually trusted by its users.

The need for strong usability and attractiveness in a customer-facing product is one reason why companies often start with internal-facing AI. The most important thing about an AI system that detects fraud is how effective it is, not whether it has an attractive interface.

AI is not the future of technology – it’s already ingrained in daily life, and its presence will continue to increase. Keeping the element of trust – and how to earn it – in mind when considering this technology is essential for a successful platform.

Tony Stewart is chief product, platform and data officer at Xero
