CIO upfront: Will people trust a ‘digital human’ the same way they trust people?

With AI advancing by the day, this is a question begging to be answered, writes Piers Smith of FaceMe

Trust is waning in this digital era

Trust is a broad term. You trust your friends and family with personal matters, and perhaps your neighbour to water your plants while you’re on holiday. You trust your bank to keep your money safe, your doctor to advise on your health and your taxi driver to get you somewhere safely.

There are plenty of different levels of trust, but really only two types: cognitive and affective.

Cognitive trust is based on the belief the other party has competence, reliability and dependability—the taxi driver, for instance. Affective trust is the belief an emotional bond will create interpersonal care between parties—your family or, to a lesser extent, your neighbour.

The word “parties”, rather than “people”, matters here, because with AI advancing by the day, there’s a question begging to be asked and answered: when Digital Humans are an ingrained part of both our personal and professional lives, will we trust them in the same ways, and to the same degrees, that we trust humans?

The answer is probably more nuanced than a simple yes or no, depending on the role that Digital Human is playing.

  • Would you trust a Digital Human to drive your car safely?

  • Would you trust it to advise you on your health?

  • Would you trust it to keep your money safe?

  • Would you trust it to look after your home?

  • Would you trust it to keep your personal information safe?

You likely have different responses to each. More to the point, will you trust a Digital Human to do these things more than you trust a living, breathing one?

There’s a very real case that yes, people will build this level of deep trust with AI-based Digital Humans. But there are also issues we need to be careful of as we step into this exciting new world.

The state of trust in 2019

To start, we need to take a quick look at trust in the modern day. Unfortunately, cognitive trust has taken a battering in the digital age.

There are many high-profile examples of eroded trust in the dependability and competence of even the largest global companies when it comes to keeping our most sensitive data safe.

In 2018 alone, hackers accessed the records of some 500 million Marriott hotel guests; another breach at Facebook leaked the personal information of more than 100 million users; and the biometric data of more than 1 billion Indian residents was maliciously accessed.

British Airways, Quora, Google, Cathay Pacific and thousands more big brands had some form of data breach last year, caused by anything from third parties with a grudge to unsecured APIs and sub-par technology.

This environment undermines people’s confidence about what kinds of information they can safely share, and their trust that it will be respected.

Yesterday’s golden child, all things digital, may be losing its lustre and cultivating a growing mistrust in technology. Cryptocurrencies like Ethereum are increasingly vulnerable to hacking, and social media to foreign interference. And statistics keep scaring us with the threat of robots taking our jobs.

In many ways, trust is waning in this digital era. And many businesses know it.

In PwC’s latest CEO Survey, 84 per cent of top business leaders around the world say AI is good for society, though 37 per cent also say their top priority with AI is ensuring it’s trustworthy. PwC calls trustworthiness in AI the “US$15.7 trillion question”, a reference to its estimate of what AI could add to the global economy by 2030 if we get it right. For context, the United States’ GDP is around US$20 trillion today.

There’s risk and an erosion of trust, but also opportunity. Ultimately, technology is here to stay; but so is the human propensity for connection. So, how can we create digital solutions that actually build trust and meaningful interactions for people?

The answer to this question could lie in the emerging category called Digital Humans. This technology, wrapped in a new kind of human skin, could be a game-changer in nurturing affective digital trust.

Wait, do those emotions really matter?

As humans, we bring emotions into all our interactions, even when those feelings are irrational or illogical. We shout at traffic lights or command the laptop to shut down quicker. We cry in movies, even though they’re not real life.

We are constantly filtering our perceived world through the sieve of our internal state of being.

If we create experiences that are emotionally negative, we hinder healthy personal growth. But if we create positive experiences that promote happiness and security, we incubate an attitude of affective trust. Unlike cognitive trust, which is a ‘heady’ confidence based on someone’s (or something’s) reliable track record, affective trust is more a product of the heart.

It comes out of authentic intimacy, empathy, a feeling that friendship is the end goal. The Chinese would call it guanxi, and it’s not built overnight. But, once established, it can catalyse powerful momentum and unlock profound new meaning, connections and customer brand value for industries and organisations.

How do Digital Humans help build trust?

So what does guanxi have to do with Digital Humans? A whole lot, in fact.

Digital Humans are designed with the best of humanity in mind. Ideally, every aspect of their design, both verbal and non-verbal, feeds into one cohesive experience that breeds affective trust.

They aim to enhance human rapport by being emotionally intuitive, responsive to social cues, visually approachable and highly anthropomorphic. And research is showing these kinds of embodied interactions drive engagement, which opens all kinds of possibilities for how systems can help people.

But unlike humans, Digital Humans can also offer 100 per cent anonymity. This is proving to be a major change for fields like mental health. Suddenly, there’s a ‘confession room’ where full disclosure is safe. The therapist isn’t shocked or judgmental in the slightest; in fact, the Digital Human seems to become more compassionate the more you share.

Take the University of Southern California’s (USC) brainchild, Ellie, as an example: an AI therapist designed to identify post-traumatic stress disorder (PTSD) in post-deployment military veterans. As one author notes, Ellie’s design deliberately stops short of being entirely realistic, so patients feel more comfortable sharing with an automated therapist that is impervious to offence, opinion or slips of the tongue.

And interestingly enough, the quality of honest disclosure is not compromised when patients know their story will eventually end up in the hands of a human professional who can properly treat their condition. As Gale Lucas, a social psychologist at USC’s Institute for Creative Technologies, puts it: “It’s about what’s happening in the moment—having a safe place to talk.”

A growing body of evidence suggests that the greater the risk of stigmatisation and emotional vulnerability involved, the more patients tend to prefer the digital therapist. In the case of major depression, Cardiac Coach’s study found that most of these patients opened up more easily in an anonymous digital space than they did in one-on-one human engagement.

The implication: digital therapy could be a key to unlocking deeper diagnostic work, in areas such as domestic violence or any kind of high-sensitivity case requiring extreme levels of affective trust.

With great power comes great responsibility

But some of these capabilities are double-edged swords, and we tend to imagine only their benefits.

The other side of anonymity is that we can deploy psychological and social triggers without human accountability. In some cases, this can lead to a very bad customer experience.

Take WestJet’s chatbot, which responded to a passenger’s cheerful review with a referral to a suicide prevention hotline.

It was a mistake, and one that could have led to a very different outcome had the customer not been willing to laugh it off. Mistakes like that can be profound once trust has formed, potentially leading people into situations that are not just a poor customer experience but genuinely dangerous. In a study for the USAF, a robot introduced itself to people as an expert and made explicit appeals to trust; it then tested whether those same people would follow it even when it made obvious mistakes, and even when the stakes were as high as escaping from a burning building.

Co-designing for embodied, meaningful interactions is not just about building systems in a better way. It’s about embracing principles that create the building blocks of trust. People will trust a chatbot like Nadia if they know where she comes from. They have to know the system behind the robot before they can trust the experience.

That means the design journey needs to be cracked open so customers can peer into the creative process and understand what drove your decision-making. Why did you choose this gender, this voice, this particular character? What capabilities does it offer, and what are the ethical risks involved? How did bias towards traditional gender roles inform the final product and character chosen? If design is meant to foster emotional connections and create safe spaces, it follows that designers themselves need to lead this process. Customer and client transparency begins with the provider.

In the gusts of global change, with uncertainty on the rise, digital trust could be the new global commodity. Like any commodity, it can be easily acquired and quickly lost. But those who can keep the human need for connection at the heart of new digital solutions, and who can build and maintain trust over time, will be able to leverage this into digital solutions that can help people in ways we’re only just starting to imagine.

Piers Smith is cognitive architect at FaceMe
