
Organisations prepare for more ethical and responsible use of AI: study

Those that have been more successful with the technology recognise AI oversight is not optional


A new study indicates more business leaders are taking steps to ensure responsible use of artificial intelligence (AI) within their organisations.

Most AI adopters – which now account for 72 per cent of organisations globally – hold ethics training for their technologists (70 per cent) and have ethics committees to review the use of AI (63 per cent).

AI leaders – organisations rating their deployment of AI “successful” or “highly successful” – also take the lead on responsible AI efforts, says the study.

Almost all (92 per cent) train their technologists in ethics compared to 48 per cent of other AI adopters.

The findings are based on a global survey of 305 business leaders, more than half of them chief information officers, chief technology officers and chief analytics officers.

The study was commissioned by SAS, Accenture Applied Intelligence and Intel, and conducted by Forbes Insights in July 2018.

As AI capabilities race ahead, government leaders, business leaders, academics and many others are more interested than ever in the ethics of AI as a practical matter, according to the study, AI Momentum, Maturity and Models for Success.

Commenting on the results of the study, Melvin Greer, chief data scientist Americas at Intel Corporation, says, “AI has a real impact on our lives already, highlighting the importance of having a strong ethical framework surrounding its use. That’s where groups that are pressing on ethical issues can play an important role as we move ahead.”

AI ethics codes

Organisations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people, says Rumman Chowdhury, responsible AI lead at Accenture Applied Intelligence.

“These are positive steps; however, organisations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm’.


“They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society,” says Chowdhury.

“Meanwhile, many data scientists are hungry for something more specific and technical. That’s what we need to be moving toward.”

Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognise that oversight is not optional for these technologies.

Nearly three-quarters (74 per cent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 per cent). Additionally, 43 per cent of AI leaders shared that their organisation has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 per cent).

“The ability to understand how AI makes decisions builds trust and enables effective human oversight,” says Yinyin Liu, head of data science for Intel AI Products Group.

“For developers and customers deploying AI, algorithm transparency and accountability, as well as having AI systems signal that they are not human, will go a long way toward developing the trust needed for widespread adoption.”

The survey notes that companies are taking steps toward ethical AI and ensuring AI oversight because they know that faulty AI output can cause repercussions.

Of the organisations that have either already deployed AI or are planning to do so, 60 per cent stated that they are concerned about the impact of AI-driven decisions on customer engagement – for example, that AI-driven actions will not show enough empathy, or that customers will trust them less.


Many believe that despite these positive signs, oversight processes have a long way to go before they catch up with advances in AI technology.

“Although we are still in the very early phases of AI, the technology is already well ahead of the marketplace when it comes to the processes and procedures organisations have in place to provide oversight,” says Oliver Schabenberger, chief operating officer and chief technology officer at SAS.  

“For example, we would be seeing more widespread use of driverless cars if government oversight and automaker-level governance capabilities were able to keep up with the technology itself. The technical capabilities are ahead of our ability to cope with the technology.”

He notes AI leaders also recognise the strong connection between analytics and their AI success.

Of those, 79 per cent report that analytics plays a major or central role in their organisation’s AI efforts compared to only 14 per cent of those who have not yet benefited from their use of AI.

“Those who have deployed AI recognise that success in AI is success in analytics,” says Schabenberger. “For them, analytics has achieved a central role in AI.”

The trust factor

The report points out that the survey results suggest, at a minimum, that business and government leaders trust AI.

But many are rightly concerned about a lack of trust in AI among their employees, as seen in workers' worries about the technology's impact on their jobs.

“There are even signs that some of these leaders harbour pockets of doubt about the extent to which they can trust AI outputs,” the report states.

“Trust will play perhaps a larger role in the evolution of AI than it has for any technology in recent memory. That’s because of its self-perpetuating automation – when it is working as designed, AI is using data to ‘learn’ and evolve on its own, often leading to outputs that few could have anticipated.

“It’s far easier for humans to trust technologies over which they exercise full, constant control. AI, by contrast, demands a leap of faith,” it states.

“Once we learn how to balance our desire to control AI with the flexibility it requires to operate at its highest level, AI will engender real trust. Getting there will require education, transparency, clear ethical guidelines – and patience.”

“As with any new technology that’s quickly gaining traction, there will be challenges to overcome,” concludes Ross Gagnon, research director at Forbes Insights.

“But the opportunities AI presents are seemingly endless, from operational efficiencies to increased productivity and revenue. The question executives should be asking themselves is not whether to deploy AI, but how quickly?”
