AI behaviour forensic, privacy and customer trust specialists are emerging career paths

Majority of large organisations will hire these specialists by 2023, reports Gartner


As users’ trust in artificial intelligence (AI) and machine learning (ML) solutions dips due to incidents of privacy breaches and data misuse, a new career path is emerging.

This is for behaviour forensic, privacy and customer trust specialists, who are tasked with reducing the brand and reputation risks that result from missteps in the use of new technologies such as AI and machine learning.

In fact, analyst firm Gartner predicts that by 2023, 75 per cent of large organisations will hire these specialists.

Gartner notes that bias based on race, gender, age or location, and bias based on a specific structure of data, have been long-standing risks in training AI models.

Moreover, opaque algorithms such as deep learning can incorporate many implicit, highly variable interactions into their predictions that can be difficult to interpret.

“New tools and skills are needed to help organisations identify these and other potential sources of bias, build more trust in using AI models, and reduce corporate brand and reputation risk,” says Jim Hare, research vice president at Gartner.

“More and more data and analytics leaders and chief data officers (CDOs) are hiring ML forensic and ethics investigators.”

Gartner notes that some sectors, such as finance and technology, are increasingly deploying combinations of AI governance and risk management tools and techniques to manage reputation and security risks.

In addition, organisations such as Facebook, Google, Bank of America, MassMutual and NASA are hiring or have already appointed AI behaviour forensic specialists who focus mainly on uncovering undesired bias in AI models before these are deployed.

These specialists validate models during the development phase and continue to monitor them once they are released into production, as unexpected bias can be introduced by divergence between training and real-world data.
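By way of illustration, one simple form this production monitoring can take is a distribution-drift check: comparing a feature as it was seen at training time against recent production data. The minimal sketch below uses a two-sample Kolmogorov-Smirnov test; the feature, data and alert threshold are hypothetical, not drawn from the Gartner report.

```python
# Minimal drift-check sketch: compare a feature's training distribution
# with recent production data using a two-sample Kolmogorov-Smirnov test.
# Feature names, data and the alert threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(train_values, production_values, alpha=0.01):
    """Return (drifted, statistic, p_value) for one feature."""
    statistic, p_value = ks_2samp(train_values, production_values)
    return p_value < alpha, statistic, p_value

# Synthetic data standing in for real feature values
rng = np.random.default_rng(0)
train_ages = rng.normal(40, 10, size=5_000)        # distribution at training time
production_ages = rng.normal(47, 12, size=1_000)   # distribution seen in production

drifted, stat, p = check_feature_drift(train_ages, production_ages)
print(f"drift={drifted}, KS statistic={stat:.3f}, p-value={p:.3g}")
```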

“While the number of organisations hiring ML forensic and ethics investigators remains small today, that number will accelerate in the next five years,” says Hare.

Consulting service providers, meanwhile, will launch new services to audit and certify that ML models are explainable and meet specific standards before they are moved into production.

Gartner says open-source and commercial tools specifically designed to help ML investigators identify and reduce bias are also emerging.

Some organisations have launched dedicated AI explainability tools to help their customers identify and fix bias in AI algorithms.

Commercial AI and ML platform providers are adding capabilities to automatically generate model explanations in natural language.

There are also open-source technologies such as Local Interpretable Model-Agnostic Explanations (LIME) that can look for unintended discrimination before it gets baked into models.
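As a rough illustration (not drawn from the article), the sketch below shows how LIME is commonly used in Python to explain a single prediction from a tabular classifier; the dataset, model and feature names are placeholders.

```python
# Minimal LIME sketch: explain one prediction of a tabular classifier.
# The dataset, model and feature names are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1_000, 4))
y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)
feature_names = ["age", "income", "tenure", "region_code"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["reject", "approve"],
    mode="classification",
)

# Explain a single row: which features pushed the prediction up or down?
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```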

These and other tools can help ML investigators examine the “data influence” of sensitive variables such as age, gender or race on other variables in a model, says Hare.

“They can measure how much of a correlation the variables have with each other to see whether they are skewing the model and its outcomes.”
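A very simple version of that kind of check might look like the sketch below, which screens how strongly an encoded sensitive attribute correlates with the other inputs and with a model’s output score; the column names and data are hypothetical.

```python
# Sketch of a "data influence" check: how strongly does a sensitive
# attribute correlate with other inputs and with the model's output?
# Column names and data are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2_000
gender_flag = rng.integers(0, 2, size=n)  # encoded sensitive attribute
df = pd.DataFrame({
    "gender_flag": gender_flag,
    "age": rng.integers(18, 75, size=n),
    "income": rng.normal(60_000, 15_000, size=n),
    # Synthetic score that (deliberately) leans on the sensitive attribute
    "model_score": 0.3 * gender_flag + rng.normal(0.5, 0.1, size=n),
})

# Pearson correlation of the sensitive attribute with every other column;
# a strong correlation with model_score suggests the model may be skewed.
correlations = df.corr()["gender_flag"].drop("gender_flag")
print(correlations.sort_values(key=abs, ascending=False))
```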

Hare says data and analytics leaders and CDOs are not immune to issues related to AI missteps and lack of governance.

“They must make ethics and governance part of AI initiatives and build a culture of responsible use, trust and transparency. Promoting diversity in AI teams, data and algorithms, and promoting people skills is a great start,” says Hare.

“Data and analytics leaders must also establish accountability for determining and implementing the levels of trust and transparency of data, algorithms and output for each use case.

“It is necessary that they include an assessment of AI explainability features when assessing analytics, business intelligence, data science and ML platforms,” Hare concludes.

NASA, Facebook and Google are hiring or have already appointed AI behaviour forensic specialists who primarily focus on uncovering undesired bias in AI models before these are deployed
