How artificial intelligence can eliminate bias in hiring

AI and machine learning can help identify diverse candidates, improve the hiring pipeline and eliminate unconscious bias.

Diversity (or lack of it) is still a major challenge for tech companies. AI is poised to revolutionize the world of work in general, and some organizations are already leveraging the technology to root out bias, better identify and screen candidates, and help close the diversity gap.

That starts with understanding the nature of bias, and acknowledging that unconscious bias is a major problem, says Kevin Mulcahy, an analyst with Future Workplace and co-author of The Future Workplace Experience: 10 Rules for Mastering Disruption in Recruiting and Engaging Employees. AI and machine learning can act as an objective observer to screen for bias patterns, Mulcahy says.

"The challenge with unconscious bias is that, by definition, it is unconscious, so it takes a third-party, such as AI, to recognize those occurrences and point out any perceived patterns of bias. AI-enabled analysis of communication patterns about the senders or receivers -- like gender or age -- can be used to screen for bias patterns and present the pattern analysis back to the originators," Mulcahy says.

[ Related story: How gender-neutral job postings decrease time-to-hire ]

Setting strategy

Once bias is identified, though, how can companies address it? The first steps are to identify that bias is an issue and set strategies at all levels of the organization to address and change it, Mulcahy says.

"Companies deal with unconscious bias in various ways, beginning with recognizing that it exists, stating a desire to change and setting strategies and goals to accomplish and track changes. One key to addressing observable bias is to tie examples of bias to their promotions and bonus payments. Simultaneously, you can explicitly demonstrate that examples of inclusion will help promotion and earn bonuses," Mulcahy says.

It's also important to measure and track improvement as well as negative patterns of behavior, Mulcahy says. AI interfaces can be a great source of objective measurements of patterns of behavior between workers. But it can't all be left to technology, Mulcahy says.

"You have to create a culture of 'If you see something, say something' that goes as high up as executive leadership. Do not expect lower-ranking people to call out examples of bias if senior people have not provided permission and led by example. You can still use AI to help; machines know no hierarchy, and can provide analytical reports back to workers of all levels, but there has to be a human element," he says.

Accountability, too, is key in changing patterns of bias. Everyone in an organization must take some responsibility for sourcing and promoting individuals who are different from themselves, and AI can help by analyzing patterns in new hires and promotions to show whether workers are recommending candidates measurably the same as, or measurably different from, themselves, Mulcahy says.

"This type of exception reporting can objectively hold managers more accountable for explaining, in writing, the justification for any hire that is similar in age, gender and race to themselves. Shifting the burden of detecting patterns of sameness to a machine, while simultaneously placing the burden on the hiring managers for explaining sameness quickly exposes weak arguments for continued sameness and sends a more powerful message that diversity is valued," he says.

[ Related story: Battling gender bias in IT ]

Bias-free solutions

Of course, you have to make sure that any AI or machine-learning products used in this way are themselves free of bias -- or as free as possible, says Aman Alexander, product management director at CEB Sunstone Analytics, an algorithmic assessment platform for recruiting and hiring.

"AI/machine learning can help close the diversity gap, as long as it is not susceptible to human bias. For example, recruiting contact center employees could provide AI/machine learning models with the historical application forms of hired contact center employees with high customer satisfaction scores. This allows the model to pick up on the subtle application attributes/traits and not be impacted by on-the-job, human biases," Alexander says.

By simply using an automated, objective process like this, it's possible to drastically reduce the scope for human bias. If, for example, fairly trained AI/machine learning tools are used to whittle an applicant pool down from 100 applicants to the final 10 interviewees, that means that 90 percent of the pool reduction would be done in a process immune to any human biases, Alexander explains.
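
As a rough illustration of that funnel -- a sketch of the general approach, not CEB Sunstone's actual implementation -- the code below trains a model on hypothetical historical application data labeled by customer satisfaction outcomes, then uses it to cut a pool of 100 applicants down to 10. The features, data and shortlist size are invented.

```python
# Toy sketch of outcome-based screening: train on historical application
# attributes of past hires, labeled by an objective on-the-job outcome
# (high customer satisfaction), then shortlist a new pool of 100 down to 10.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical applications: rows are past hires, columns are application
# attributes (e.g. years of experience, typing-test score, prior tenure).
# Labels: 1 = high CSAT on the job, 0 = not. All synthetic.
X_hist = rng.normal(size=(500, 3))
y_hist = (X_hist @ np.array([0.8, 0.5, -0.3]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X_hist, y_hist)

# New applicant pool of 100: score and keep the top 10 for interviews,
# so 90 percent of the pool reduction happens before any human sees a name.
X_new = rng.normal(size=(100, 3))
scores = model.predict_proba(X_new)[:, 1]
shortlist = np.argsort(scores)[::-1][:10]
print("Interview these applicant indices:", shortlist)
```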

[ Related story: 5 ways to reduce bias in your hiring practices ]

Unintentional outcomes

There are, however, some unintentional adverse impacts you must screen for and eliminate, Alexander says. "Say an AI/machine learning model could accurately pick up on a statistical relationship between college football quarterbacks and high performance in sales roles. That relationship could be predicated on entirely meritocratic causative factors -- the quarterback's role demands a combination of mental skill, decision making and influencing skills to lead a team, which translates well into high-pressure sales roles. Of course, an unintended consequence of that is excluding female applicants, by virtue of the fact that only men can be college football quarterbacks. The algorithm would then be favoring a trait which it would never find in women, thus disadvantaging them," he says.

The great thing about technology like this, though, is that unlike human biases, algorithmic biases are much easier to audit and remove, he says. Algorithms can be tested by using them to score thousands of applicants and then examining the demographic and gender breakdown of the results.

"If, on average, across all the traits the algorithm values it is disproportionately excluding women or any other group of people, the algorithm can be identified and corrected iteratively and seamlessly," Alexander says.

In addition to the general potential for AI to identify unconscious bias, some technology companies are leveraging AI and machine learning to address specific manifestations of bias: gendered language in job descriptions, a lack of diverse candidates at the top of the funnel, and overlooked candidates already sitting in a company's applicant tracking system (ATS).

Watch your language

Companies often blame their lack of diversity on a shortage of qualified candidates. Using AI and machine learning to remove gender-biased language from job descriptions can diversify the applicant pool and improve the chances of hiring female technologists, says Kieran Snyder, CEO and co-founder of Textio, a machine learning platform that analyzes language patterns.

The concept of "gendered" job listings refers to the use of male- or female-skewing terms within job descriptions. The idea has been gaining recognition since research published by the American Psychological Association illustrated how some seemingly innocuous words can signal a gender bias in job ads, says Snyder, and that can affect the number of women who apply for open roles as well as the number who are eventually hired.

"It stands to reason that if you reach a wider pool of applicants, , you're much more likely to have more applicants which, in turn, improves diversity and speeds up the recruiting and hiring process," Snyder says.

Textio's predictive index uses data from more than 45 million job posts and combines that data with hiring outcomes to gauge the hiring performance and the gender tone of job postings, says Snyder. The tool is designed to make a quantitative judgment about the effectiveness of the language used in job postings and help companies tweak them to attract better, more diverse candidates, Snyder says.
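
A toy version of the underlying idea -- far simpler than Textio's actual model -- might simply count masculine- and feminine-coded terms in a posting. The word lists below are a small illustrative sample of the kind of terms that research identified, not Textio's vocabulary.

```python
# Toy gender-tone check for a job posting. Word lists are a small
# illustrative sample, not a production lexicon.
import re

MASCULINE = {"competitive", "dominant", "aggressive", "ambitious", "decisive", "ninja"}
FEMININE = {"collaborative", "supportive", "committed", "interpersonal", "cooperative"}

def gender_tone(posting: str) -> str:
    """Classify a posting's tone by counting gender-coded words."""
    words = re.findall(r"[a-z]+", posting.lower())
    m = sum(w in MASCULINE for w in words)
    f = sum(w in FEMININE for w in words)
    if m == f:
        return "neutral"
    return f"masculine-leaning ({m} vs {f})" if m > f else f"feminine-leaning ({f} vs {m})"

print(gender_tone("We want a competitive, ambitious coding ninja."))
print(gender_tone("Join a collaborative, supportive team committed to great software."))
```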

Look internally first

AI and machine learning can also help uncover candidates you may already have stored in your ATS to improve your pipeline and speed up search, says Steve Goodman, CEO and founder of talent rediscovery software company Restless Bandit.

"We're an AI and data analytics company, and our tools hook into your ATS or your HR systems, look at the applications and résumés you've already got and then compare those to open job descriptions." Restless Bandit's product removes unconscious bias by eliminating names, geographical information and any other information about a candidate that could trigger bias, unconscious or otherwise, he says.

Companies are sitting on a treasure trove of applications, résumés and potential talent, and yet they continue to pay recruiters to search far and wide for candidates, Goodman says. Not only does Restless Bandit's tool help make hiring more efficient, it improves an employer's brand in the eyes of potential candidates, he says.

"How many times have you heard, 'Thanks, we'll keep your resume on file' and then never hear from that company again? It's a huge problem. Your application just disappears into the abyss. But imagine how the candidate feels about you if you can call them and say, 'let's look at these other jobs you're qualified for, even if they aren't what you applied to' -- it's showing you really pay attention to them," Goodman says.

Recruiting relevance

Finally, AI and machine learning are being applied to perform more targeted, relevant candidate searches, says Shon Burton, CEO of HiringSolved, a company that searches the web for publicly available candidate data and compiles it into candidate profiles.

"The resume is dead. Ten years ago, we would get all the information about a candidate from a resume; now, we get it from the web, and it's constantly updated -- it's a high-frequency data stream. What we do is look at all publicly available data across the web and our software looks for certain relevance layers to find talent," Burton says.

It's different from a Boolean search filter, which requires much more manual input and effort and is based on eliminating "wrong" answers from a data set. HiringSolved's algorithms instead rank potential candidates based on information from their public profiles and how relevant that information is to the search parameters of a specific job description, he says.
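
The contrast is easy to see in a toy example -- not HiringSolved's actual algorithm -- that puts a Boolean filter side by side with a simple TF-IDF relevance ranking over invented profiles.

```python
# Toy contrast between Boolean filtering (eliminating non-matches) and
# relevance ranking (scoring every profile by degree of fit).
# Profiles and the job description are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job = "linux systems administrator with python automation and ansible"
profiles = [
    "senior linux sysadmin, ansible and bash automation",
    "windows desktop support technician",
    "python developer with some linux server experience",
]

# Boolean approach: anyone missing a required keyword is filtered out.
boolean_hits = [p for p in profiles if "linux" in p and "ansible" in p]

# Relevance approach: rank every profile by similarity to the job text,
# opting people in by degree of fit rather than filtering them out.
vec = TfidfVectorizer().fit([job] + profiles)
sims = cosine_similarity(vec.transform([job]), vec.transform(profiles))[0]
ranked = sorted(zip(sims, profiles), reverse=True)

print("Boolean survivors:", boolean_hits)
for score, p in ranked:
    print(f"{score:.2f}  {p}")
```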

"If you are going out and trying to identify candidates, you have a massive-scale data problem right off the bat -- you're looking at something like a billion social profiles, and you have to determine what's relevant, what's not, what is information about the same person, what's out of date, and make inferences about that data," Burton says. Using AI and machine learning to search speeds up the process and makes it more efficient, he says, and makes it easier to find diverse candidates at top of the hiring funnel.

If a company's looking for a female Linux systems administrator, for example, the software can search for and screen all potential female candidates with the relevant experience, credentials and education and put them in front of a hiring manager or recruiter. It goes beyond job titles and past work experience, too, and can focus on the specific skills that could make someone a great sysadmin with Linux experience -- opening up the potential talent pool because you're opting people in instead of filtering people out, Burton says.

"What is then up to the humans involved is to make inferences. How old is the data? How reliable is it? That kind of thing. But the technology can greatly speed up the process," he says.

Future potential

One of the largest advantages AI/machine learning tools have over human-only processes is that they can be empirically trained and validated, says CEB Sunstone Analytics' Alexander. Sometimes, the traits companies thought mattered most in quality hires turn out to be inconsequential, while traits considered "negative" actually make candidates more likely to succeed. With AI and machine learning technology, companies can rely solely on actual statistical relationships for these decisions, he says.

"The human mind is not designed for the type of pattern recognition that can be most helpful in making hiring decisions. For example, most people would be able to rattle off a list of the many traits they desire or avoid in an ideal candidate, but would have no idea what the relative success or failure rate is of people who exhibit those traits. They therefore don't have any data to justify their beliefs," he says. AI and machine learning analysis, however, can provide hard data that either confirms or denies recruiters', hiring managers' or executives' beliefs about the types of hires they should be making, he says.

Growing adoption

Many of these technologies are already deployed in the market and selling to Fortune 100 companies, and their potential scope and applicability are only going to increase, says Alexander.

"There are some major contributing factors at play. Greater data capture by companies' ATS and HR information systems means greater capabilities for AI and machine learning tech to provide meaningful impact. More data accessibility through semantic and natural language programming tools have enabled us to break down and unlock the meaning and data within unstructured data like free form text, or data in messy data formats. This provides for richer data sets upon which to train algorithms. Cheaper compute power with scalable cloud computing and increased by use from the HR and recruiting community are also helping drive adoption," he says.

Whether by eliminating unconscious bias in general or attacking specific manifestations of bias in recruiting, screening and hiring talent, AI and machine learning have the potential to level the playing field for women and other underrepresented groups and provide a competitive advantage for companies, says Victoria Espinel, CEO of industry advocacy group BSA | The Software Alliance. These technologies can play a major role in opening up career and economic opportunities to as many people as possible, and specifically in compensating for biases in order to reduce or remove them, she says.

"I think there's enormous potential for these kinds of technologies, and it's really important to apply them in this way. Software can definitely play a role in mitigating any negative impact of our biases and making sure there's a wide range of perspectives and creative thinking that goes into designing products and solutions so they can be accessible to the widest range of people," Espinel says.
