Will AI Reduce Gender Bias in Hiring?

By Tomas Chamorro-Premuzic
September 3, 2020

Artificial intelligence is disrupting every area of life, including the way organizations find talent. Companies are generally aware of the return on investment that comes from finding the right person for the right job. McKinsey estimated that, for highly complex roles, star employees can be expected to produce 800 percent more than average performers. And a recent Harvard Business School study showed that there are even bigger benefits to avoiding toxic workers.


But organizations are often unable to attract the right talent, as they rely on intuitive rather than data-driven identification practices. Indeed, too many leaders are hired on the basis of their technical expertise, political influence or interview performance. As I illustrate in my latest book, “Why Do So Many Incompetent Men Become Leaders? (And How to Fix It),” most companies focus on the wrong traits, hiring based on confidence rather than competence, charisma rather than humility and narcissistic tendencies rather than integrity — which explains the surplus of incompetent and male leaders. The result is a pathological disconnection between the qualities that seduce us in a leader and those that are needed to be an effective one.


An interesting question is to what degree new technologies could help us reduce error, noise and bias in our talent identification processes. Would women be better off if AI and algorithms were in charge of hiring? Previous research has highlighted a clear inconsistency around gender and leadership. On the one hand, women are often evaluated more negatively by others, even when there are few granular behavioral differences between women and men. On the other, large-scale meta-analyses suggest that women have a slight advantage when it comes to the soft skills that predispose individuals to be more effective leaders, and that they generally adopt more effective leadership styles than men do. For instance, if leaders were selected on the basis of their emotional intelligence, self-awareness, humility, integrity and coachability, the majority of leaders would be female rather than male.


And yet recent news stories suggest that AI may actually contribute to even more bias and adverse impact against women: when algorithms are trained to emulate human recruiters, they may not just reproduce human biases but exacerbate them, engaging in a much more efficient form of discrimination.


Of course, if AI is trained with biased data — for instance, if we teach it to predict which candidates will be rated positively by human interviewers — it will not just emulate, but also exacerbate, human bias: augmenting it and making it far more efficient. This can be addressed by teaching AI to predict relevant and objective outcomes, rather than mimic human intuition.
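The point above comes down to which label a hiring model is trained to predict. A minimal sketch, with entirely hypothetical records and field names: the same candidate data can be paired with either an interviewer's rating (which bakes human bias into the model) or an objective performance outcome.

```python
# Minimal sketch: the choice of training target decides whether a model
# learns merit or inherited bias. All records and field names are
# hypothetical illustrations, not a real hiring dataset.

def make_training_data(records, target):
    """Build (features, label) pairs for a given target field, keeping
    both candidate labels out of the feature set."""
    pairs = []
    for r in records:
        features = {k: v for k, v in r.items()
                    if k not in ("interviewer_rating", "sales_per_quarter")}
        pairs.append((features, r[target]))
    return pairs

candidates = [
    {"skills_test": 88, "tenure_years": 4,
     "interviewer_rating": 3, "sales_per_quarter": 120},
    {"skills_test": 72, "tenure_years": 2,
     "interviewer_rating": 5, "sales_per_quarter": 90},
]

# Training on the human rating teaches the model to mimic the interviewers,
# biases included...
biased = make_training_data(candidates, target="interviewer_rating")

# ...while training on an objective outcome targets actual contribution.
objective = make_training_data(candidates, target="sales_per_quarter")
```

Everything downstream (the model, the features) can stay the same; only the label changes, which is why the fix is cheap relative to its impact.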


In addition, there are reasons to expect AI-talent tools to be more accurate and predictive than humans (and not just because humans are generally bad at this). Our favorite method for screening and vetting candidates — including leaders — is the interview, and large-scale scientific studies have shown that interviews are most predictive when they are highly structured. Whereas in-person, analog interviews are hard to standardize, video interviews allow us to put people through exactly the same experience, capture millions of data points on their behaviors (e.g., what they say, how they say it, language use, body language and microexpressions) and remove prejudiced human observers from the process. It is safe to assume that automating all unstructured, human-rated interviews would reduce bias and nepotism while increasing meritocracy and predictive accuracy. This should be good for women (and bad for men).


One of the big advantages of AI is that, aside from being better at spotting things (i.e., millions of data points), it is also superior at ignoring things. Imagine an ethical, well-meaning and open-minded human who has every intention of being fair in his hiring practices and is therefore determined to avoid gender bias in his — let’s assume he is male — hiring process. Regardless of how hard he tries, it will be very hard for him to ignore candidates’ gender. Imagine him sitting in front of a female candidate, repeating to himself: “I must not think about the fact that this person is a woman,” or “I must not let this person’s gender interfere with my evaluation.” In fact, the more he tries to suppress this thought, the more prominent it will be in his mind. The effort will also lead to distraction or overcompensation. In contrast, AI can be trained to ignore people’s gender and focus only on the relevant signals of talent or potential. For example, algorithms can be trained to pick up relevant signals of emotional intelligence, competence or communication skills, while being truly blind to gender. This would definitely favor women.
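Unlike the human interviewer above, software can be made structurally unable to see gender: the protected fields are stripped before any scoring happens. A minimal sketch with hypothetical field names and weights (note that in practice, removing the field itself is necessary but not sufficient, since other features can correlate with gender):

```python
# Minimal sketch of "blinding" a scoring model: protected attributes are
# removed from the record before it reaches the scoring step. Field names
# and weights are hypothetical.

PROTECTED = {"gender", "name", "age"}

def blind(candidate):
    """Return a copy of the candidate record with protected fields removed."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED}

def score(candidate, weights):
    """Weighted sum over whatever signals remain after blinding."""
    c = blind(candidate)
    return sum(weights.get(k, 0) * v for k, v in c.items())

weights = {"emotional_intelligence": 0.5, "competence": 0.3, "communication": 0.2}
applicant = {"name": "A. Candidate", "gender": "F",
             "emotional_intelligence": 8, "competence": 7, "communication": 9}

print(score(applicant, weights))  # gender never enters the computation
```

The scoring function cannot overcompensate or get distracted, because the suppressed attribute is simply absent from its input.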


The critical factor in order for this to work is that organizations identify real performance data to train the algorithms. If AI is taught to predict or anticipate human preferences — like whether a candidate will be liked by their (human) boss once they are hired — we can expect bias to remain … and be augmented. However, if AI is trained to identify the actual drivers of performance — defined broadly as an individual’s contribution to the organization — then we can expect a much fairer, more accurate and replicable assessment of people’s potential. This, again, should be good for women.


For those who are interested not just in helping women to be more represented in the leadership ranks, but also in improving the quality of our leaders, there are clearly reasons to be hopeful about AI. However, many of the emerging innovations in this brave new world of technologically enhanced and data-driven talent identification are still a work in progress, and we need to ensure that they are not only accurate, but also ethical and legal alternatives to existing methods. Above all, it is time to admit that most of the practices currently in place are far from effective, and that they have contributed to much of the unfairness and nepotism that governs the average workplace. So here’s to finding the necessary self-awareness to begin to improve.


Copyright 2019 Harvard Business School Publishing Corp. Distributed by The New York Times Syndicate.

 
