Building Ethical AI for Talent Management


In the search for talent, artificial intelligence can play a very specific role: giving organizations more accurate and more efficient predictions of a candidate’s work-related behaviors and performance potential. Unlike traditional recruitment methods, AI is able to find patterns people can’t see.


To rapidly improve talent management and take full advantage of the power and potential AI offers, we need to shift our focus from developing more ethical human resources systems to developing more ethical AI. Removing bias from AI is not easy, but it’s far easier than removing it from humans themselves.

Organizations using AI for talent management, at any stage, should start by taking the following steps:

Educate candidates and obtain their consent: Ask prospective employees to opt in to providing their personal data to the company, informing them that it will be analyzed, stored and used by AI systems to make HR-related decisions. You should also preserve candidate anonymity to protect personal data and to comply with the General Data Protection Regulation (GDPR), California privacy laws and similar regulations.
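One common way to preserve anonymity in practice is pseudonymization: replacing direct identifiers with salted one-way hashes before candidate records reach an AI system. The sketch below is illustrative only, not a compliance recipe — real GDPR-grade handling also requires key management, retention limits and a documented lawful basis for processing. All names in it are hypothetical.

```python
import hashlib
import secrets

# One random salt per data store, held separately from the candidate data
# so that hashes cannot be trivially reversed by dictionary attack.
SALT = secrets.token_bytes(16)

def pseudonymize(candidate_id: str) -> str:
    """Replace a direct identifier (e.g., an email) with a salted hash."""
    return hashlib.sha256(SALT + candidate_id.encode("utf-8")).hexdigest()

# The stored record carries only the pseudonym, never the raw identifier.
record = {
    "candidate": pseudonymize("jane.doe@example.com"),
    "assessment_score": 0.82,
}
```

The same input always maps to the same pseudonym within one data store, so records can still be linked for analysis without exposing who the candidate is.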

Invest in systems that optimize for fairness and accuracy: A large body of academic research indicates that while cognitive ability tests are a consistent predictor of job performance, their deployment has an adverse impact on underrepresented groups, particularly individuals with lower socio-economic status. Companies interested in boosting diversity often have to de-emphasize traditional cognitive tests so that diverse candidates are not disadvantaged in the process. This is known as the fairness/accuracy trade-off. Fortunately, there is increasing evidence that AI can soften this trade-off by deploying more dynamic and personalized scoring algorithms that are as sensitive to fairness as to accuracy, optimizing for a mix of both. Further, because such systems now exist, we should question whether the widespread use of traditional cognitive assessments, which are known to have an adverse impact on minorities, should continue without some form of bias mitigation.
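One way to read "optimizing for a mix of both" is as a constrained search: among candidate scoring rules, keep only those whose selection rates pass a fairness check (here, the four-fifths rule used in U.S. adverse-impact analysis), then pick the most predictive of the survivors. The sketch below is a toy illustration on made-up synthetic data, not the article's method — every variable name, group label and number in it is an assumption.

```python
import random

random.seed(0)

# Hypothetical pool: each candidate has two predictor scores and a group.
# Predictor p1 (e.g., a cognitive test) is predictive but shifted by group;
# predictor p2 (e.g., a work sample) is less predictive but group-neutral.
candidates = []
for _ in range(200):
    group = random.choice(["A", "B"])
    p1 = random.gauss(0.6 if group == "A" else 0.4, 0.15)
    p2 = random.gauss(0.5, 0.15)
    performance = 0.7 * p1 + 0.3 * p2 + random.gauss(0, 0.1)
    candidates.append((group, p1, p2, performance))

def selection_rates(scored, top_n=50):
    """Per-group selection rates when the top_n scores are hired."""
    ranked = sorted(scored, key=lambda c: c[0], reverse=True)[:top_n]
    rates = {}
    for g in ("A", "B"):
        total = sum(1 for _, grp in scored if grp == g)
        picked = sum(1 for _, grp in ranked if grp == g)
        rates[g] = picked / total
    return rates

def evaluate(w, top_n=50):
    """Blend score = w*p1 + (1-w)*p2; return (validity, impact ratio)."""
    scored = [(w * p1 + (1 - w) * p2, g, perf)
              for g, p1, p2, perf in candidates]
    top = sorted(scored, reverse=True)[:top_n]
    validity = sum(perf for _, _, perf in top) / top_n
    rates = selection_rates([(s, g) for s, g, _ in scored], top_n)
    impact_ratio = min(rates.values()) / max(rates.values())
    return validity, impact_ratio

# Keep the most valid blend weight that still passes the four-fifths rule.
best = None
for w in [i / 20 for i in range(21)]:
    validity, ratio = evaluate(w)
    if ratio >= 0.8 and (best is None or validity > best[1]):
        best = (w, validity, ratio)
```

In this toy setup, weighting the group-shifted predictor heavily raises validity but drives the impact ratio down, and the search surfaces whichever blend best balances the two — a crude stand-in for the dynamic scoring the article describes.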

Develop open-source systems and third-party audits: Hold your company accountable by allowing others to audit the tools it uses to analyze job applications. One way to do that is by developing nonproprietary yet critical aspects of the AI technology the organization uses in open source. For proprietary components, third-party audits conducted by credible experts in the field are a tool companies can use to show the public how they are mitigating bias.

Follow the same laws — as well as data collection and usage practices — used in traditional hiring: Any data that shouldn't be collected or included in a traditional hiring process for legal or ethical reasons should not be used by AI systems. Private information about physical, mental or emotional conditions, genetic information and substance use or abuse should never be entered into these systems.
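A simple technical safeguard for this rule is to strip disallowed fields from every record before it reaches the AI system, rather than trusting each downstream model to ignore them. The field names below are hypothetical; the blocklist mirrors the categories the article says should never be entered.

```python
# Hypothetical field names; the blocklist mirrors the prohibited categories
# (health conditions, genetic information, substance use).
DISALLOWED_FIELDS = {
    "physical_condition", "mental_condition", "emotional_condition",
    "genetic_information", "substance_use",
}

def scrub_candidate_record(record: dict) -> dict:
    """Return a copy of the record with all disallowed fields removed."""
    return {k: v for k, v in record.items() if k not in DISALLOWED_FIELDS}

raw = {
    "name": "A. Candidate",
    "work_history": ["Acme Corp", "Globex"],
    "genetic_information": "must never reach the model",
}
clean = scrub_candidate_record(raw)
```

Running the scrub at a single ingestion point means the prohibited data is absent by construction, which is easier to audit than per-model exclusion logic.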

If organizations address these issues, we believe that ethical AI could vastly improve organizations, not only by reducing bias in hiring but also by enhancing meritocracy. Further, it will be good for the global economy. People from a wider range of socio-economic backgrounds will have more access to better jobs — which can help create balance and begin to remedy class divides.


Copyright 2019 Harvard Business School Publishing Corp. Distributed by The New York Times Syndicate

