Summary:
Can new AI tools and algorithms be used to hire employees, manage processes, and automate other decisions? Harvard Business Review explores the idea.
The use of artificial intelligence and algorithms to manage business processes, hire employees and automate routine organizational decision-making is increasing.
But the reality is that, at least in some cases, humans display a strong aversion to the use of autonomous algorithms. If AI is to become an important management tool in our organizations, algorithms need to be seen as trusted advisers to human decision-makers.
We analyzed the responses of 136 participants in an online simulation. Participants were told they would be paired with another person to work on a task. They then received a judgment score indicating their partner’s trustworthiness, determined either by an algorithm or by the leader of the study after a 15-minute conversation with that person.
Our results suggest that people think of humans and algorithms as good at providing different types of information, including about whom to trust. Humans are seen as more intuitive, more socially adept and better at taking another person’s perspective. But algorithms can provide information about whom to trust when that information is less intuitive and more factual.
So although participants judged humans to possess the social skills needed to assess trustworthiness, they did not feel that an algorithm would provide less reliable trust information. When asked which assessment method they preferred, most participants opted for the AI’s judgment (61 percent) over the human’s (39 percent).
What are the implications of these findings for organizations?
First, many team projects are temporary, and trust needs to be built quickly. Our findings show that AI presents a reliable and legitimate assessment tool for providing this type of “cognitive” trustworthiness information. Second, social skills like perspective-taking, intuition and sensitivity are prerequisites for determining someone’s trustworthiness and are considered uniquely human. Our findings nevertheless indicate that when it comes to starting a work relationship with a colleague, algorithms seem to be accepted as equally reliable.
Third, supervisors will have to develop a sense of when it is effective to delegate assessments of the work climate to an algorithm. Finally, they will also have to learn how to communicate the trustworthiness information provided by AI to their teams in ways that will not be ignored.
(David De Cremer is a professor at NUS Business School, National University of Singapore. Jack McGuire is a lab manager and research assistant at Judge Business School, University of Cambridge. Yorck Hesselbarth is a doctoral student at ESCP Europe in Berlin. Ke Michael Mai is an assistant professor at NUS Business School, National University of Singapore.)
Copyright 2019 Harvard Business School Publishing Corp. Distributed by The New York Times Syndicate.