Summary:
Companies are deploying AI and robotics in ways that disrupt traditional training techniques such as mentoring and on-the-job learning. We need to understand how to combine the old with the new.
It’s 6:30 in the morning, and Kristen is wheeling her prostate patient into the OR. She’s a senior resident, a surgeon in training. Today she’s hoping to do some of the procedure’s delicate, nerve-sparing dissection herself. The attending physician is by her side, and their four hands are mostly in the patient, with Kristen leading the way under his watchful guidance. The work goes smoothly, the attending backs away, and Kristen closes the patient by 8:15, with a junior resident looking over her shoulder. She lets him do the final line of sutures. She feels great: The patient’s going to be fine, and she’s a better surgeon than she was at 6:30.
Fast-forward six months. It’s 6:30 AM again, and Kristen is wheeling another patient into the OR, but this time for robotic prostate surgery. The attending leads the setup of a thousand-pound robot, attaching each of its four arms to the patient. Then he and Kristen take their places at a control console 15 feet away. Their backs are to the patient, and Kristen just watches as the attending remotely manipulates the robot’s arms, delicately retracting and dissecting tissue. Using the robot, he can do the entire procedure himself, and he largely does. He knows Kristen needs practice, but he also knows she’d be slower and would make more mistakes. So she’ll be lucky if she operates more than 15 minutes during the four-hour surgery. And she knows that if she slips up, he’ll tap a touch screen and resume control, very publicly banishing her to watch from the sidelines.
Surgery may be extreme work, but until recently surgeons in training learned their profession the same way most of us learned how to do our jobs: We watched an expert, got involved in the easier work first, and then progressed to harder, often riskier tasks under close supervision until we became experts ourselves. This process goes by lots of names: apprenticeship, mentorship, on-the-job learning (OJL). In surgery it's called "See one, do one, teach one."
Critical as it is, companies tend to take on-the-job learning for granted; it’s almost never formally funded or managed, and little of the estimated $366 billion companies spent globally on formal training in 2018 directly addressed it. Yet decades of research show that although employer-provided training is important, the lion’s share of the skills needed to reliably perform a specific job can be learned only by doing it. Most organizations depend heavily on OJL: A 2011 Accenture survey, the most recent of its kind and scale, revealed that only one in five workers had learned any new job skills through formal training in the previous five years.
Today OJL is under threat. The headlong introduction of sophisticated analytics, AI, and robotics into many aspects of work is fundamentally disrupting this time-honored and effective approach. Tens of thousands of people will lose or gain jobs every year as those technologies automate work, and hundreds of millions will have to learn new skills and ways of working. Yet broad evidence demonstrates that companies’ deployment of intelligent machines often blocks this critical learning pathway: My colleagues and I have found that it moves trainees away from learning opportunities and experts away from the action, and overloads both with a mandate to master old and new methods simultaneously.
How, then, will employees learn to work alongside these machines? Early indications come from observing learners engaged in norm-challenging practices that are pursued out of the limelight and tolerated for the results they produce. I call this widespread and informal process shadow learning.
OBSTACLES TO LEARNING
My discovery of shadow learning came from two years of watching surgeons and surgical residents at 18 top-rated teaching hospitals in the United States. I studied learning and training in two settings: traditional ("open") surgery and robotic surgery. I gathered data on the challenges robotic surgery presented to senior surgeons, residents, nurses, and scrub technicians (who prep patients, help glove and gown surgeons, pass instruments, and so on), focusing particularly on the few residents who found new, rule-breaking ways to learn. Although this research concentrated on surgery, my broader purpose was to identify learning and training dynamics that would show up in many kinds of work with intelligent machines.
To this end, I connected with a small but growing group of field researchers who are studying how people work with smart machines in settings such as internet start-ups, policing organizations, investment banking, and online education. Their work reveals dynamics like those I observed in surgical training. Drawing on their disparate lines of research, I’ve identified four widespread obstacles to acquiring needed skills. Those obstacles drive shadow learning.
1. TRAINEES ARE BEING MOVED AWAY FROM THEIR “LEARNING EDGE.”
Training people in any kind of work can incur costs and decrease quality, because novices move slowly and make mistakes. As organizations introduce intelligent machines, they often manage this by reducing trainees’ participation in the risky and complex portions of the work, as Kristen found. Thus trainees are being kept from situations in which they struggle near the boundaries of their capabilities and recover from mistakes with limited help—a requirement for learning new skills.
The same phenomenon can be seen in investment banking. New York University's Callen Anthony found that junior analysts in one firm were increasingly being separated from senior partners as those partners interpreted algorithm-assisted company valuations in M&As. The junior analysts were tasked with simply pulling raw reports from systems that scraped the web for financial data on companies of interest and passing them to the senior partners for analysis. The implicit rationale for this division of labor? First, reduce the risk that junior people would make mistakes in doing sophisticated work close to the customer; and second, maximize senior partners' efficiency: The less time they needed to explain the work to junior staffers, the more they could focus on their higher-level analysis. This provided some short-term gains in efficiency, but it moved junior analysts away from challenging, complex work, making it harder for them to learn the entire valuation process and diminishing the firm's future capability.
2. EXPERTS ARE BEING DISTANCED FROM THE WORK.
Sometimes intelligent machines get between trainees and the job, and other times they’re deployed in a way that prevents experts from doing important hands-on work. In robotic surgery, surgeons don’t see the patient’s body or the robot for most of the procedure, so they can’t directly assess and manage critical parts of it. For example, in traditional surgery, the surgeon would be acutely aware of how devices and instruments impinged on the patient’s body and would adjust accordingly; but in robotic surgery, if a robot’s arm hits a patient’s head or a scrub is about to swap a robotic instrument, the surgeon won’t know unless someone tells her. This has two learning implications: Surgeons can’t practice the skills needed to make holistic sense of the work on their own, and they must build new skills related to making sense of the work through others.
Benjamin Shestakofsky, now at the University of Pennsylvania, described a similar phenomenon at a pre-IPO start-up that used machine learning to match local laborers with jobs and that provided a platform for laborers and those hiring them to negotiate terms. At first the algorithms weren’t making good matches, so managers in San Francisco hired people in the Philippines to manually create each match. And when laborers had difficulty with the platform—for instance, in using it to issue price quotes to those hiring, or to structure payments—the start-up managers outsourced the needed support to yet another distributed group of employees, in Las Vegas. Given their limited resources, the managers threw bodies at these problems to buy time while they sought the money and additional engineers needed to perfect the product. Delegation allowed the managers and engineers to focus on business development and writing code, but it deprived them of critical learning opportunities: It separated them from direct, regular input from customers—the laborers and the hiring contractors—about the problems they were experiencing and the features they wanted.
3. LEARNERS ARE EXPECTED TO MASTER BOTH OLD AND NEW METHODS.
Robotic surgery comprises a radically new set of techniques and technologies for accomplishing the same ends that traditional surgery seeks to achieve. Promising greater precision and ergonomics, it was simply added to the curriculum, and residents were expected to learn robotic as well as open approaches. But the curriculum didn’t include enough time to learn both thoroughly, which often led to a worst-case outcome: The residents mastered neither. I call this problem methodological overload.
Shreeharsh Kelkar, at UC Berkeley, found that something similar happened to many professors who were using a new technology platform called edX to develop massive open online courses (MOOCs). EdX provided them with a suite of course-design tools and instructional advice based on fine-grained algorithmic analysis of students’ interaction with the platform (clicks, posts, pauses in video replay, and so on). Those who wanted to develop and improve online courses had to learn a host of new skills—how to navigate the edX user interface, interpret analytics on learner behavior, compose and manage the course’s project team, and more—while keeping “old school” skills sharp for teaching their traditional classes. Dealing with this tension was difficult for everyone, especially because the approaches were in constant flux: New tools, metrics, and expectations arrived almost daily, and instructors had to quickly assess and master them. The only people who handled both old and new methods well were those who were already technically sophisticated and had significant organizational resources.
4. STANDARD LEARNING METHODS ARE PRESUMED TO BE EFFECTIVE.
Decades of research and tradition hold trainees in medicine to the "See one, do one, teach one" method, but as we've seen, it doesn't adapt well to robotic surgery. Nonetheless, pressure to rely on approved learning methods is so strong that deviation is rare: Surgical-training research, standard routines, policy, and senior surgeons all continue to emphasize traditional approaches to learning, even though the method clearly needs updating for robotic surgery.
Sarah Brayne, at the University of Texas, found a similar mismatch between learning methods and needs among police chiefs and officers in Los Angeles as they tried to apply traditional policing approaches to beat assignments generated by an algorithm. Although the efficacy of such “predictive policing” is unclear, and its ethics are controversial, dozens of police forces are becoming deeply reliant on it. The LAPD’s PredPol system breaks the city up into 500-foot squares, or “boxes,” assigns a crime probability to each one, and directs officers to those boxes accordingly. Brayne found that it wasn’t always obvious to the officers—or to the police chiefs—when and how the former should follow their AI-driven assignments. In policing, the traditional and respected model for acquiring new techniques has been to combine a little formal instruction with lots of old-fashioned learning on the beat. Many chiefs therefore presumed that officers would mostly learn how to incorporate crime forecasts on the job. This dependence on traditional OJL contributed to confusion and resistance to the tool and its guidance. Chiefs didn’t want to tell officers what to do once “in the box,” because they wanted them to rely on their experiential knowledge and discretion. Nor did they want to irritate the officers by overtly reducing their autonomy and coming across as micromanagers. But by relying on the traditional OJL approach, they inadvertently sabotaged learning: Many officers never understood how to use PredPol or its potential benefits, so they wholly dismissed it—yet they were still held accountable for following its assignments. This wasted time, decreased trust, and led to miscommunication and faulty data entry—all of which undermined their policing.
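To make the grid mechanics concrete, here is a minimal sketch of a PredPol-style assignment loop. The 500-foot boxes come from the description above; the toy scoring model, the decay weighting, the function names, and the sample data are illustrative assumptions, not PredPol's actual algorithm.

```python
# Illustrative sketch of a PredPol-style patrol assignment; NOT the real system.
# The 500-foot grid comes from the article; the scoring model is a stand-in.
from collections import defaultdict

BOX_FEET = 500  # the article's "boxes": 500-foot squares

def box_for(x_feet, y_feet):
    """Map a location (in feet on a city-aligned grid) to its box coordinates."""
    return (int(x_feet // BOX_FEET), int(y_feet // BOX_FEET))

def score_boxes(recent_crimes, decay=0.9):
    """Give each box a crude crime score: recent incidents count more,
    older ones are weighted down. Real models are far more elaborate."""
    scores = defaultdict(float)
    for days_ago, x, y in recent_crimes:
        scores[box_for(x, y)] += decay ** days_ago
    return scores

def assign_patrols(recent_crimes, n_officers):
    """Direct officers to the n highest-scoring boxes."""
    ranked = sorted(score_boxes(recent_crimes).items(),
                    key=lambda kv: kv[1], reverse=True)
    return [box for box, _ in ranked[:n_officers]]

# Hypothetical incident log: (days_ago, x_feet, y_feet) per incident.
incidents = [(0, 1200, 800), (1, 1250, 760), (3, 5200, 4100), (7, 1190, 820)]
print(assign_patrols(incidents, n_officers=2))  # -> [(2, 1), (10, 8)]
```

Even this toy version makes the learning problem visible: the system outputs a box and a probability, but nothing about what an officer should do once inside it.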
SHADOW LEARNING RESPONSES
Faced with such barriers, shadow learners are bending or breaking the rules out of view to get the instruction and experience they need. We shouldn’t be surprised. Close to a hundred years ago, the sociologist Robert Merton showed that when legitimate means are no longer effective for achieving a valued goal, deviance results. Expertise—perhaps the ultimate occupational goal—is no exception: Given the barriers I’ve described, we should expect people to find deviant ways to learn key skills. Their approaches are often ingenious and effective, but they can take a personal and an organizational toll: Shadow learners may be punished (for example, by losing practice opportunities and status) or cause waste and even harm. Still, people repeatedly take those risks, because their learning methods work well where approved means fail. It’s almost always a bad idea to uncritically copy these deviant practices, but organizations do need to learn from them.
Following are the shadow learning practices that I and others have observed:
SEEKING STRUGGLE. Recall that robotic surgical trainees often have little time on task. Shadow learners get around this by looking for opportunities to operate near the edge of their capability and with limited supervision. They know they must struggle to learn, and that many attending physicians are unlikely to let them. The subset of residents I studied who did become expert found ways to get the time on the robots they needed. One strategy was to seek collaboration with attendings who weren’t themselves seasoned experts. Residents in urology—the specialty having by far the most experience with robots—would rotate into departments whose attendings were less proficient in robotic surgery, allowing the residents to leverage the halo effect of their elite (if limited) training. The attendings were less able to detect quality deviations in their robotic surgical work and knew that the urology residents were being trained by true experts in the practice; thus they were more inclined to let the residents operate, and even to ask for their advice. But few would argue that this is an optimal learning approach.
What about those junior analysts who were cut out of complex valuations? The junior and senior members of one group engaged in shadow learning by disregarding the company’s emerging standard practice and working together. Junior analysts continued to pull raw reports to produce the needed input, but they worked alongside senior partners on the analysis that followed.
In some ways this sounds like a risky business move. Indeed, it slowed down the process, and because it required the junior analysts to handle a wider range of valuation methods and calculations at a breakneck pace, it introduced mistakes that were difficult to catch. But the junior analysts developed a deeper knowledge of the multiple companies and other stakeholders involved in an M&A and of the relevant industry and learned how to manage the entire valuation process. Rather than function as a cog in a system they didn’t understand, they engaged in work that positioned them to take on more-senior roles. Another benefit was the discovery that, far from being interchangeable, the software packages they’d been using to create inputs for analysis sometimes produced valuations of a given company that were billions of dollars apart. Had the analysts remained siloed, that might never have come to light.
TAPPING FRONTLINE KNOW-HOW. As discussed, robotic surgeons are isolated from the patient and so lack a holistic sense of the work, making it harder for residents to gain the skills they need. To understand the bigger picture, residents sometimes turn to scrub techs, who see the procedure in its totality: the patient’s entire body; the position and movement of the robot’s arms; the activities of the anesthesiologist, the nurse, and others around the patient; and all the instruments and supplies from start to finish. The best scrubs have paid careful attention during thousands of procedures. When residents shift from the console to the bedside, therefore, some bypass the attending and go straight to these “superscrubs” with technical questions, such as whether the intra-abdominal pressure is unusual, or when to clear the field of fluid or of smoke from cauterization. They do this despite norms and often unbeknownst to the attending.
And what about the start-up managers who were outsourcing jobs to workers in the Philippines and Las Vegas? They were expected to remain laser-focused on raising capital and hiring engineers. But a few spent time with the frontline contract workers to learn how and why they made the matches they did. This led to insights that helped the company refine its processes for acquiring and cleaning data—an essential step in creating a stable platform. Similarly, some attentive managers spent time with the customer service reps in Las Vegas as they helped workers contend with the system. These "ride-alongs" led the managers to divert some resources to improving the user interface, helping to sustain the start-up as it continued to acquire new users and recruit engineers who could build the robust machine learning systems it needed to succeed.
REDESIGNING ROLES. The new work methods we create to deploy intelligent machines are driving a variety of shadow learning tactics that restructure work or alter how performance is measured and rewarded. A surgical resident may decide early on that she isn't going to do robotic surgery as a senior physician and will therefore consciously minimize her robotic rotation. Some nurses I studied prefer the technical troubleshooting involved in robotic assignments, so they surreptitiously avoid open surgical work. Nurses who staff surgical procedures notice emerging preferences and skills and work around blanket staffing policies to accommodate them. People tacitly recognize and develop new roles that are better aligned with the work—whether or not the organization formally does so.
Consider how some police chiefs reframed expectations for beat cops who were having trouble integrating predictive analytics into their work. Brayne found that many officers assigned to patrol PredPol’s “boxes” appeared to be less productive on traditional measures such as number of arrests, citations, and FIs (field interview cards—records made by officers of their contacts with citizens, typically people who seem suspicious). FIs are particularly important in AI-assisted policing, because they provide crucial input data for predictive systems even when no arrests result. When cops went where the system directed them, they often made no arrests, wrote no tickets, and created no FIs.
Recognizing that these traditional measures discouraged beat cops from following PredPol’s recommendations, a few chiefs sidestepped standard practice and publicly and privately praised officers not for making arrests and delivering citations but for learning to work with the algorithmic assignments. As one captain said, “Good, fine, but we are telling you where the probability of a crime is at, so sit there, and if you come in with a zero [no crimes], that is a success.” These chiefs were taking a risk by encouraging what many saw as bad policing, but in doing so they were helping to move the law enforcement culture toward a future in which the police will increasingly collaborate with intelligent machines, whether or not PredPol remains in the tool kit.
CURATING SOLUTIONS. Trainees in robotic surgery occasionally took time away from their formal responsibilities to create, annotate, and share play-by-play recordings of expert procedures. In addition to providing a resource for themselves and others, making the recordings helped them learn, because they had to classify phases of the work, techniques, types of failures, and responses to surprises.
Faculty members who were struggling to build online courses while maintaining their old-school skills used similar techniques to master the new technology. EdX provided tools, templates, and training materials to make things easier for instructors, but that wasn’t enough. Especially in the beginning, far-flung instructors in resource-strapped institutions took time to experiment with the platform, make notes and videos on their failures and successes, and share them informally with one another online. Establishing these connections was hard, especially when the instructors’ institutions were ambivalent about putting content and pedagogy online in the first place.
Shadow learning of a different type occurred among the original users of edX—well-funded, well-supported professors at topflight institutions who had provided early input during the development of the platform. To get the support and resources they needed from edX, they surreptitiously shared techniques for pitching desired changes in the platform, securing funding and staff support, and so on.
LEARNING FROM SHADOW LEARNERS
Obviously shadow learning is not the ideal solution to the problems it addresses. No one should have to risk getting fired just to master a job. But these practices are hard-won, tested paths in a world where acquiring expertise is becoming more difficult and more important.
The four classes of behavior shadow learners exhibit—seeking struggle, tapping frontline know-how, redesigning roles, and curating solutions—suggest corresponding tactical responses. To take advantage of the lessons shadow learners offer, technologists, managers, experts, and workers themselves should:
Ensure that learners get opportunities to struggle near the edge of their capacity in real (not simulated) work so that they can make and recover from mistakes
Foster clear channels through which the best frontline workers can serve as instructors and coaches
Restructure roles and incentives to help learners master new ways of working with intelligent machines
Build searchable, annotated, crowdsourced "skill repositories" containing tools and expert guidance that learners can tap and contribute to as needed (a minimal sketch follows this list)
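To suggest what such a repository might look like in miniature, the sketch below stores annotated skill entries and supports simple keyword search. The schema, class names, and example entry are illustrative assumptions, not a reference design.

```python
# Minimal sketch of a crowdsourced "skill repository": contributors add
# annotated entries; learners search by keyword. Schema is illustrative only.
from dataclasses import dataclass, field

@dataclass
class SkillEntry:
    title: str
    body: str           # the guidance itself: steps, pitfalls, links
    tags: list
    annotations: list = field(default_factory=list)  # crowd comments

class SkillRepository:
    def __init__(self):
        self.entries = []

    def contribute(self, entry):
        self.entries.append(entry)

    def annotate(self, title, note):
        for e in self.entries:
            if e.title == title:
                e.annotations.append(note)

    def search(self, keyword):
        k = keyword.lower()
        return [e for e in self.entries
                if k in e.title.lower() or k in e.body.lower()
                or any(k in t.lower() for t in e.tags)]

repo = SkillRepository()
repo.contribute(SkillEntry(
    title="Docking the robot's fourth arm",
    body="Check clearance from the patient's head before docking.",
    tags=["robotic surgery", "setup"]))
repo.annotate("Docking the robot's fourth arm",
              "Scrub tip: re-check clearance after table repositioning.")
print([e.title for e in repo.search("docking")])
```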
The specific approach to these activities depends on organizational structure, culture, resources, technological options, existing skills, and, of course, the nature of the work itself. No single best practice will apply in all circumstances. But a large body of managerial literature explores each of these, and outside consulting is readily available.
More broadly, my research, and that of my colleagues, suggests three organizational strategies that may help leverage shadow learning’s lessons:
1. KEEP STUDYING IT. Shadow learning is evolving rapidly as intelligent technologies become more capable. New forms will emerge over time, offering new lessons. A cautious approach is critical. Shadow learners often realize that their practices are deviant and that they could be penalized for pursuing them. (Imagine if a surgical resident made it known that he sought out the least-skilled attendings to work with.) And middle managers often turn a blind eye to these practices because of the results they produce—as long as the shadow learning isn’t openly acknowledged. Thus learners and their managers may be less than forthcoming when an observer, particularly a senior manager, declares that he wants to study how employees are breaking the rules to build skills. A good solution is to bring in a neutral third party who can ensure strict anonymity while comparing practices across diverse cases. My informants came to know and trust me, and they were aware that I was observing work in numerous work groups and facilities, so they felt confident that their identities would be protected. That proved essential in getting them to open up.
2. ADAPT THE SHADOW LEARNING PRACTICES YOU FIND TO DESIGN ORGANIZATIONS, WORK, AND TECHNOLOGY. Organizations have often handled intelligent machines in ways that make it easier for a single expert to take more control of the work, reducing dependence on trainees’ help. Robotic surgical systems allow senior surgeons to operate with less assistance, so they do. Investment banking systems allow senior partners to exclude junior analysts from complex valuations, so they do. All stakeholders should insist on organizational, technological, and work designs that improve productivity and enhance on-the-job learning. In the LAPD, for example, this would mean moving beyond changing incentives for beat cops to efforts such as redesigning the PredPol user interface, creating new roles to bridge police officers and software engineers, and establishing a cop-curated repository for annotated best practice use cases.
3. MAKE INTELLIGENT MACHINES PART OF THE SOLUTION. AI can be built to coach learners as they struggle, coach experts on their mentorship, and connect those two groups in smart ways. For example, when Juho Kim was a doctoral student at MIT, he built ToolScape and LectureScape, which allow for crowdsourced annotation of instructional videos and provide clarification and opportunities for practice where many prior users have paused to look for them. He called this learnersourcing. On the hardware side, augmented reality systems are beginning to bring expert instruction and annotation right into the flow of work. Existing applications use tablets or smart glasses to overlay instructions on work in real time. More-sophisticated intelligent systems are expected soon. Such systems might, for example, superimpose a recording of the best welder in the factory on an apprentice welder's visual field to show how the job is done, record the apprentice's attempt to match it, and connect the apprentice to the welder as needed. The growing community of engineers in these domains has mostly been focused on formal training, but the deeper crisis is in on-the-job learning. We need to redirect our efforts there.
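As a rough illustration of the learnersourcing idea behind tools like ToolScape and LectureScape, the sketch below buckets the moments at which many prior viewers paused a lecture video and flags heavily paused stretches as likely points of confusion. The function name, bucket size, and threshold are assumptions for illustration, not Kim's actual implementation.

```python
# Illustrative learnersourcing sketch: find moments in an instructional video
# where many prior viewers paused, a crude proxy for "people get stuck here."
# Names, bucket size, and threshold are assumptions, not ToolScape's code.

def confusion_points(pause_times, bucket_secs=5, min_pauses=10):
    """Bucket pause timestamps (in seconds) and return the start times of
    buckets where at least min_pauses viewers paused."""
    counts = {}
    for t in pause_times:
        bucket = int(t // bucket_secs) * bucket_secs
        counts[bucket] = counts.get(bucket, 0) + 1
    return sorted(b for b, n in counts.items() if n >= min_pauses)

# Hypothetical pause log aggregated across many viewers of one lecture.
pauses = [12.1, 13.4, 14.0, 14.9, 12.8, 13.1, 14.2, 12.5, 13.9, 14.6,
          13.3, 200.4, 201.1, 95.0]
# Eleven pauses land in the 10-15 second bucket, so it is flagged.
print(confusion_points(pauses))  # -> [10]
```

A system built on this signal could then attach clarifications or practice exercises exactly where learners stall, rather than where instructors guess they will.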
For thousands of years, advances in technology have driven the redesign of work processes, and apprentices have learned necessary new skills from mentors. But as we've seen, intelligent machines now motivate us to peel apprentices away from masters, and masters from the work itself, all in the name of productivity. Organizations often unwittingly choose productivity over considered human involvement, and learning on the job is getting harder as a result. Shadow learners are nevertheless finding risky, rule-breaking ways to learn. Organizations that hope to compete in a world filling with increasingly intelligent machines should pay close attention to these "deviants." Their actions provide insight into how the best work will be done in the future, when experts, apprentices, and intelligent machines work, and learn, together.
IN BRIEF
THE PROBLEM: The rush of intelligent machines and sophisticated analytics into many aspects of work means that trainees are losing opportunities to acquire skills through on-the-job learning (OJL).
THE OUTCOME: In medicine, policing, and other fields, people are finding rule-breaking ways to acquire needed expertise out of the limelight. This "shadow learning" is tolerated for the results it produces, but it can exact a personal and an organizational toll.
THE SOLUTION: Organizations should carefully uncover and study shadow learning; adapt the practices they find to organizational, technological, and work designs that enhance OJL; and make intelligent machines part of the solution.
Copyright 2019 Harvard Business School Publishing Corp. Distributed by The New York Times Syndicate.