Summary:
What would happen if CEOs began writing employee communications using gen AI? Would anyone notice? Probably not, according to a new study. But that doesn’t mean CEOs should use it to write everything. Here’s what they can do to effectively and transparently write with gen AI.
Given how effective AI is at mimicking human writing, it should come as no surprise that CEOs are experimenting with using it to draft personal messages. Harvard Business School research from 2018 calculated that 24% of the average executive’s day is allocated to electronic communication. But what would happen if CEOs began drafting employee communications using gen AI?
A group of researchers studied this question last year at Zapier, an American software company with more than 700 employees. Harvard Business School professor Prithwiraj Choudhury and colleagues Bart Vanneste (University College London), Xi Kang (Vanderbilt), and Amirhossein Zohrehvand (Leiden University) wanted to determine whether employees could distinguish between messages written by AI and those written by Zapier CEO Wade Foster. Using a large language model developed by the company’s employees, Choudhury and his colleagues trained a chatbot on Foster’s Slack and email messages, public statements, and other communications. The “Wade Bot,” as the researchers began calling it, was specifically designed to write like Foster.
“AI and automation are having a major effect on how and where we work,” Choudhury says. “But when we think about the next frontier for generative AI, one of the possibilities is that personal bots will communicate, attend meetings, listen in, and speak on our behalf. We wanted to determine the extent to which this might be possible today.”
The researchers crowdsourced questions from 105 employees for Foster to answer. The CEO and the Wade Bot both produced responses, and employees then tried to tell them apart. They correctly identified the AI-written responses 59% of the time.
There was another surprising finding. When the employees were asked to assess whether the answers were useful, the ones they believed to be AI-generated were rated as less helpful, even when Foster actually wrote them.
The researchers then conducted a second study to determine whether the findings would be different if evaluators had no familiarity with the writer. They recruited 218 people in the United States and asked them to read what they were told were CEOs’ answers from company earnings calls. While some answers came from real earnings calls, others were generated by an AI tool that mimicked the CEOs’ styles. The study found that when people thought an answer was created by AI, they rated it as less helpful even if it came from a human. Conversely, when they thought a CEO had given an answer, they found it more helpful even if it was from AI.
In simple terms, people place more trust in and find more value in statements they believe come from a human rather than technology. This suggests that while AI can produce helpful information, people’s perceptions play a big role in how it is received.
Still, most leaders will use gen AI in some form or fashion. Fifty percent of U.S. CEOs say their companies have already automated content creation with it, according to a 2024 Deloitte survey. Seventy-five percent say they have personally used or are using the technology. If you decide to do the same, Choudhury recommends following three guidelines:
Be transparent.
If your employees discover that you’ve outsourced your communications without telling them, they may start to believe every message you send is drafted by a bot. Transparency is essential to building trust and allaying people’s aversion to gen AI. Clearly communicate the tool’s role and benefits. Tell employees what you think is acceptable use of it, what your guidance is for pulling text and data from its responses, and which data you use and avoid when prompting and training bots.
Your company should have rules for the extent to which employees are allowed, and even expected, to use AI. Follow the rules yourself, obviously, but also let people know how you’re drawing on the technology within these parameters. This will help encourage constructive use and answer questions others may have.

Use AI for impersonal messaging.
The technology is more effective for formal communications, such as shareholder letters or strategy memos, according to Choudhury’s research. Avoid it for personal communications, especially with people who know you well.
In the Zapier experiment, employees with longer tenures were more likely to spot AI-generated responses. Those who had been around three years or more, for example, had a higher accuracy rate in their guesses (62%) than newer workers did (58%). Choudhury believes their familiarity with Foster’s communication style limited the bot’s success in imitating him. But a CEO’s personal style is much less discernible in formal writing than it is in an email to a subordinate or a post on LinkedIn.
“The target use case today should really be communicating with strangers or automating the drab parts of writing,” Choudhury says. “You could use AI to answer questions about your pricing strategy or what you expect to happen to interest rates for the next year. But I wouldn’t use it to write an email to a board member about your last vacation.”

Triple-check your work.
Don’t rely on gen AI to autonomously produce answers. The technology is often wrong, and it tends to rely too heavily on jargon and buzzwords. These challenges present big risks for CEOs, whose messages can have a dramatic impact on employees, shareholders, and customers. Rather than simply copying and pasting AI’s responses into a document, review and fact-check every word—especially for important and sensitive messages. You should also ask an editor to review your technology-generated communications (or any communications, really) to ensure their meaning, tone, and personality align with what you intended.
“Never press send without reading and fact-checking the message,” Choudhury says. “Gen AI is a great tool that will save CEOs a lot of time, but I don’t think you can let it run completely on its own. Even if it answers only one question incorrectly, you will suffer huge unintended consequences.”
About the research: “The Wade Test: Generative AI and CEO Communication,” by Prithwiraj Choudhury et al. (working paper)
“When I Use Gen AI, I Must Stand by Everything in the Message”
Wade Foster is cofounder and CEO of Zapier, a fully remote global software company. He recently spoke with HBR about Professor Choudhury’s experiment and how a chatbot trained on his writing changed his creative process. Edited excerpts of the conversation follow.
How do you use generative AI for writing?
I use it to draft long emails and Slack messages. It’s also good at composing answers to FAQs and handling templated documents like press releases. People often email me asking for advice, and some of those questions I’ve answered many, many times. But a while back I uploaded a document with a bank of answers to our gen AI tool. When a repeat question comes in, I can use the tool to produce a response that’s genuinely mine. I don’t have to write it all over again or hunt down the data behind the answer. It’s just there for me.
How transparent are you with employees about how you use AI?
Everyone at Zapier knows I use gen AI to help me write, but I don’t disclose that I’ve used it every time I send a message. There are certain areas where we have hard guidelines and policies about AI. How we are allowed to use customer data, for example, is very strict. We actively enforce adherence to these guidelines, so any work with customer data must be done 100% transparently and with the utmost care. But outside of these scenarios, we simply expect our people to use good judgment. I don’t need to know that gen AI helped you write a PowerPoint deck, but I do need to know that you’ve read and fact-checked every word in that deck. The same goes for me: When I use gen AI, I decide whether to use what it produces, and I must stand by everything in the message.
How has gen AI changed your writing process?
My first drafts are way more polished than they used to be. I’ve never had ghostwriters or PR people writing for me; I would say that 90% of my communication has always come from me. The people who have edited my work or have given feedback still do, except now they know to look for the standard things we all must look for in writing assisted by gen AI, such as hallucinations and stilted, robotic language. They also catch stuff that AI wouldn’t because they understand Zapier-specific context, such as our culture, our strategy, and our customer base. Generic large language models don’t have that context, so they’re not going to be able to give me advice or feedback that requires that level of specificity.
Did the bot’s imitation of your writing impress you?
It was hit-or-miss. There were times when I read an answer and thought, That’s better than what I could have come up with! I never read an answer and thought, That’s not me! But there were times when I thought, I kind of disagree. The longer I thought about these answers, the better I could trace a response back to something I’d written in the past. But the bot was lacking the context and nuance of my past answers. Others might not have realized that it was off, but because it was imitating me, I noticed.
What other gen AI use cases would you like to try?
We’re discussing whether a chatbot version of me, like the one used in Professor Choudhury’s experiment, can serve as a proxy for me. So, if you’re an employee struggling with a particular situation and you want to know how I would feel about it, you can chat with the Wade Bot to see what advice it offers. There are ways in which a bot version of me would be a better sounding board for employees than I would. The bot is more patient, it’s never angry from a previous meeting, and it doesn’t judge you or your question. I try not to do these things, but I’m human. We would need to post caveats and disclaimers so that people don’t treat the Wade Bot’s responses as my directives, but I think it would create another outlet for help.
Copyright 2025 Harvard Business School Publishing Corporation. Distributed by The New York Times Syndicate.