The End of Recommendation Letters

Professors, like their students, use ChatGPT to get out of doing their assignments.

Illustration by Joanne Imperio / The Atlantic

Early spring greened outside the picture window in the faculty club. I was lunching with a group of fellow professors, and, as happens these days when we assemble, generative artificial intelligence was discussed. Are your students using it? What are you doing to prevent cheating? Heads were shaken in chagrin as iced teas were sipped for comfort.

But then, one of my colleagues wondered: Could he use AI to generate a reference letter for a student? Faculty write loads of these every year, in support of applications for internships, fellowships, industry jobs, graduate school, university posts. They all tend to be more or less the same, yet they also somehow take a lot of time, and saving some of it might be nice. Other, similar ideas spilled out quickly. Maybe ChatGPT could help with grant proposals. Or syllabi, even? The ideas seemed revelatory, but also scandalous.

Scandalous because we faculty, like all faculty everywhere, were drawn into an educators’ panic about AI over the winter. When ChatGPT began to spread around the internet last December, fears of its impact gripped our profession: The college essay is dead! It’s the end of high-school English! Students will let computers do their homework! Task forces were launched to investigate. Syllabi were updated with academic-integrity warnings. Op-eds were written. And now, in the faculty club, we professors were musing over how to automate our own assignments?

Large language models can be pretty bad at generating accurate facts and knowledge. But they’re pretty darn good at creating plausible renditions of the work output you don’t care that much about. It is here, where exhaustion meets nuisance, that AI brings students and faculty together.

Take reference letters. ChatGPT can’t explain why you would (or wouldn’t) recommend a specific individual for a specific role, but it can give you a detailed template. A University of Texas professor I spoke with uses AI as a starting point for both lecture content and reference-letter writing. “Quite generic,” the faculty member reported, “but then the average letter is … ?” I’m withholding the faculty member’s name because they fear reprisal. A shortcut like this can easily be seen as shirking work, but with so much work to do, maybe something has to give. For the Texas professor, ChatGPT seemed to cut the time spent writing letters in half.

“A dirty secret of academe is that most professors have a cache of letters separated into different categories,” says Matt Huculak, another AI-using academic and the head of advanced research services at the University of Victoria libraries. They’ll typically have folders full of excellent, good, and average ones, which can be adjusted and repurposed as appropriate. But Huculak wondered if AI might help break that chain, especially for top students. So he asked ChatGPT to write an “excellent” reference letter, and then, instead of using it as a template, he treated it as an enemy. He opened the ChatGPT output in one window and tried to compose the very opposite of what he saw: an anti-formulaic recommendation letter. “What I wrote ended up feeling like the most ‘human’ and heartfelt letter I’ve written in a long time,” he told me. The student won a prestigious scholarship at Cambridge.

Nothing was stopping Huculak from applying the same technique to one of his own formulaic letters, striving to produce its inverse. But having a machine “lay the genre bare,” as Huculak put it, somehow gave him the comfort to play around with the material. It also broke him of the terror of the blank page.

Stephanie Kane, who teaches at George Mason University, also told me that ChatGPT eases the difficulty of creating something out of nothing. When she began developing a syllabus for a new class, she asked ChatGPT to generate ideas, “kind of like a rubber duck that talks back.” Kane quickly discovered that ChatGPT can’t be trusted to suggest readings that actually exist, but it could suggest topics or concepts. Kane also asked colleagues on social media for ideas, as faculty tend to do, but doing so burdens those colleagues. “I think ChatGPT was better, honestly. It doesn’t judge, so I could ask any questions I want without being worried of sounding silly or unprepared,” she said.

Huculak and Kane hoped to overcome platitude, but Hank Blumenthal, a film producer who has worked in both industry and academia, looked to ChatGPT to gain more insight into cliché. Having been passed over for academic jobs in his area, Blumenthal wondered if his required position statement on diversity, equity, and inclusion might have been too unusual for hiring committees’ tastes. “My current diversity statement is about all the movies I produced where I hired Black, Asian, female, diverse crew, directors, actors,” he told me. “Still, I think schools want something else.” Given what ChatGPT can do, Blumenthal said, “I was looking to see what might be the expected discourse.”

Blumenthal doesn’t want ChatGPT to take a diversity position on his behalf. Rather, he hopes that it can help him conform to expectations. “I sought the differences between what I had done and the expected versions,” he told me. Likewise, an American University professor I spoke with copped to using AI to generate the formal “assessment criteria” that must now be part of course and degree proposals. “It did a great job at sounding like the sort of thing someone evaluating a course without knowing anything about the field would want to hear,” the professor said. The generated material was good enough to make it into the actual proposal. (I granted the professor anonymity so that the proposal would not be penalized for incorporating computer-generated text.)

A common lament about large language models holds that, having been trained on piles of existing material, they can’t provide originality. But a professor isn’t often charged with saying something truly new. Much of what we do all day is office work: writing letters, processing forms, compiling reports. AI can tame that labor, or at least offer a feeling of superiority over it.

That may be true for students too. They feel overwhelmed and overworked: stretched thin by professors who each have no idea what the others have demanded; suffocated by tuition costs; confused about their future prospects; and tested by the transition to adulthood. Students come to college first for the college experience, and second to learn and earn credentials. Their faculty may view class assignments as unalloyed goods that would be sullied by a chatbot’s intervention, while students see them as distractions from the work of making sense of who they are. In that respect, AI only helps to clear away annoying obstacles, so we all can move along to doing things that really matter.

Ian Bogost is a contributing writer at The Atlantic.