Prompt and Circumstance

illustration by Jiwon Lim ’26, an Illustration major at RISD and Illustrator for BPR

Emily Dickinson has lost her most beloved friend. Penned in alternating lengths, angles, and fervor, the em-dash was once a cure-all for the emphatic writer. It offered a moment of respite, prompting reflection on the text. Or — it represented a breath of hesitation. I elegize its sudden death in 2024, when this treasured punctuation mark was paraded across the internet as the most effective way to identify AI. 

There are endless Reddit threads, Medium blogs, and LinkedIn posts coaching internet users on how to “not sound like AI.” Eager to capitalize on this internet witch hunt, companies like Grammarly and Quillbot have launched “AI Humanizer” tools promising to make AI-generated content sound less robotic. The crusade goes beyond the infamous em-dash: Servile positivity is dead, corporate verbiage like “delve” is dead, and triplets (oops!) are so, so dead.

Few groups scorn AI writing as much as academic cultural elites. A study at the University of California, Berkeley, found that, since 2022, the arts and humanities were the only university disciplines whose syllabi had not eased restrictive policies against AI use. Across all departments, 79 percent of syllabi still ban the use of AI for drafting and revising essays.

Professors make laughably few efforts to conceal this derision. Benjamin Parker, Associate Professor of English at Brown University, put it quite concisely in his Spring 2026 ENGL0500T syllabus: “Assignments completed with the help of AI will be irrelevant, thoughtless, useless, and non-responsive. If the answers produced by AI look good to you, you have missed the point of everything.” 

Parker’s words, while biting and admittedly motivational, overlook a key irony in academia’s critique of AI-generated writing: The largest portion of Large Language Model (LLM) training data is pulled from publicly accessible text, including academic repositories like arXiv, PubMed, BookCorpus, and Project Gutenberg. ChatGPT’s tell-tale writing style — and corresponding content — is not its own. It is trained on decades of writing by academics across disciplines, from anthropology to economics, who relied on specific grammatical elements and a detached, impersonal voice as a lingua franca to convey their taste-maker status. The very writing style these professors now scorn is the language of their own manuscripts.

The democratization of academic language poses a serious challenge for the cultural elite. If their expertise can be replicated by brutish technology, then all of those agonizing years as a broke PhD student were for naught!

Historical precedent points to a very simple solution: Lean into human irreplicability. By the late 19th century, photography had made flawless portraits of the family unit ubiquitous. For all of their years of schooling and apprenticeship, classically trained painters could not compete with the photorealism or speed of a camera. A coterie of young painters realized that if their profession was to survive, they had to recommit to tactile human expression. You may know them today as the Impressionists.

Monet’s famous red sunrise, which dapples across calm waters, captures a fleeting reflection of light that eluded the era’s long-exposure cameras. Impasto — thick, visible, and messy brush strokes — drew attention to the laborious and passionate process of creation, in contrast to the camera’s simple click. Viewers could infer the pacing of the artist’s paint strokes from the final product, speeding up with excitement and slowing down to introspect. Paintings became a reflection of the artist’s personal feelings toward the subject matter. 

There is no definitive evidence that all academic writing will face a similar shift away from bloated, impersonal syntax. But if the direct reader address and casual language in Parker’s syllabus are emblematic of anything, it is that academics, too, are bristling against tradition. They must reinvent themselves as the Impressionists did. In the face of AI’s flatness, writing must become varied and personal.

Still, we cannot skirt the influence of LLMs forever. Compared to previous technologies, the accessibility of LLMs enables their influence to disseminate astoundingly quickly. A recent Max Planck Institute for Human Development study found a “seep-in” effect of AI language on everyday speech, where even conversational, non-scripted podcasts have shown a statistically significant uptick in common AI vocabulary.

The causes go beyond passive consumption and regurgitation of AI writing. Just as elementary schoolers are trained to fill in the blank with the “correct” word a sentence is reaching toward, laptops, imbued with increasingly advanced predictive text, gradually program our brains to write emails in the style of Gemini. When predictive text was in its nascent stages, researchers at Harvard discovered that writers whose phones had the predictive keyboard feature used significantly fewer “unpredicted” words. They speculated that texters would actually substitute the predicted text suggestions for their own word choice.

The 2020 landscape pales in comparison to the richness of LLM-generated predictive text now. Gmail’s “Smart Compose” feature is underlaid with Gemini to ghost-write the ends of sentences. When I type “Thanks for…,” Gemini finishes it up with “…taking the time to meet with me today. It was great catching up!” 

When Gemini autocompletes a sentence, the brain stops searching for alternative endings — a tendency scholars call cognitive anchoring. Similar to what the 2020 Harvard study found, internal vocabulary dwindles to match that of Gemini: Hitting “tab” is far simpler than thinking for yourself. 

Whether copied and pasted directly from ChatGPT or through subconscious adoption, AI-generated writing now saturates most corners of the internet, including the process of job-seeking. A 2025 survey found that two-thirds of job candidates use AI for resume and cover letter writing, interview practice, or career guidance. 

Levin Brinkman, a co-author of the Max Planck study, explained that it is natural for humans to “copy what someone else is doing if we perceive them as being knowledgeable or important.”

So what of those who dispute AI’s intellect? Professors certainly resist AI-seep, but what about their equally well-educated (if not a bit more financially-minded) peers from the good old college days? 

The economic elite — the bankers, product managers, and lawyers of the world — have also not been quick to accept the normalization of AI use amongst job applicants. Well-written cover letters are no longer an opportunity to demonstrate interest in the employer and signal quality of education. Twenty-five percent of recruiters find AI use in cover letter writing “unacceptable.” Another survey reported that 20 percent would reject candidates with AI-generated cover letters altogether. Because widespread AI use means employers can no longer make easy inferences about a candidate’s education from the quality of their prose, the traditional cover letter has been rendered useless.

The em-dash may be dead. The polished cover letter as a marker of professional competence may be dead. But if humans can be credited for anything, it is our uncanny ability to make an in-group out of anything: to clutch onto shibboleths for dear life. The ability to write well has always been — as it will always be — a status symbol. AI-generated writing promises to overturn hierarchies, but techno-optimists overestimate just how far taste-makers will go to hang on to their titles. What has fallen may yet rise again. Elite habitus is bound to get a whole lot more personal.