
Academic Interference

Original illustration by Irene Chung ’24, an Illustration master’s student at RISD and an Art Director for BPR

As the release of OpenAI’s ChatGPT took the world by storm in late 2022, debates arose about the implications of work produced or assisted by artificial intelligence (AI)—particularly for students. ChatGPT is an AI chatbot that produces conversational text outputs in response to prompts provided by a user. It can be used for anything from answering questions about assignments to writing essays. Some fear that this form of AI will spell the doom of the quintessential take-home essay—the backbone of a student’s ability to reason critically and advance an argument. Although these fears are well-founded, attempting to catch AI-based cheating will prove futile. Instead, we should embrace AI as the next great labor-saving writing technology—like the printing press, typewriter, or Microsoft Word before it. 

There is one vital difference between AI and previous labor-saving technologies: Whereas previous advancements were used merely to transfer human thought to the page more efficiently, chatbots like ChatGPT are capable of producing original work themselves. They can find common threads across multiple texts, thus reducing the effort students need to put into their work. As the technology improves, so too will their ability to synthesize information. 

Indeed, AI is improving at a blistering pace. Although George Mason University economist Bryan Caplan reported that ChatGPT-3.5, which is currently available for free, achieved a D grade on his famously difficult economics midterm, the newly unveiled subscription-only ChatGPT-4 scored an A. With the advent of ChatGPT-4, students may be able to use AI to ace their classes without meaningfully understanding the course material. Given the technology’s rapid advancement—developers released ChatGPT-4 only four months after the launch of ChatGPT-3.5—we cannot reliably count on anti-cheating software to keep up in this proverbial arms race. Thus, in an age where much of students’ heavy lifting can be done by AI, questions arise: Will people even have to think anymore? Or will future generations turn to AI to complete any intellectual task that is remotely challenging?

While there is certainly a concern that any use of AI tools like ChatGPT will be harmful, there are plenty of practical, plagiarism-free ways that AI can be used as part of a student’s toolkit. ChatGPT can condense long academic papers into short summaries that can be read in a minute or two—I myself have used it to determine the most important points from class readings to address in my writing assignments. In that sense, ChatGPT can be an extension of text summaries like CliffsNotes, which has been a mainstay of students’ learning since the 1950s. Certainly, the experience of hungrily devouring Shakespeare does not compare to the rather banal process of scanning a summary, but synopses can provide a basic foundation of understanding for students exploring new literature. They are particularly effective for synthesizing research-based texts in which the author’s word choice and voice are far less important than the information presented. To write a great paper, students will still have to engage with the texts themselves, but AI summaries can give those who are unsure or confused direction. 

ChatGPT can thus be used as a kind of ‘universal SparkNotes,’ saving students an incredible amount of time when studying texts for which online summaries are not already available. While this, in my view, is the most positive aspect of AI for learning, the fact remains that students can use this technology to generate entire pieces of writing and submit them as their own—something that surely constitutes plagiarism and is harmful to students’ development.

But since ChatGPT is not going away, how can schools and colleges ensure that AI does not hinder students’ ability to develop critical analytical skills? One strategy might be to increase in-person assessments so as to limit the availability of AI. Short-form, in-class essays might seem as though they could fit the bill, but they have their own drawbacks: Although they force students to quickly and concisely organize their ideas, short essays do not allow for the same degree of high-level analysis as long-form papers. 

Furthermore, exclusively relying on in-class assignments and disregarding the potential of AI would stunt students’ ability to master the most advanced tools of our generation—tools that will become ever more important in the future, exponentially enhancing our ability to gather, synthesize, and produce knowledge. News articles are already being written by AI, and the technology shows potential for conducting legal and financial research. Thus, not allowing students to gain mastery over AI tools may actually harm their career prospects. As any student who was told “you’re not going to have a calculator in your pocket your whole life” knows, the limitations schools place on students’ academic tools are not always reflective of the realities they will face.

My solution, then, is to judge students not only on work written at home, but also on their ability to defend it through in-person verbal examinations or class presentations. A college student might utilize AI to aid in their research process. Then, in class, that student would face a hard line of questioning about their work from their instructor or classmates. I am not suggesting that students should be questioned on surface-level facts, as a simple read-through would allow them to easily present AI-produced work as their own. Nor am I suggesting this as a way of catching plagiarism—a goal that may not even be possible. Instead, I argue that in-class questioning forces students to develop a firm understanding of the ideas that go into their work and engage with new ideas presented by their instructors. Students should be challenged on both the reasoning underlying their work and the potential extensions of their arguments. This opportunity would also allow them to develop persuasive communication skills, which are not often practiced in classrooms today.

It is an imperfect solution, to be sure. It is possible for a student to create an entirely AI-produced research paper, read it over, and present its arguments without ever grappling with its deeper ideas. However, by setting an extremely high bar for an in-person defense of the work students submit and asking unexpected or thought-provoking questions regarding their work, schools can make it challenging for students who rely exclusively on AI to pass exams. Through this method of assessment, students will become strong oral rhetoricians who can both critically engage with ideas and adeptly employ the transformational tools of AI.
