
“I am curious if you have tips on how to use ChatGPT best to support writing,” read a recent email from a graduate student client. He raised a question I had been thinking about for some time. AI writing tools are a class of technologies built on large language model (LLM) techniques, and include “chatbots” like ChatGPT. Once trained on a large enough data set, they can generate and organize text in myriad ways. (Do you want rap lyrics on data protection, written in the style of the G-Funk era?) What many initially took to be a parlor trick has become a source of wariness for academics. As always, the mainstreaming of new technologies has prompted moral reckonings. Noam Chomsky has weighed in, arguing that AI writing tools are unsophisticated and amoral. A recent New York Times article asked, “How to Use ChatGPT and Still Be a Good Person?” So how should academic authors, particularly graduate students and early career academics, respond? And how do AI writing tools change the profession of editing and writing coaching?
Recent statements by leading publishers show a consistent approach to AI writing tools. ACM, the leading organization behind conferences like CSCW and CHI, recently released guidelines for acceptable use. Such tools are permitted, as long as they don’t “plagiarize, misrepresent, or falsify content.” The “novel intellectual contributions” of the manuscript still need to be written by the author. These tools can’t be listed as authors, and the sections of text they process must be disclosed in the article itself. (ACM has always prohibited anonymous and ghost authorship practices, such as hiring someone else to write an article.) The top-ranked journal Nature (and its publisher, Springer Nature) similarly announced that it will enforce standards that prohibit AI tools from being authors and require disclosure of their use. The emerging consensus is that AI writing tools can be used to organize thoughts, but not to come up with them.
Getting help with organizing academic writing is nothing new. Editors have long been a mainstay of academia, and we editors turn to non-human helpers of our own. We use Word plugins to enforce phrase consistency and adherence to the latest Chicago Manual of Style. Custom macros help with common tasks, such as converting tracked changes to highlighted text. In fact, every manuscript I work on gets a pass with these tools, some “off the shelf” and some I’ve created myself. In each case, I am not coming up with ideas but shaping how they are presented. Even when providing developmental feedback, I suggest ways a conclusion might be written or new references to include; I never actually write the conclusion. That’s the author’s job. I might get a shout-out in a book’s acknowledgements section, but much like ChatGPT, I’ll never be listed as an author, since I don’t come up with the ideas or the larger argument a client is trying to convey. That’s simply how the rules of academic publishing have always worked.
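For readers curious what such non-human helpers look like under the hood, here is a minimal sketch of the phrase-consistency idea in Python. The phrase list and the script are my invention for illustration; my actual tools are Word plugins and macros, not this code.

```python
import re
from collections import Counter

# Hypothetical style-sheet entries: variant spellings an editor might
# want to reconcile. A real list would be much longer.
VARIANTS = {
    "data set": re.compile(r"\bdata[ -]?sets?\b", re.IGNORECASE),
    "health care": re.compile(r"\bhealth[ -]?care\b", re.IGNORECASE),
}

def phrase_report(text: str) -> dict[str, Counter]:
    """Count each surface form of a phrase so the editor can pick one."""
    report = {}
    for canonical, pattern in VARIANTS.items():
        forms = Counter(m.group(0).lower() for m in pattern.finditer(text))
        if len(forms) > 1:  # flag only phrases spelled inconsistently
            report[canonical] = forms
    return report

if __name__ == "__main__":
    sample = "The dataset was large. A second data set arrived later."
    for canonical, forms in phrase_report(sample).items():
        print(f"{canonical}: {dict(forms)}")
```

The point of the sketch is the division of labor: the tool surfaces inconsistencies, but a human still decides which form the manuscript should use.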
For what it’s worth, I’m not particularly worried about ChatGPT taking my job as an editor. Academic writing requires generating new ideas on fairly niche topics, which is precisely where algorithms fail. Academic authors ask editors like me to bring out the originality of their argument and the authority of their writing voice. By contrast, LLM-generated text is typically bland. “Talking to ChatGPT began to feel like every other interaction one has on the internet,” Ian Bogost reflected, “where some guy (always a guy) tries to convert the skim of a Wikipedia article into a case of definitive expertise.” AI writing tools produce generic ideas phrased in the blandest terms; academic writing, by definition, presents an original argument in compelling prose.
As a communication professional, I’ve also oriented my business, whether I’m hired as a writing coach or editor, around conversations with clients. Much as a therapist talks with a client about what’s bothering them, I talk with my clients about the challenges impeding their writing. Through conversation, we can efficiently chart a roadmap for completing their writing projects. Some plucky entrepreneur will eventually create an academic writing app (if one hasn’t already); people already download apps to alleviate stress and improve their mental health. Even so, I have a hard time believing such an app will capture the gray-area domain knowledge that editors accrue over decades. My real worries lie elsewhere, closer to the challenges confronting graduate clients like the student who emailed me.
I can see why AI writing tools appeal to international and first-generation graduate students. These tools could save them valuable time. Students might also improve their language skills by watching how AI writing tools arrange ideas on the page, assuming such tools demonstrate good writing form. But the clear editorial line between authorship and organization may blur in practice. Much like the undergraduate who cuts and pastes sentences from Wikipedia into an essay late at night, students will be tempted to generate just a paragraph or two of ideas with ChatGPT.
I also worry that users will be held responsible for the errors of AI writing tools, which could further stigmatize early career academic authors. For example, Professor Dave Karpf recently tweeted about his confusion after a student contacted him trying to locate an article he had supposedly written a decade earlier. The title resembled something Dr. Karpf might plausibly have published, but the article didn’t actually exist; presumably an AI tool had invented the citation. If a grad student were to cite such a nonexistent article in a class paper, at best they would fail the assignment. At worst, they could be slapped with an accusation of academic misconduct, because they crossed the line between authorship and organization.
Writ large, AI writing tools could also amplify academia’s already imposing publishing requirements. If your grad school buddy uses AI writing tools to write twice as many articles, it stands to reason that job search committees will view that level of productivity favorably, and the baseline expectation for everyone will rise accordingly. Such escalating publishing requirements would be bad news for an already dire academic job market.
Ultimately, I agree with ACM and Nature: to offload authorship to AI tools would be a tremendous disservice to society. But the point of writing is to learn how to organize knowledge, not just to format ideas generically enough to survive peer review. Writing skills are as much about organizing words as about generating them. If you automatically organize the ideas in an essay, you won’t gain the skills to organize similar thoughts on the page in the future. Nor will your arguments gain depth over time as you refine them and draw inspiration for your next writing project. Through writing, we also connect with the thoughts of authors, often across eras, long after they have passed on. It’s a joyous feeling to come to know people through their writing. Perhaps bots too feel joy as they read and write, but I doubt it. I’ll leave that debate to the philosophers!
Dr. Aure Schrock is the Senior Editor at Indelible Voice, an editing service for academic book proposals, book manuscripts, and journal article manuscripts. You can find them on Twitter at @aschrock.