Will ChatGPT Soon Replace Editors?
Artificial intelligence is a useful tool, but it can’t completely supplant the human element
Amid the recent panic over AI, Discourse has published several pieces reassuring our readers that artificial intelligence most likely won’t destroy humanity. Meanwhile, frequent contributor Robert Tracinski has argued, to my mind convincingly, that AI will not take all our jobs either. Nevertheless, many in the writing and publishing industry are convinced that AI represents a real threat to their career prospects.
As a member of that industry, I’ll admit I’ve occasionally wondered whether I ought to consider a career change. In a world where The New York Times can eliminate its copy desk, anything is possible—and that was before artificial intelligence became a mainstream bogeyman. But ultimately, I just don’t think AI can effectively replace human beings as writers and editors. That’s because there’s an interpersonal element to these activities that ChatGPT can’t (or at least can’t yet) replicate.
But don’t editors sit alone in a room staring at their computers for most of the day? There’s a grain of truth to that notion—the promise that my introverted self would never have to deal with the public was certainly part of what made this career path so appealing to me. But people skills are actually a large part of an editor’s job. We don’t edit in a vacuum; we edit the work of others, others who understandably have opinions and preferences and feelings about their work. Some writers don’t mind a more thorough, detailed edit, while others get overwhelmed when they see a lot of changes. Some writers like to be rewritten because it saves them work, while others prefer to get clarifying questions and then do any revisions themselves. So one key skill for editors is knowing how to tailor their editorial approach to individual writers; this ensures the best possible final product as well as a smooth working relationship.
Of course, writers might wish to forgo dealing with editors altogether by simply asking ChatGPT to edit their text—cutting out the middleman. But I think it would be difficult, if not impossible, to come up with a prompt that would generate a satisfying result. For example, I fed the previous paragraph into ChatGPT with the prompt, “Perform a substantive edit for the following text,” and it generated a somewhat stiff and stilted paraphrase rather than a true edit. I then gave it further instructions: “Now edit paying attention to substantive content, writing style and grammar. Use clear and concise language that is neither too formal nor too chatty.” Here’s the result:
“Is editing really about interpersonal skills?” you may ask. While many picture editors immersed in solitude, the reality is more nuanced. I was initially drawn to this career, thinking it'd suit my introverted nature. However, communication is central to the role. We don't edit in isolation; we collaborate with authors, each with their own sentiments and preferences. Some welcome detailed feedback, while others may feel overwhelmed. Some appreciate editorial rewrites, while others prefer to handle revisions themselves. A crucial skill for editors is adapting their approach to each writer, ensuring quality work and a harmonious partnership.
This isn’t a terrible result, by any means. It’s certainly more concise, which some might see as an improvement. But to me it reads as flat and impersonal; I no longer recognize my own voice. Perhaps a more sophisticated prompt writer could formulate a set of instructions that would produce a better result—but that person would need enough editing expertise to understand what “better” even means. Most writers (myself very much included) have a blind spot when it comes to their own work. They don’t see what’s required to improve their writing; if they did, they wouldn’t need an editor. So if they’re the ones asking ChatGPT for help, the AI will be just as blind as its users to the real improvements that should be made.
Another useful function of human editors is their ability to mimic the reader’s experience. Many of Discourse’s writers are experts in their field—economists, lawyers, university professors, and so on. They are extremely intelligent and knowledgeable, but they don’t always remember that nonexpert readers won’t have the necessary knowledge or context to follow the argument. That’s where I come in: As a nonexpert myself, I can note where I’m confused and ask writers to give more contextual detail, add a citation or spell out the relationship between premise A and conclusion B.
Now, ChatGPT can replicate this function to an extent. You can give it a chunk of long, complicated text and ask it to create a summary in two or three sentences. You can ask it to explain the concept in terms that a 16-year-old would understand. Those instructions probably would produce a revision that most readers would be able to follow. But the AI wouldn’t necessarily identify or understand why a particular argument is confusing, and thus the revision still might not be optimal for reader comprehension.
For example, I once edited a fairly long piece that involved a complex legal argument. The piece referenced the incorporation doctrine of the 14th Amendment, a concept that is probably not familiar to most nonlawyers. Just now, I gave ChatGPT the prompt, “Using simple language, summarize the incorporation doctrine of the 14th Amendment,” and it generated a readable, reasonably accurate two-paragraph explanation of the doctrine. But that explanation wouldn’t have been terribly useful for the piece in question, as the discussion of the incorporation doctrine turned out to be irrelevant to the main takeaway.
I ended up deleting all mention of the doctrine in the original piece, a solution I doubt ChatGPT would have suggested if it had merely been prompted to summarize or simplify the original text. Again, a better prompt might have produced a better outcome, but the prompter would have needed the knowledge and judgment necessary to identify what was relevant in the original piece.
There is one area where ChatGPT seems (at least in my limited experience) to be an excellent substitute for a human editor, however: catching typos. It’s an improvement over that now-ancient version of AI, spell-check, which never noticed when you typed “manger” instead of “manager” or left the L out of “public.” I briefly experimented with a recent Discourse piece, “The Roar of the Argentinian Lion,” transposing the vowels in the word “lion” and asking ChatGPT to check for typos. It spotted the misspelling and suggested the correct replacement word. Outsourcing this task to AI does seem like a timesaver, especially when dealing with large volumes of text. But I’ll admit, I don’t fully trust it. I’d still want to go back and check its work, and at that point it might be more efficient just to hire a trustworthy human proofreader.
Bottom line, I do think we’re moving toward a world in which AI will take over some editorial tasks. Editors will have to learn new skills—prompt writing, for one—so that they can get the most out of ChatGPT and other tools. But editing also requires so-called soft skills and the ability to build relationships, and in those areas, human beings are still indispensable. Perhaps this is just wishful thinking on my part, as I don’t particularly want to contemplate a career change, but I’m not worried about AI taking my job just yet.
What I’m reading: Since today’s AI was the science fiction of the past, I got to thinking about science fiction authors I enjoy, and topping the list is Connie Willis. Her writing varies widely in tone, from the heavy and heartbreaking “Doomsday Book”—which is excellent but may not be to everyone’s taste as it involves not one but two pandemics—to the joyful screwball romp that is “To Say Nothing of the Dog,” one of my all-time favorite books.
But the novella “Remake,” which I read for the first time recently (available in the Willis anthology “Terra Incognita”), is particularly resonant in its exploration of the relationship between technology and humanity. Written in 1995, it depicts an eerily prescient future in which Hollywood doesn’t make live-action movies anymore. Rather, the film industry is completely computerized, and those with the technological know-how can manipulate movies in any way they choose. If you want to change “Casablanca” so that Brad Pitt replaces Humphrey Bogart, or so that Rick and Ilsa get together in the end, it’s doable with just a few keystrokes.
Into this overproduced world—in which narrator Tom and his friends are jaded and weary, numbing themselves with drugs and other self-destructive behaviors—comes Alis, a fresh-faced young woman who dreams of dancing in the movies. Her goal seems impossible: Movie musicals are a thing of the past, and the best she can hope for is to digitally superimpose her own face onto Ginger Rogers’ body. Nevertheless, Alis is persistent, and with Tom’s help she eventually discovers a way to achieve her dream. Crucially, she doesn’t succeed by somehow abolishing or limiting the technology available in her world. On the contrary, she figures out a new application of the existing technology that allows her to reach her goal.
“Remake” is more than its concept, though; it’s also a bittersweet romance, an exploration of the overlap between technology and art, and a loving homage to classic films in general and Fred Astaire movies in particular. I recommend it, as well as any other book by Willis. You really can’t go wrong!
Colleen Hroncich, “Has the Tide Turned on School Choice?”
Naomi Lopez, “COVID's Education Revolution”
James Lileks, “What Happened to Travel Writing?”
Salim Furth, “Is Policy Writing a Newscast or an Advertisement?”
Ben Klutsey interviewing Jay Cost, “Why America Is Both Democracy and Republic”
Kate De Lanoy, “Are You Laughing Yet?”
From the Archives
Joe Romance, “Does the Timing of the Trump Indictments Matter?”
Michael J. Ard and Michael Puttré, “The Gaza War Reaffirms America's Essential Role in the Middle East”