
ChatGPT can't replace writing

September 20, 2023

Note: this essay operates under the (somewhat questionable) assumption that for the foreseeable future, advancements in AI will be limited to improvements, however significant, on the large-language-model (LLM) technology that underlies ChatGPT. On that note, I will often refer to LLMs in general as "ChatGPT", since more people are familiar with that name.

If you've ever bullshitted an essay,1 you probably know what it feels like to search for an argument that fits words you've already written. But if you've written an essay about something you're passionate about, you've faced the much more rewarding challenge of finding the best words to fit your argument.2

In a future where ChatGPT is ubiquitous, the first, non-substantive type of writing will cease to exist—so easily created that it'll hold no value. But ChatGPT won't (and can't) replace the latter type. Why? Because the writing process itself is essential for communicating human ideas.

The communication problem

You have ideas about a topic—it could be mundane, it could be profound. For me, it might be the reality TV show Love Island (the British version, of course); for my friend Zach, whether or not humans have free will. It could be anything. No matter what topic you choose, you have all sorts of snippets of ideas floating around in your head, and they're all linked by thousands of little connections that form a network of your thoughts.

The issue comes when you want to communicate those ideas to someone. In an ideal world, they'd end up with that exact same network in their head. Of course, in the real world, a perfect copy isn't possible, so the goal is to maximize how much of that network gets reflected in their mind. Since you can't convey every single idea and connection, which ones you choose to convey makes a big difference in how much of your thought network the other person understands.

The writing process is key for making those choices. When you speak, for example, you're communicating only the ideas that come to the top of your mind.3 Writing, on the other hand, allows you to edit, reword, and reorganize in order to maximize4 how much you communicate.

And ChatGPT can't replace this. Because while ChatGPT may be able to choose some ideas and some connections, it can't choose your ideas and your connections. Only you can find the words that best express your internal thoughts. And your ideas are almost always going to be better and more interesting than the amalgamation of the internet's thoughts that ChatGPT expresses.5

This extends to other forms of creative expression, like film. An actor makes deliberate choices in their performance like facial cues and body language to best portray their character. Though AI might at some point be able to convincingly render a fake actor, it can't deliberately make the decisions a great human actor does.

What about education?

It would be hard to discuss ChatGPT's impact on writing without addressing how it will affect writing education, an issue educators across the country are scrambling to address.6 Without presuming to have a solution to this ridiculously complex problem, I have a rough sketch of what I think the future holds.

Current university-level writing rubrics place roughly equal weight on two types of criteria: those like grammar and citing evidence (which, excluding hallucinations, LLMs are great at), and those like argumentation and originality (which LLMs are less great at). In a world where every student has an LLM integrated into their text editor that constantly checks for grammatical mistakes and can immediately cite relevant evidence from anywhere in source texts, rubrics will adapt. LLM-friendly criteria will be reduced to hallucination checks, and human-idea-friendly ones will take on more weight.

The high school level and below will go through a similar process. Time previously used to teach, for example, grammar,7 will instead be used to teach students how to come up with interesting, original arguments, and how to refine their skills at structuring their writing to best communicate them.8

Now, I have no clue what the timeframe is for this entire transition, but at some point in the near-ish future, gone will be the days of getting a good grade on a vapid essay simply because I know how to write clearly. But hopefully, the better writing process that replaces it will be more interesting for us all.

1. Which I, of course, have never done.

2. I use the term "argument" throughout this essay because the argumentative essay is easier to focus on. But this applies to the central idea of any type of writing—the question of a research paper, the narrative of a story, etc.

3. Spoken ideas are often pretty effective in their own right, especially since when you speak, you have dimensions like tone and pace to help express yourself. But specifically for choosing ideas and connections, the point holds.

4. In fact, the upper limit of understanding a person can take in is usually the number of ideas and connections they're able to read, and thus the number you're able to write down.

Importantly, this is the upper limit of understanding, not of information. You can take in more information simply by reading more, but reading, say, a whole almanac of economic statistics wouldn't bring you closer to understanding the evolution of the US economy over the last 50 years.

5. This will become even more true as future LLM iterations train on datasets containing more and more LLM-generated content. Additionally, in a future where ChatGPT is ubiquitous, inherent value will be placed on human-generated ideas, similar to how we value hand-crafted, artisan goods in today's world of mass manufacturing.

6. Since it's impossible to reliably detect AI-written content, schools have (commendably) moved quickly away from banning ChatGPT and have proposed many solutions, from requiring in-person essays to requiring ChatGPT as a writing tutor. Still, I think many of these solutions may have long-term issues, as they're targeted at a world where writing hasn't fundamentally shifted.

Ethan Mollick has a good article on this topic that covers writing as well as other types of assignments, like problem sets and readings.

7. In place of formal grammar instruction, the LLM technology in word processors will simply have a mode that teaches and explains correct grammar choices on the fly during the writing process.

At the elementary level, where the foundations of writing are usually taught, I doubt whether argumentation is something that can be learned. So I imagine other AI-enabled tools—to teach spelling, for example—will become popular. The key for these tools (which will probably exist at higher levels of education, too) is that using them to guide human writing must be more convenient than using ChatGPT to simply correct it automatically.

8. This, I hope, will make writing much more interesting for students. Learning evidence with the mindset of forming an argument around it, then having the flexibility to write that argument, is certainly more engaging than having to conform to the pieces of evidence they've happened to memorize. (This will also benefit teachers, who will get to read more original essays.)