Let’s start with some disclaimers
I’m a human writer. People give me money to write content, and I use the money to buy groceries and pay heating bills. Clearly, I am not a dispassionate observer when it comes to this discussion.
Another problem I have is an inability to predict the future accurately. No one knows what the future of AI will look like, and its future impact on the art of writing is likewise anyone’s guess.
There is no getting around it. GPT-3 is awesome.
To make matters more complicated, I have developed a full-blown obsession with GPT-3. I cannot get enough of it.
In the months since I’ve been using OpenAI’s ChatGPT, I’ve experimented with many use cases, and I’m only beginning to scratch the surface. The sheer versatility and fluency of the program beggars belief.
Everyone probably has a story of when their jaw dropped or they spat out their coffee as an unexpectedly brilliant or helpful response flashed up on the screen.
My personal favorite moment so far?
I’d been meaning to read up on the Pilgrim Fathers as a case study of radical innovation. The problem was I could never seem to get around to it.
I asked ChatGPT a straight-to-the-point question about what motivated the early settlers. I got a coherent answer with three distinct motivating factors: Religion, Economics, Politics. Nice. That would have probably taken me half the book to get to.
I changed tack, following up with a question about the settlers’ experience of the first winter in America, and back came an equally coherent and informative explanation.
Wanting to dig deeper and switch up the format, I typed the following:
This was not an enhanced version of Google Search. This was education like I had never experienced it before.
Apparently, ChatGPT is already revolutionizing university lectures by freeing lecturers from answering basic questions and allowing the whole class to progress more quickly.
Recently I’ve used it for Korean language learning, in conjunction with Google Translate. Ask ChatGPT to generate five sentences with a given grammatical structure, specific complexity level, and even subject area (e.g. finance) - and voila. You have your practice sentences ready for Google Translate.
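As a minimal sketch of that workflow - the function name and prompt wording are my own invention, not anything ChatGPT requires - the request boils down to a small prompt template:

```python
# A minimal sketch of the practice-sentence request described above.
# The function only assembles the prompt text; pasting it into ChatGPT
# (or sending it via an API) is left to the reader.

def practice_prompt(structure: str, level: str, topic: str, count: int = 5) -> str:
    """Build a prompt asking for language practice sentences."""
    return (
        f"Generate {count} sentences using the grammatical structure "
        f"'{structure}'. Keep the complexity {level}, and draw the "
        f"vocabulary from the subject area of {topic}."
    )

print(practice_prompt("past tense", "intermediate", "finance"))
```

The sentences that come back can then be pasted into Google Translate for checking, exactly as described above.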
This kind of technology feels like a giant leap from where we were just moments ago.
And it’s only getting started. It’s only going to get better.
What is technology good for?
Before getting too hyped up, let’s recall how people react to technological progress. You may be familiar with the Gartner Hype Cycle. If that model holds, we’re probably approaching the Peak of Inflated Expectations, and due for an overcorrection.
Let’s step outside the graph and consider: what does technology do, exactly?
In general, technology makes things cheaper and faster than they were before. Take the airplane: it cut the travel time from London to New York from days to hours.
As perhaps the defining technological innovation of our times, we should certainly look to AI to make things cheaper and faster. But the question remains, what things?
Up till now, the general rule has been that AI is:
- Good at things humans are bad at (e.g. organizing large datasets in milliseconds)
- Bad at things humans excel at (e.g. reading off-kilter text, as in a CAPTCHA).
Luckily this has so far been a net positive for society, as we humans do not tend to enjoy things that we are also bad at.
Enter AI-writing tools
Now we (finally) reach the main point of this post. Is writing one of those things that humans are “bad” at?
That’s not so clear.
A simple yet powerful example in AI’s favor is Grammarly. No matter how good your grammar is, it is easy to slip up late at night or when you are in a hurry. I personally would never press send without switching it on first.
For an hourly writer like me, if it makes the job faster, that is good for both me (I can take on more clients, diversifying my revenue streams) and my clients (jobs cost less money).
But how about writing itself? Again, making the case for AI, you could make the following points:
- Speed: Most AI tools can generate entire articles in seconds with only a few keywords as a prompt.
- Breadth & Depth: AI is limited only by the dataset on which it was trained (i.e. it has no hard limits). Humans tend to specialize in one or two areas at the most.
- Cost: For a similar piece of content, it would be almost impossible to find a human prepared to work as cheaply as an AI program.
While I’ve condensed the argument somewhat, on the surface it is a fairly powerful one. If writing is like grammar checking, the human option is not only lower in quality but also more expensive and slower.
Game over for Team Humans? Not so fast.
AI can’t think (yet)
“Two percent of the people think; three percent of the people think they think; and ninety-five percent of the people would rather die than think.” - George Bernard Shaw
Good old Shaw. Ever the optimist.
But whatever one thinks about people, when it comes to AI, or at least a large language model like GPT-3, it’s important to understand that it cannot think. At least, not in the way that conscious beings like us can.
Why is this so important to understand? Because it looks like it can!
And here, it is we (not AI) who are the danger to ourselves.
The human brain - while it certainly can think - has numerous flaws, documented in the book Thinking, Fast and Slow by Daniel Kahneman. One of these is a tendency to confuse plausibility with probability. In other words, a tendency to believe that the more convincing something is, the more likely it is to be true.
David Copperfield is a brilliant illusionist, whose tricks require years of preparation, thought, and skill to design. But he cannot fly. Nor have there been calls to replace flight schools with magic colleges. GPT-3’s fluent prose is a similarly convincing illusion of thought.
If we accept that thinking - and not just mimicry and imitation - is at the heart of writing original content, we can proceed to a more interesting question: how can a thinking being (a human writer) work with AI (an inanimate tool) to create better work than before?
The harder, more interesting, question
To answer this, we need to go through the difficult task of parsing the ‘humans are bad at’ tasks from the ‘humans excel at’ tasks. Here is a suggestion, based on my own experience working with ChatGPT so far.
What AI is good at
In general, closed-ended problems with clear rules and a finite number of potential solutions (e.g. Chess, Go) are where AI does best and humans consistently underperform it.
Writing examples include:
- Titles: Surprisingly, I’ve found this to be one of ChatGPT’s most reliably helpful skills. Because of the constrained nature of titles (only a few words) and the rules of grammar, it is effective at coming up with clever and funny titles (put the word “funny” or “clever” in the prompt) if you are prepared to generate and read through enough options. Here was one of its suggestions for this article: “The Pen is Mightier than the Algorithm? The Debate on AI in Writing”. That’s clever. It just is. (See this link for the full list).
- SEO: While this is not the case with ChatGPT (which is not connected to the internet and has no access to current information), AI tools can suggest and optimize for keywords, greatly increasing the chances of ranking highly in search engines. SEO is essentially an exercise in making a human medium (language) machine-readable (for the search algorithm), so it makes sense that AI can execute it far more efficiently.
- Checking and critiquing: When I’ve written a sentence or paragraph that is either crucial to the broader text or that I have misgivings about, asking ChatGPT to “Critique and make suggestions” not only generates concrete recommendations for improvement but also helps externalize the problem, making it easier to view objectively. I may or may not follow the recommendations, but the process itself is psychologically very helpful. Here is an example for this article.
- Changing length and complexity: Occasionally, it may be necessary to expand, contract, or alter a sentence. This is of course something you could do yourself given time, but delegating it to GPT-3 frees up brain space for more value-additive activities - and, importantly, you can instantly see if it has made an error, so the risk of a mistake going undetected is low.
- Changes of tone: This is definitely in the ‘jaw-dropping’ category of GPT-3’s many talents. The program can write (or re-write) large passages of text in a completely different style, all on the basis of a few words in the prompt. It requires judgment and taste to review the content, so it’s probably best not to stray too far from what you know. You want to avoid a ‘How do you do, fellow kids?’ moment (see the Gen Z response in this selection).
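Since every task in the list above reduces to a short, formulaic instruction, it can help to keep the prompts as reusable templates. Here is a minimal sketch - the helper names and wording are my own invention, not an established API:

```python
# Hypothetical one-line prompt templates for the tasks listed above.
# Each function only builds the prompt string; submitting it to ChatGPT
# is up to the reader.

def title_prompt(topic: str, style: str = "clever") -> str:
    """Ask for title suggestions (the 'funny'/'clever' trick from above)."""
    return f"Suggest ten {style} titles for an article about {topic}."

def critique_prompt(passage: str) -> str:
    """The 'Critique and make suggestions' request."""
    return f"Critique and make suggestions: {passage}"

def tone_prompt(passage: str, tone: str) -> str:
    """Rewrite a passage in a completely different style."""
    return f"Rewrite the following in a {tone} tone: {passage}"

print(title_prompt("AI and writing", style="funny"))
```

The value here is less the code than the habit: narrow, repeatable prompts are exactly the closed-ended tasks where, as argued above, the tool shines.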
On top of all this, one of the most intriguing aspects of AI is when, out of nowhere, it creates something truly awesome. Take the closing line from an AI-generated version of this article: “After all, it's not the pen that's mightier than the sword, it's the person behind the pen.” Not just alliterative, but profound. And - irony of ironies - created by an algorithm.
What AI is bad at
As already mentioned, bearing in mind that you are not interacting with a “thinking” being will give you the healthy skepticism you need to avoid getting taken down the wrong path.
For writing, things to watch out for are:
- Looking for supporting quotes/articles: If effective, this would be a truly revolutionary feature - not just for writing but for human knowledge in general. But at the moment, GPT-3 does not know when it is making things up. Sometimes it will tell you that it does not have access to information. At other times it will mix fact and falsehood - in particular, attributing quotations to people who never said them, or citing articles that were never published. For more on this, read about ‘hallucination’.
- Generating original ideas: The problem with using AI to ‘generate ideas’ is that you are highly likely to end up with a generic list. Any original idea that you get from AI will be a lucky mistake. The output is derived from insights that other people have written, and it may be closer to the aggregate point of view than the fringes, meaning that ideas will tend to be bland and unremarkable. If original thought leadership is your aim, then you may be out of luck.
- Summarizing: Asking AI to summarize anything longer than a few sentences is not something I have personally had good experiences with. The most useful application would be for large volumes of text (e.g. a chapter of a book). There is a reasonable debate to be had as to whether summarizing is a good idea at all (even by informed humans) - but expecting an AI to ‘pick out the important points’ is asking too much of a tool that does not understand what it is reading.
The points covered in the previous section are less about writing than about preparation. But what about writing itself? Well, I’d argue that those preparatory tasks are the basics of a solid, human-worthy article, post, or report. Finding the right material, supporting it with data, and weaving it into an original narrative: these are the essence of a good deliverable.
The rest - how it is packaged, the grammar, the tone - these are as important as the content, don’t get me wrong. But by themselves, they simply give the appearance of good writing.
Zen and the Art of Content Creation
One point I haven’t touched upon in the above is a very commonly cited, very slick feature of ChatGPT, which is the ability to generate a structure for an article.
I remember the first time I saw one take shape in front of my eyes, and thought to myself, “Blimey”.
At first glance, this move would seem to be a slam dunk.
The AI is not overreaching its abilities, just suggesting a framework. The writer still has to write the content, and the agonizing ‘blank page’ scenario - which can delay the start of the process - is no more.
I think that a by-product of the development of AI is to force humans to reflect on aspects of ourselves that we have never considered before.
In this case, I felt an instinctive resistance to this approach.
I once had a Latin tutor who gave me an interesting strategy for approaching translation exams. When given 1 hour to translate a passage of Latin, he said, do not write anything until the last ten minutes. Einstein is often quoted as saying something similar about what he would do if he had 60 seconds to save the world.
If you have read Zen in the Art of Archery, a book about a German man’s experience learning Zen archery from a Japanese master, you may recall a passage where he asks his teacher when he should let go of the string. ‘It shoots’ replies the Master. The student questions him further:
"And who or what is this "It"?"
"Once you have understood that, you will have no further need of me. And if I tried to give you a clue at the cost of your own experience, I should be the worst of teachers and should deserve to be sacked! So let’s stop talking about it and go on practicing."
Another way to describe the experience would be the well-known analogy of a leaf suddenly bending under the weight of accumulated snow. The leaf does not decide to bend.
Writing is like this. It is not a mechanistic process, like putting together an office chair. It is never obvious when the moment to stop researching and start writing will come - until it comes.
By committing to a structure too soon, you are suddenly following an instruction manual, unnaturally constraining yourself, and will - amongst other things - end up with an article that looks almost exactly the same as a hundred others. Just like office furniture.
How I use AI for writing
Having established that there is no substitute for inspiration, I still find myself turning to ChatGPT several times in a given writing session, keeping in mind the following rules:
- Use it primarily for narrow, tightly-defined tasks: rewording a sentence, brainstorming a title, suggesting alternative words or phrasing.
- For anything bigger, wait until you have passed the mid-point: That way, you will be able to see if you have missed anything in your structure, without being overly influenced by it.
- Avoid using it for research: unless you are already familiar enough to verify it yourself (or sense that something is off), or can confirm with another source.
The last point bears repeating, if only because the experience of asking a seemingly omniscient computer any question and getting a digestible answer is so appealing. Using GPT-3 for research purposes can be fun when the stakes are low (e.g. in a personal context - see my American history example). But in a professional context, it is a bit like playing Russian roulette with your reputation. Don’t do it!
More broadly, I would guess that the more reliant one is on AI, the greater the risk to the quality of output. At the risk of repeating myself, AI can’t think, so asking it to do your thinking for you is unwise.
So is AI a good thing?
All that said, AI is an unavoidable and exciting part of our future.
It’s not clear yet what it’s going to revolutionize, and getting to know its capabilities and limitations is an ongoing process. I would urge both writers and purchasers of written content to get stuck in. As with blockchain technology, we need to find a nail for this hammer.
If I had to guess, I think that no matter how advanced the AI, our relationship to it will be one of partnership - filling in for each other’s flaws and foibles. For the time being, we are the senior partner. When AI does learn how to think, we will have other issues to worry about.
If middle-of-the-road content is now available at the click of a button, everyone can make it. How are you going to stand out in this ocean of sameness?
The premium on originality just increased, and human writers are - as of now - the only known suppliers.
[I should state that AI did not write a single sentence of this article, although I did consult ChatGPT multiple times. If you'd like to compare an AI-generated version, please see this link, or log on to ChatGPT and experiment directly].