Will ChatGPT silence human writers?
Author: Rob Ashton
Posted: 30/01/23
If you’re ever feeling down about what you’ve achieved in life, this surprising fact might cheer you up. You managed to rewire your own brain before you were ten years old.
There – feel better now? You should. Mozart may have composed his first symphony when he was eight, but he has nothing on you. How did you do it? By learning to read and write. (To be fair, Mozart rewired his brain this way too, so maybe he does have the edge.)
That we can learn to read at all is a miracle of adaptation – as I’ve explained before. We rarely give it a second thought once we’ve done it. But learning to read and write involves creating a huge new network of nerves across our brain, which connects multiple systems that we evolved for other purposes.
Acquiring these skills is probably the greatest achievement of any of our lives. And most of us accomplish this incredible feat in childhood.
Both reading and writing are technically much harder than we tend to assume. Even so, computers have long been able to do the reading part. But writing in a way that’s convincingly human has always remained stubbornly out of reach. Until now, that is.
As you may have noticed, an artificial intelligence app called ChatGPT has been causing quite a stir recently. It’s not often that tech bursts out of computer blogs and into the general consciousness. But this is definitely one of those moments.
It took Twitter two years to get its first million users. Facebook took ten months. ChatGPT took five days. News about the new writing bot has been splashed across every online news site from the New York Times to the Daily Mail since it went live in the final weeks of 2022. There have been plenty of times when tech has been over-hyped. I doubt this is one of them.
Yet there are two things that news reports – and most other commentators – have missed about the meteoric rise of this incredible technology. The first is that writing matters more than most people have previously given it credit for. In the digital age, we’re far more likely to write to colleagues, clients or even friends than we are to speak with them. (My guess is that you tap on your phone’s keyboard far more than you use its microphone to actually speak to anyone.)
The second is that writing is hard. So anyone who can do it well has a distinct advantage.
Writing is a messy, difficult and often tortuous process, even for those of us who do it professionally. We resist writing long documents until we have to.
Take this writer, for example. I’m typing these words only after procrastinating for far too long on social media. It took more than one nudge from my editor to get me to commit my first words to the screen. ‘I’m working on it now,’ I guiltily reassured her on Slack. What I didn’t say was that I’d spent the last two hours avoiding the task, although I strongly suspect she’d already rumbled me from her LinkedIn feed. (Sorry, Catie.)
Writing is frequently painful. So anything that takes that pain away has to be a good thing, right? Well, yes and no.
On the plus side, ChatGPT helps because it’s staggeringly fast. I’ve spent weeks investigating and thinking about the topic for this post. I’ve gathered opinion from my LinkedIn followers and whiled away otherwise idle moments pondering what the future might hold. I’m a fast writer (even if I can’t start until I’m staring down the barrel of a deadline). So this article will take me no more than a few hours to write. But I have nothing on ChatGPT, which can produce its content in seconds. Writer’s block? What’s that?
It’s also convincing. One of my favourite BBC documentary radio shows used a ChatGPT-generated script for an introduction to a recent episode. I couldn’t tell the difference.
It even seems to have cracked that most difficult of writing skills: humour. Like this piece it produced in response to a user request to write about removing a peanut butter sandwich from a VCR, in the style of the King James Bible:
Image: Thomas H. Ptacek / Twitter
User’s prompt: write a biblical verse in the style of the king james bible explaining how to remove a peanut butter sandwich from a VCR
ChatGPT response: And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed within his VCR, and he knew not how to remove it.
And he cried out to the Lord, saying, “Oh, Lord, how can I remove this sandwich from my VCR, for it is stuck fast and will not budge?”
And the Lord spoke unto him, saying, “Fear not, my child, for I shall guide thy hand and show thee the way. Take thy butter knife, and carefully insert it between the sandwich and the VCR, and gently pry them apart. And with patience and perseverance, the sandwich shall be removed, and thy VCR shall be saved.”
And the man did as the Lord commanded, and lo and behold, the sandwich was removed from the VCR, and the man was saved.
And the Lord said, “Verily I say unto thee, seek not to put thy peanut butter sandwiches in thy VCR, for it is not a suitable place for such things. Rather, keep thy sandwiches in thy refrigerator or on thy plate, where they belong.”
And the man heeded the Lord’s words, and from that day forth, he kept his sandwiches in their proper place, and was saved from trouble and woe. Amen.
If your job depends on producing reports, proposals, customer emails, live chat or, in fact, any form of writing (and that’s most of us), ChatGPT would seem to spell trouble. We might have rewired our brains and been writing better than the bots for years, but now the bots have caught up. Or have they?
Dig a little deeper, and all is not quite as it seems. Here’s that introduction from the 12 January episode of Inside Science, the BBC Radio programme I mentioned just now:
Welcome to BBC Inside Science, where we explore the cutting edge of science and technology. This week we’re delving into the swinging world of gibbons and their unique sense of rhythm. We’ll also be discussing the ethical debates surrounding whether residents living near wind farms should receive free electricity. And in the light of the recent failed satellite launch in Cornwall, we’ll be examining the algorithms behind these complex operations.
Notice anything strange? Like I said, I didn’t. And that’s one of the problems, because hidden in this very convincing, very human-sounding passage is something that is absolutely not true.
The radio presenter, Marnie Chesterton, had asked the bot to write the intro and given it a list of topics she’d be covering. She’d also asked it to do it with wit, style and ‘a passing reference to the recent failed satellite launch in Cornwall’. (This is called the ‘prompt’ – a new piece of tech jargon which I suspect will become increasingly familiar.)
ChatGPT got almost everything right, but it bungled the final sentence. It couldn’t find the answer to why the satellite launch failed, so it made one up about algorithms and put that in instead.
It couldn’t have known the real answer, because it has only a limited knowledge of events after 2021. The ill-fated launch was in January this year, long after the developers stopped filling the chatbot’s memory banks.
It does this a lot. ChatGPT draws on 580GB of data to produce its answers. Its memory is crammed with an eye-watering number of facts on everything from particle physics to puff pastry. But it doesn’t know everything. If you’re an expert on a particular topic, you’ll easily be able to catch it out in a fabrication. Stray beyond that topic, though, and you probably won’t notice.
Nor will non-experts on your specialist subject spot the errors that jump out at you. We tend to believe by default things that are easy to read. (It’s called the fluency heuristic.) So ChatGPT’s flowing prose has the potential to do a lot of damage.
Which brings us to the second problem: it doesn’t tell you where it got its facts from. (To be fair, it’s accurately reflecting the habit of 90% of bloggers on the internet in that regard.) At least, it doesn’t unless you ask it to. This dramatically increases the risk of the spread of inaccurate information, both because most people won’t ask it to cite sources and because it can produce its misinformation at a terrifying pace.
It’s also why you should never rely on it to produce the final version of an important document.
ChatGPT’s algorithm produces its convincingly human style using probability. It knows which words most often follow other words in sentences and builds its own accordingly, drawing in facts from its database as it goes. It’s like Google crossed with your smartphone’s predictive text function, but on steroids.
Often, its style is bland and generic, as it mimics how the average human writes – that is, badly.
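If you’re curious what ‘using probability’ means in practice, here’s a deliberately simplified sketch in Python. It counts which words follow which in a short sample of text, then strings a new sentence together by repeatedly choosing a likely next word. The real thing uses neural networks trained on a vast dataset rather than a little look-up table, so treat this purely as an illustration of the next-word-prediction idea, not as how ChatGPT is actually built.

```python
import random
from collections import defaultdict, Counter

# A toy illustration of next-word prediction. Real large language models
# use neural networks trained on enormous datasets; this simple look-up
# table only shows the underlying idea.

sample_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog around the mat"
)
words = sample_text.split()

# Count which words follow each word in the sample.
followers = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def generate(start_word, length=10):
    """Build a sentence by repeatedly choosing a likely next word."""
    output = [start_word]
    for _ in range(length):
        options = followers.get(output[-1])
        if not options:
            break
        # Pick the next word with probability proportional to how often
        # it followed the current word in the sample text.
        choices, counts = zip(*options.items())
        output.append(random.choices(choices, weights=counts)[0])
    return " ".join(output)

print(generate("the"))
# Possible output: "the cat sat on the rug and the dog sat"
```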
I said earlier that writing is a messy, often tortuous process. But it’s that very process, frustrating and infuriating though it is, that often produces our best ideas. An algorithm can’t reproduce human creativity and it never will, for the simple reason that the human brain doesn’t have algorithms. That’s not how it works.
Our eureka moments – the high-value insights or messages that cut through when others have failed – spring out of our unstructured, messy lives. They depend on seemingly unrelated ideas colliding, usually when we’re distracted by a mundane task such as showering or unloading the dishwasher. These tasks engage the brain’s mind-wandering ‘default mode’, which is the exact opposite of focused thought.
Our lives are full of these mundane moments, but it’s in those that the gold is buried. We collect those golden nuggets (often unconsciously) over the days and weeks of our daily lives or the months and years of a career.
And, when we sit down to write, we assemble and share them with our clients, colleagues or friends. It’s the very process of writing that forces order out of the jumble of ideas in our heads. Writing itself – human writing – is an essential part of original thought and, therefore, progress. Write well, think well (and vice versa). A bot can never produce an original thought, for the simple reason that it runs on what others have thought already.
As I write that myself, though, I know I’m in danger of missing the point – just as many others have. Using ChatGPT to replace original human writing is too obvious a ploy and a waste of its true capability.
When Google Earth went live, it gave us the chance to travel virtually to any point on the globe. But the first thing many of us did was use this revolutionary new technology to see what our own home looked like. We always start with what’s familiar when it comes to new advances like this, partly because we’ve yet to imagine the possibilities and partly because it makes them feel less threatening.
Yet there are many ways that ChatGPT can help us – with writing and more. So many ways, in fact, that they could make up an entire post. (Or, indeed, an entire book. Prompt, which shows you how to use the tool to better understand your audience and even transform your career, is a great example.)
But if you want to discover its true possibilities, you need to think of it as an assistant who can help make your work even better rather than someone to whom you outsource your thinking entirely.
You could, for example, tell it what you’re writing about, paste in what you’ve written and ask it what you’ve missed. Or you could ask it to summarise a longer document. You can even tell it what style you’d like it to use.
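If you’d rather build that kind of assistance into your own tools than type into the chat window, the same approach works through OpenAI’s API. The sketch below is only illustrative: it assumes the official OpenAI Python client and an API key, and the model name and prompt wording are placeholders you’d choose yourself.

```python
# A minimal sketch of asking a model to review a draft, assuming the
# official OpenAI Python client and an API key in the environment.
# The model name and prompt wording are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Our Q3 report shows revenue grew, but we have not explained why."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are an editor. Point out what the draft has missed; do not rewrite it."},
        {"role": "user",
         "content": f"I'm writing a quarterly business update. What is missing from this draft?\n\n{draft}"},
    ],
)

print(response.choices[0].message.content)
```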
Always be very careful, though. Check and triple-check what it’s written. And never upload confidential or proprietary information.
According to an article published by Insider this week, reportedly based on leaked Slack messages, Amazon has allegedly warned its employees against sharing corporate information with ChatGPT. The report claims Amazon’s lawyers had spotted AI-generated text that resembled recent internal company data.
If the claims are true, quite how the data got there is not clear: OpenAI says ChatGPT uses a database that hasn’t been updated for over a year. But the technology is free partly so that it can be tested on the public and developed further. As part of that process, it stores whatever prompts you enter. Who knows where that information may turn up later.
Indeed, OpenAI now states that its developers may review whatever you upload and warns users against sharing sensitive information.
Whatever applications we dream up for ChatGPT (or any other writing bot, for that matter), it’s not about to replace human originality any time soon. The human brain is the most complex structure in the known universe, and it doesn’t run on algorithms. Computer programs can do a lot of things, but generating well-formed, original ideas is not one of them – no matter how convincingly AI-generated sentences and paragraphs might suggest otherwise.
So if you think a bot that does a good impression of a human could replace you, heed the signs. That probably says more about your current role than it does about ChatGPT. You can write to fill space and tick a box, or you can contribute something original. Which will it be?
Image credit: Phonlamai Photo / Shutterstock