The hidden danger of writing with AI

One of the biggest dangers of relying on AI writing bots is perhaps not the most obvious.

It’s not that what they produce is inferior to human writing. It’s that the opposite is true.

Generative AI apps like ChatGPT or Microsoft Copilot can produce text that is often astonishingly realistic. They can write emails, outline and design PowerPoint decks, or summarise reports at a speed that no human could ever hope to match.

Too good

The trouble is that their output is almost too good.

It’s so quick and authentic that we often don’t give it a second glance. So we risk sending messages or submitting documents that don’t say what we think they do.

For people who rely on AI to make writing easier and faster, this is a huge problem. And it’s one that can affect anyone with a human brain – myself included.

Last week, my colleague Luke sent me an AI summary of my Writing Matters article, Why it’s so hard to change how we write, to get my opinion on how well Copilot had done.

As far as I could tell, it had hit all the right points. In fact, I thought it was spot-on.

But I was wrong.

Hidden mistakes

It took Luke to point out a critical error, right in the middle of the text. (It had incorrectly defined the word Documentese.)

That’s right: I’d completely missed a mistake in a summary of my own article. Not just that, but I’d failed to spot it in a summary that was only five sentences long.

Why? Well, firstly, it read well. And as I explained recently, the brain tends not to look too closely at text that’s easy to process. (Purveyors of fake news take advantage of this all the time.)

Then my own confirmation bias kicked in. I assumed it was all accurate simply because I recognised many of my own key facts.

This second mistake is a particular risk with AI, which is often working with information that’s already familiar to us. (That could be a prompt we’ve written or an email we’ve just received.)

When we recognise that information in its output, we tend to assume everything else is correct too.

Huge risk

I’ve yet to see anyone else highlight this risk, but the potential for mistakes and miscommunication is huge.

A colleague recently told me that practically nobody at one major consulting firm actually reads most of the reports they receive.

Instead, they use the firm’s secure, internal version of ChatGPT to produce a summary for them.

That means thousands of the firm’s day-to-day decisions are now based on what a bot says the authors of the reports wrote.

And this is not an isolated case.

Human nature

Writing has become central to everything we do, and it’s hard work.

So organisations across the globe are now using AI to generate and process written information for them.

But in focusing on the astonishing power of this Digital Age technology, they’ve overlooked one thing: the Stone Age brains of the humans who use it.

Mind the trap

Never, ever outsource your thinking or your writing to AI.

It may save time in the short term, but the potential costs are enormous.

That’s not to say you shouldn’t use AI at all – far from it. But think of it as your own digital assistant.

It’s not a tool to get the writing out of the way so you can focus on the ‘real’ work. Writing usually is the real work.

To avoid the familiarity trap I fell into, do all that you can to make its output less familiar. Try saving it as a PDF, or changing the font or colour. Read it on another device or use a screen reader so you can listen to it instead.

Remember, AI doesn’t just make mistakes. It lies to your face so fluently and eloquently that you never suspect a thing.

