
The dangers and limitations of AI writing tools

Leif Kendall

ProCopywriters

Artificial intelligence is magic.

AI tools like Midjourney and ChatGPT produce stunning images and put words in the right order.

BUT.

AI tools are terrible at writing words that matter.

As we will see, AI tools are blatant, unrepentant liars – and they don’t even know when they are lying.

AI tools work by gobbling up gigabytes of text, mashing it around with some statistics, and then spewing out new sentences that are statistically similar to the sentences they have ingested.
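To make that concrete, here is a minimal, purely illustrative sketch (in Python, with invented words and probabilities) of the next-word prediction at the heart of these tools. Note what is missing: nothing in the loop checks whether the output is true, only whether each word plausibly follows the last.

```python
import random

# A toy "language model": for each word, some plausible next words and
# their probabilities. Real models learn billions of such statistics from
# their training data; these words and numbers are invented for illustration.
NEXT_WORD_PROBS = {
    "the":   [("court", 0.4), ("case", 0.35), ("ruling", 0.25)],
    "court": [("ruled", 0.5), ("found", 0.3), ("held", 0.2)],
    "case":  [("citation", 0.6), ("law", 0.4)],
}

def generate(start: str, max_words: int = 6) -> str:
    """Generate text by repeatedly sampling a statistically likely next word.
    Nothing here verifies facts; fluency is the only criterion."""
    words = [start]
    for _ in range(max_words - 1):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no statistics for this word: stop
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the court ruled" -- fluent, but never fact-checked
```

Scale that toy up by a few billion parameters and you get fluent, confident prose produced by the same truth-blind process.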

Unsurprisingly, this makes AI a horrible alternative to human writers.

More than that, AI tools are currently (in July 2023) a dangerous alternative to original research and writing, one that can expose your organisation to ridicule, lost income and legal action.

Here’s why AI tools are probably more trouble than they’re worth.

 

AI makes it difficult to separate fact from fiction.

Ask AI to quote its sources and it will.

But some of those sources don’t exist.

A New York law firm, Levidow, Levidow & Oberman, P.C., discovered this the hard way when they used ChatGPT to help them write a court brief.

The brief they filed “included references to a number of fake cases”.

ChatGPT had fabricated legal cases based on the data it had consumed. The lawyers had no idea that ChatGPT was so creative.

Lawyer Steven Schwartz wrote in a response after his mistake was discovered:

“I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic. I deeply regret my decision to use ChatGPT for legal research, and it is certainly not something I will ever do again.”

Source: Business Insider | A law firm was fined $5,000 after one of its lawyers used ChatGPT to write a court brief riddled with fake case references

AI can be dangerously wrong.

One publication that decided to embrace AI authorship, Men’s Journal, quickly discovered that their new artificial interns should not be trusted.

Their first AI article, on the role of testosterone in men’s health, was riddled with errors.

Jon Christian for Futurism writes: “Like most AI-generated content, the article was written with the confident authority of an actual expert. But on closer inspection, the whole thing fell apart. Bradley Anawalt, the chief of medicine at the University of Washington Medical Center who has held leadership positions at the Endocrine Society, reviewed the article and told Futurism that it contained persistent factual mistakes and mischaracterizations of medical science that provide readers with a profoundly warped understanding of health issues.”

Source: Futurism | Magazine Publishes Serious Errors in First AI-Generated Health Article

AI sounds convincing even when it’s making up nonsense.

Because AI chatbots can construct grammatically correct sentences and include the right sort of words and phrases, their output can seem very convincing.

A great example of this illusion of intelligence was spotted on Reddit when one user asked ChatGPT to explain what makes the song ‘A Whiter Shade of Pale’ so interesting.

ChatGPT spewed out a detailed response that seemed to be packed with music theory.

At first glance, to a non-expert, it looks intelligent and accurate.

But Redditors quickly picked the response apart, cataloguing its mistakes.

It was wrong about almost everything.

One Redditor commented:

“That sounds like it would be very convincing to somebody who doesn’t know anything about music, even though it’s wildly inaccurate.”

 

Source: Reddit | Do you use ChatGPT to learn about music theory? (with example)

AI won’t amuse, confuse or delight your audience. AI is boring.

The current crop of AI tools does not seamlessly weave powerful insights with dashes of emotion, humour or character. AI content is flat and uninspired.

If you want any kind of levity in your content – even just a smidgen of personality to lighten your material – you’ll need a human to write it.

AI won’t impress search engines.

Google’s primary job as a search engine is to bring users the best answers to their questions. Google executives talk about evaluating the quality of online content using a framework they call E-E-A-T: Experience, Expertise, Authoritativeness and Trustworthiness.

It’s currently impossible for an AI tool to produce content that meets all of these requirements – unless of course the initial AI content is heavily edited and improved by human writers. You quickly get to a point at which the AI tool causes more trouble than it’s worth – especially if it is introducing made-up ‘facts’.

Google’s search engine boss Prabhakar Raghavan cautions users to be careful with AI chatbots: “This type of artificial intelligence we’re talking about can sometimes lead to something we call hallucination. This is then expressed in such a way that a machine delivers a convincing but completely fictitious answer.”

Source: WELT AM SONNTAG

Clearly, “fictitious answers” from AI are not compatible with Google’s E-E-A-T quality framework.

Using AI liberally is unlikely to win you readers – or rankings.

AI can’t respond to your client’s specific needs.

A typical client-copywriter interaction involves conversations, meetings, interviews, brief-building, feedback and revisions.

You can try to cram all of that stuff into a single prompt, but you’re unlikely to get the output that your client wants. Unless the client has very generic requirements – and no interest in accuracy.

Again, using an AI tool to produce a first draft may be helpful, but you will still need a human writer to add the subtlety and nuance that the client expects.

Clients need humans who can understand the context behind their requests and the objective behind their strategy.

Write in seconds. Edit in weeks.

Given the inaccuracies and errors that proliferate in AI-generated content, anything AI writes needs heavy vetting by humans.

At some point, the amount of time required to review, check and assess the AI content outweighs any time-saving it seemed to offer.

Brandolini’s Law, also known as the Bullshit Asymmetry Principle, states that:

“The amount of energy needed to refute bullshit is an order of magnitude greater than that needed to create it.”

Spouting lies takes seconds.

Gathering the evidence to refute those lies can take hours.

It is usually quicker to research with a search engine than to ask AI for evidence and then verify everything it tells you.

 

AI is useless for many kinds of content…

Because of the limitations outlined above, AI tools are a dangerous choice for many kinds of writing.

  • Ads. You need creativity and originality if you want to stand out. AI struggles to create genuinely original, insightful ideas that also incorporate the complexity of a brand’s objectives (not to mention its tone of voice, messaging hierarchy, product features, competitive landscape and so on).
  • Websites. It would take longer to write prompts detailed enough to capture the brief than to write the copy itself.
  • Collaborating with creatives. You can’t have AI in a room contributing intelligently to campaign ideas and strategies. It would be like inviting your drunk uncle to participate. You get occasional outbursts of wit, and then just unfiltered nonsense.
  • White papers. You can’t trust AI to write accurate information informed by real insights.
  • Thought leadership. You can’t lead with recycled opinions and fabricated “facts”. True leadership requires original thinking.

These are all areas where professional copywriters remain essential.

AI cannot compete with copywriters.

AI may replace us all. But not yet.

ChatGPT and rival tools are seriously impressive.

Every week they make great leaps forward in their accuracy and flexibility.

They should be embraced as partners to professionals.

But for now, we must assume that any word they write could be fiction.

And there’s no way of telling which words are fiction.

Any serious organisation that wants to protect its reputation, respect its readers and safeguard against legal action should use ChatGPT sparingly, within strict limits and with robust guardrails.

 

What do you think?

Are you using AI tools in your copywriting work?

If so, how are you using them?

 


Comments

19th September 2023

Thora Fitzpatrick

This seems like a very negative take on AI. I uploaded a post on AI immediately before this one was published and it’s still pending – why not put mine up for a balanced view?

Also it now seems like you don’t think AI is all that bad after all – you’re now running a workshop on it??

19th September 2023

Leif Kendall

My own take on AI is just that: my own take based on my own research.

ProCopywriters takes a more open position, because we’re an organisation that represents hundreds of copywriters, many of whom use AI for a range of purposes.

Tim Tucker is hosting the second of our workshops on using AI tools. Again this is completely separate from my own doubts and concerns about AI tools.

Your article is still pending because mine happened to be scheduled before yours was submitted, and because we’ve had a number of AI-related articles and didn’t want to overload the blog with similarly-themed content all at once. That doesn’t mean we won’t publish it. Debate is healthy and interesting and always welcome. 🙂

