By Simon Burch
One of the most embarrassing early experiences in my journalism career was having to ring an interviewee back six times to ask for some extra facts demanded by newsdesk.
Facts – like how tall? How many? What are their ages? How long? – are very important, I was told, because they help to bottom out a story and reinforce the media’s reputation as the most authoritative voice in town.
Facts are the backbone of every story because they make things sound believable.

It was a hard-learned lesson but it’s something that has stayed with me throughout the (very many) years since.
And the need to write authoritative stories is having a renaissance because stories are now emerging as one of the best ways to get your business known on the internet.
This is thanks to people’s increasing use of AI large language model platforms, like ChatGPT and Gemini, and innovations like Google’s AI Overviews, to find out stuff online.
Going, going, gone are the days when people were happy to enter a question in a search engine and then take their pick from a list of websites. Now, the machine will do all of the research for them, giving them what it considers to be the perfect answer from the many millions of sources available.
It does this by looking for stories, and research from the online media database Muck Rack shows that 95% of the information these platforms use to build their answers comes from coverage such as articles, news stories, interviews and blogs.
That’s because this is the content most likely to have been written by a human being rather than AI, and ChatGPT treats human-generated content as more reliable, authentic and true than material generated by itself or its AI-driven cousins.
In the spirit of discovery, I asked ChatGPT what qualities human writing had that AI writing lacked.
What a great question, it replied, as it always does, explaining that human-created stuff has real quotes, real people’s names, unique turns of phrase and, of course, facts.
AI writing, by contrast, is derivative and generic, using stock phrases and shallow observations that could apply to anyone and anything.
This means it comes across as being low on insight, originality, topicality and expertise – all the ingredients which AI platforms know human beings appreciate when they are looking for the answers to their questions.
Hence, the best person to inform a human being is, according to AI, another human being who writes like a human being.
And that’s why AI likes content that has been written by a human, especially a journalist on a respected news website.
As someone who runs a PR firm, this is music to my ears, and it’s heartening to see that AI understands that facts are, as I was told, an important ingredient in stories.
However, ChatGPT had a warning: facts, while important, sit further down its list of priorities, because not all facts are equal. Many of the facts it comes across online simply aren’t true.
And how does it know? Because often it can’t corroborate them by cross-checking with other source material.
And because, it breezily admitted, it often makes facts up itself. So it can’t even trust itself.
Claude.ai gave me the same insight.
When assessing the quality of a story, it places great value on checkable facts, specific attribution and awkward real details – “things that don’t fit perfectly but are included because they’re true” – and evidence of actual reporting.
For these reasons, it also rejects AI-generated stories when researching answers, preferring to seek out human-generated copy.
It then summed up its reasoning thus: “AI optimises for plausibility. Humans optimise for truth.”
It’s become well known that ChatGPT and its ilk not only get things wrong, they also make things up. This isn’t because they’re dishonest; it’s simply how they’re built.
A platform is a people pleaser, designed to give users what they want, so when someone asks a question, instead of admitting that it doesn’t know, it pretends that it does by saying something that sounds authoritative.
Or, as Claude.ai explains: it’s “trying to produce what sounds right based on patterns in its training data”. Hence, it says stuff that it thinks will sound plausible.
While the media is, to an extent, trained to be the same – as a business, it needs to keep its audiences on side, so it does pick and choose its subject matter – journalists are constrained by reality.
Yes, there are some rogue players, but generally, humans are conscious of their reputation, and part of having a good reputation in journalism is being known to speak the truth – especially because their name is published alongside their words.
AI doesn’t have a reputation to protect, because it isn’t a human being. It’s a prediction machine, and any wisdom, knowledge, insight or authority that we ascribe to it is in our own heads. Its authority is a mirage.
That doesn’t mean everyone believes journalists, of course, because truth is in the eye of the beholder. And, since journalists are human, they can get things wrong. But that is a world away from being dishonest and simply making stuff up.
So the lesson is that as more and more people use AI-driven search to seek their answers, companies need to invest in publishing content that tells their stories.
And not just any stories. Human stories, written by human beings, that are not only plausible, but are also true, with unique quotes and unusual details – and verifiable facts that belong in the real world, and haven’t just been made up by AI.
Because if they don’t, then even AI won’t believe them. You simply can’t kid a kidder.