Verification and fact-checking are themes Shel Holtz and I keep returning to on our For Immediate Release podcast. Every few episodes now, in one form or another, we find ourselves back at the same uncomfortable truth: AI is moving faster than our professional discipline is adapting to it.
Episode 491, published earlier this week, brought that home rather starkly.
We talked about how Deloitte submitted major government reports in two different countries – Canada and Australia – both containing AI-generated citations to research that never existed. Expensive, high-stakes advisory work, undermined by fabricated sources. At almost the same time, a CDC-linked study in the United States was found to include hallucinated references that actually contradicted the real scientists’ findings.
Different organisations. Different jurisdictions. But exactly the same failure.
And once again, it was tempting for some to frame this as a “technology problem”. It isn’t. This is a professional judgement problem.
Generative AI does not “know” anything. It does not retrieve facts in the way a researcher does. It predicts what comes next in a sentence based on probability. That makes it extremely fluent – and sometimes extremely wrong. It can invent academic papers. It can attribute quotes to people who never said them. It can generate URLs that look authentic but go nowhere. And it does all of this with complete confidence.
When that output flows straight into reports, policy papers, press releases, or research summaries without being checked, the damage does not attach itself to the AI tool. It attaches itself to the people and organisations who put their name on it.
We often hear the phrase "human in the loop" offered as reassurance. In the cases we discussed, humans were very much in the loop. The problem is that they appear to have been in the loop as editors, not verifiers. They polished language. They reviewed structure. But they did not check whether the sources were real.
That distinction now matters more than ever. Verifying means confirming that cited papers exist, that quoted people actually said what is attributed to them, and that links lead somewhere real before the work goes out under your name.
Yes, that takes time. But so does repairing trust once it has been damaged. So does issuing corrections. So does facing public scrutiny when errors surface in high-profile work. As Shel put it in the podcast episode, the cost of proper verification is still far lower than the cost of getting it wrong.
This is why the topic keeps resurfacing in our recent episodes. The incidents are becoming more frequent. The stakes are rising. And yet, in many organisations, verification is still treated as an optional extra rather than a core capability.
It should now be neither optional nor negotiable.
You can use AI.
You can benefit from AI.
But you cannot delegate truth to AI.
Being “human in the loop” only works if the human is a fact-checker, not just an editor.
That is no longer just a best practice. It is the baseline.