There was a time when it was usually possible to tell where something came from.
A news story came from a newsroom.
A campaign came from an agency.
An article had an identifiable author whose experience, judgement, and intent shaped the work.
That sense of provenance – knowing not only what was said but also who stood behind it – quietly underpinned trust in communication. We didn’t always agree with what we read or heard, but we generally knew how to interpret it.
In this context, provenance isn’t about art history or ownership. It’s closer to what technologists call data provenance – the ability to trace where something comes from, how it was produced, what shaped it along the way, and who is accountable for it.
In communication, provenance helps us interpret meaning and authenticity. It tells us whether we’re reading considered judgement, automated output, lived experience, or something in between.
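To make the idea concrete: in technical settings, data provenance is often captured as structured metadata attached to a piece of content, answering exactly the questions above. Here is a minimal sketch in Python (the record shape and field names are my own illustration, not any standard schema):

```python
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    """Illustrative metadata answering the core provenance questions."""
    source: str              # where the content came from
    produced_by: str         # who (or what) created it
    transformations: list    # what shaped it along the way
    accountable_party: str   # who stands behind it

record = ProvenanceRecord(
    source="newsroom interview notes",
    produced_by="staff reporter",
    transformations=["sub-edited", "fact-checked"],
    accountable_party="named author",
)

print(record.accountable_party)  # the "who stood behind it" signal
```

The point of a structure like this is not the code itself but what its fields represent: when any of them is blank or unknowable, provenance has faded in precisely the sense this post describes.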
That assumption is now weakening.
Listening again to For Immediate Release podcast episode 494, which Shel and I recorded and published a few days ago, I was struck not by any single controversy or headline but by how often different stories, from different parts of the communication landscape, pointed to the same underlying shift: provenance is fading.
Not disappearing entirely. But becoming harder to see, harder to infer, and easier to ignore.
When output replaces judgement
One of the episode’s early stories revolved around a familiar provocation: the claim that “PR is dead”. The latest proponent of this flawed worldview is Sir Martin Sorrell, founder of the WPP advertising and PR group, who argued the point in a BBC Radio 4 debate with Sarah Waddington, CEO of the PRCA and a credible voice for the PR profession. Strip away the drama, and what remains is a deeper tension about how communication is valued.
Is it about scale, speed, and output, as Sorrell insisted – or about judgement, meaning, and trust, as Waddington argued?
When communication is framed primarily as volume – content produced, impressions delivered, feeds filled – the role of human judgement becomes less visible. Not necessarily less important, but less legible. And when judgement is obscured, provenance weakens. The work starts to look interchangeable, regardless of who shaped it.
Systems that create and distribute
That same pattern appears even more clearly in advertising.
AI-driven platforms now integrate creation, testing, optimisation, and distribution into closed systems. Ads are generated, refined, and deployed automatically, often with minimal human intervention once the process is underway.
From an efficiency perspective, this is impressive. From a provenance perspective, it’s unsettling.
When creation and distribution collapse into a single system, it becomes harder to say where responsibility sits. Who decided this message was appropriate? Who checked its claims? Who is accountable when something subtly misleads or misfires?
The content still exists. The audience still interprets it. But the human hand behind it becomes faint.
Writing without fingerprints
Perhaps the clearest example of fading provenance comes from authorship itself.
AI-generated writing is now fluent enough that content alone is no longer a reliable signal of origin. The old instincts – spotting awkward phrasing, mechanical tone, or stylistic quirks – no longer hold.
As Shel and I discussed in FIR 494, what gives AI away is increasingly context, not content. A mismatch between voice and behaviour. A sudden change in output. Something that doesn’t quite add up.
In other words, we don’t detect machine authorship by reading more closely. We detect it by learning more about the writing’s origin.
And when that context is missing, provenance fades again.
Why this matters
At first glance, this might sound like a technical or academic concern. It isn’t.
Provenance matters because audiences still make assumptions about communication – assumptions about intent, expertise, accountability, and care. When those assumptions are no longer reliable, trust doesn’t automatically adjust. It erodes quietly.
The real risk is not that AI-generated content sounds artificial. It’s that it sounds normal. Polished. Confident. Familiar. While quietly introducing errors, overreach, or synthesis that no one explicitly signed off on.
That’s why communities like Wikipedia are less concerned with stylistic tells and more concerned with patterns, signals, and editorial judgement. Not because certainty is possible, but because provenance can no longer be taken for granted.
Responsibility moves upstream
None of this is an argument against using AI. Its benefits are real, and its adoption is already normal.
But as provenance fades, responsibility doesn’t. It moves upstream – from readers to writers, from audiences to institutions, from detection to disclosure.
Because when provenance fades, trust doesn’t disappear all at once. It thins. And rebuilding it requires something machines can’t automate: clarity about intent, openness about process, and judgement that is visible, not hidden.
That challenge sits at the heart of communication in the age of AI. Not as a technical one, but as a human one.
🎧 Take a listen to FIR 494 right here. Shel and I discuss six topics that I reference in this post.
Visit the show notes page on the FIR website for a transcript, and to view the video version.
Related reading:
- Is PR Dead or Just Evolving? 28 April 2025