Lately, I’ve noticed how often conversations about content seem to start in the wrong place. Instead of asking whether something is accurate, engaging, or useful, the first question increasingly seems to be: was this written by AI?
I don’t recognise that instinct in my own reading. When I come across something online, my reaction is much more basic: do I enjoy it, do I find it useful, do I think it’s well written? The means of production don’t usually enter my mind.
Yet spend time on LinkedIn, and you’ll see how normalised AI-spotting has become. Em dashes, neat structure, familiar phrases – all are treated as evidence, as if identifying supposed “AI tells” is now a meaningful form of critique.
I suppose the anxiety behind this is understandable. Generative AI has unsettled long-held assumptions about authorship, originality, and professional value. But the more I think about it, the more I wonder whether this fixation on detection is taking us in the wrong direction.
Identifying how something was produced is not the same thing as deciding whether it’s any good.
Confusing process with quality
“AI-generated” describes a process. “Good” or “bad” describes the outcome.
Too often, we collapse those two ideas into one judgement. Content is treated with suspicion, not because it’s inaccurate, misleading, or ineffective, but because someone thinks they recognise the fingerprints of a machine.
That’s a category error. And it quietly shifts attention away from professional judgement towards surface-level reassurance.
This is why Allison Carter’s argument, published last April, still resonates – a point she repeated on LinkedIn last week. Her provocation was not a defence of AI, but a reframing of the problem: stop worrying about whether content is AI-generated, she argued, and focus instead on whether it does what it needs to do.
A professional standard hiding in plain sight
One of the most valuable contributions in Allison’s piece is her set of five criteria for evaluating content:
- Is it accurate?
- Is it useful or interesting?
- Is it ethical?
- Does it show some originality?
- Does it achieve its intended goal?
Taken together, these aren’t a checklist so much as a statement of professional responsibility. They give us a far more meaningful way to assess content than trying to reverse-engineer how it was made.
Accuracy here is not just about factual correctness but also about context and consequences. Usefulness is not generic engagement; it is relevance to a specific audience. Ethics goes beyond compliance to include transparency, honesty, and respect for human experience. Originality doesn’t require novelty for its own sake, but perspective and judgement. Purpose asks whether content exists for a reason or simply because something needed to be published.
Crucially, none of these questions depend on the tools used to create the content. They depend on intent, judgement, and accountability.
When disclosing AI assistance matters
At this point, it’s worth addressing a question that often sits just beneath the surface of these debates: when does AI assistance – and its disclosure – actually matter?
AI involvement matters when it becomes a vehicle for deception. When it’s used to mislead, to obscure authorship, to simulate expertise that isn’t there, or to gain advantage by pulling the wool over the reader’s eyes. In those cases, the problem isn’t the technology, but the intent behind its use.
That’s why framing this as a technology issue is so often unhelpful. Deception, misrepresentation, and lack of transparency are ethical failures regardless of whether AI is involved. Used responsibly and openly, AI is simply another tool in the process – no more inherently suspect than spellcheckers, grammar helpers, templates, or search engines.
This is also why the current outrage can feel strangely selective. As Kat Perdikomati observed in a recent LinkedIn post, we’ve long accepted ghostwriters, uncredited researchers, and invisible teams shaping published work. What AI changes isn’t the principle of assistance, but its visibility.
That visibility unsettles existing assumptions about voice, status, and ownership. But discomfort alone isn’t an ethical test. As with any form of assistance, what matters is intent, transparency, and accountability – not whether the help came from a person or a system.
Subjectivity is not the flaw – it’s the job
Some people will object that “good content” is subjective. And they’re right. Two experienced communicators can read the same piece and reach different conclusions.
But that isn’t a weakness of the profession. It is the profession.
Communication has always relied on informed judgement – balancing clarity with nuance, persuasion with trust, speed with care. That judgement is shaped by experience, ethics, and accountability, not by pattern recognition.
Attempts to eliminate subjectivity often replace it with proxies: rules, tells, and detection tools. These offer the comfort of certainty but carry very little responsibility. They tell us what something looks like, not what it does.
From anxiety to standards
For communicators and leaders, this distinction matters. AI has become a convenient stand-in for deeper uncertainty. Rather than asking whether teams have shared standards, we end up policing tools. Rather than trusting professional judgement, we look for mechanical signals.
The more constructive question is not how to detect AI, but how to define and uphold standards for quality, ethics, and purpose – standards that apply regardless of how content is produced.
Reclaiming the responsibility
The real risk right now isn’t that AI will flood the world with bad content. Bad content existed long before generative tools arrived.
The greater risk is that we allow anxiety about tools to distract us from the harder work of judgement – outsourcing responsibility to detection exercises instead of owning it as a professional obligation.
Good content has never been about the tool. It has always been about the choices behind it – what to include, what to leave out, what to prioritise, and what to stand behind.
If AI changes anything, it should be this: a renewed focus on judgement over suspicion, standards over shortcuts, and trust over tell-hunting. Not because the tools are neutral, but because responsibility still sits squarely with the humans who choose how to use them.