Reputation in the age of AI: it's about people, not technology
Technology in hand, people in mind. In an AI-enabled world, the most important decisions are still human ones.

While taking the Global Alliance's 2026 Reimagining Tomorrow survey last week, as part of my ongoing interest in responsible AI in communication, I paused on one question long enough to realise it deserved more than just a survey answer.

The Global Alliance is the worldwide federation of PR and communication associations, and each year it publishes research on the state of the profession. Its current focus is AI – specifically, how PR and communication professionals are adopting it, governing it, and grappling with its implications.

This is the question in their survey that stopped me in my tracks:

What is the most important thing an organisation can do to protect and build its reputation in an AI-enabled environment?

It's a deceptively straightforward question. And the more I sat with it, the more I felt the standard answers – governance frameworks, transparency policies, responsible AI principles – while necessary, were missing something essential.

The accountability reflex

The most obvious answer, and not a wrong one, is this: establish clear, visible human accountability for AI decisions.

💡
In an AI-enabled environment, the greatest reputational risk isn't AI making mistakes. It's organisations appearing to hide behind AI to avoid responsibility. Audiences, regulators, and employees are increasingly attuned to that evasion – and they don't forgive it easily.

So accountability matters structurally. That means named, senior-level ownership of AI governance. Transparency when AI is involved in significant decisions or outputs. A credible process for when things go wrong – and the willingness to use it. It shouldn't be a policy document filed somewhere nobody can find, but a living commitment with actual humans attached to it.

The 2025 Reimagining Tomorrow research put a number on the problem: 91% of organisations had adopted AI, but only 39% had responsible AI frameworks in place. That governance gap isn't just a compliance issue. It's a reputation risk sitting in plain sight.

But accountability, on its own, still isn't quite the full answer.

The human-centred question

Reputation is built on relationships. Relationships are built on trust. And trust, ultimately, is something that happens between people.

That's why I keep coming back to human-centred AI as the more complete frame. Not AI that's merely governed, but AI that's genuinely oriented around people – their interests, their wellbeing, their experience of the organisation. The difference matters more than it might appear.

Many organisations approach responsible AI as a risk-management exercise. Governance frameworks, guardrails, policies. All of it is necessary – I'm not dismissing any of it.

But human-centred AI asks a different, harder question: whose interests does this actually serve? That's a values question, not a compliance one.

💡
It's where communicators have something distinctive to contribute that technologists and lawyers often don't – because we understand how trust is built and lost, how narratives form, and how people experience the gap between what organisations say and what they actually do.

What this looks like in practice

Putting people at the centre of AI strategy isn't a slogan. It shows up in specific choices:

  • being transparent about when and how AI influences decisions that affect people;
  • designing AI-assisted processes that preserve human dignity and judgment, rather than just optimising for speed or cost;
  • communicating about AI in ways that are honest about its limitations, not just promotional about its possibilities.

It also means communicators need to be in the room when AI strategy is being shaped – not brought in afterwards to explain decisions already made.

The profession has spent years making the case for a seat at the leadership table. AI governance is one of the clearest opportunities yet to demonstrate why that matters.

Reputation as a values signal

Here's what I think the question is really getting at: in an AI-enabled environment, reputation increasingly functions as a signal of organisational values. Not just competence, not just compliance – values.

  • Do you use AI in ways that respect people?
  • Do you take responsibility when it doesn't work as intended?
  • Do you prioritise human interests when they come into tension with operational efficiency?

Organisations that answer those questions credibly, consistently, and visibly will build reputations that hold. Those that treat AI governance as a box-ticking exercise will find that audiences – internal and external – notice the difference.

💡
The most important thing an organisation can do is keep people genuinely at the centre. Not as a positioning statement, but as a principle that shapes real decisions.

That's the answer I gave to the survey. I think it's also the work our profession needs to own.

If you haven't taken the survey yet, I'd encourage you to. It's a small investment of time for research that genuinely feeds back into the profession. And when you reach question 35 – the one that prompted all of this – I'd be curious to know how you answer it.

👉 Take the 2026 survey here – open until 1 May.


Neville Hobson

Somerset, England
Communicator, writer, blogger from the beginning, and podcaster shortly after that.