Reflections from 2025: AI is Not One Thing – It Is a Set of Choices
A visual metaphor for choice – and the judgement required to make it. / Adobe Firefly


Looking back over the past year of writing about artificial intelligence, one idea stands out more clearly than any other.

It surfaced most strongly during a For Immediate Release interview in July with Monsignor Paul Tighe of the Vatican, in a conversation about AI, ethics, and what it means to lead responsibly in a time of rapid technological change. He spoke about "the wisdom of the heart" – a phrase that stayed with me long after the recording ended.

Not as a metaphor, but as a lens on staying human in the age of AI.

It offered a way to think about AI that cuts through both hype and fear. One that asks not just what systems can do, but how decisions are made, values are held, and responsibility is exercised – by people. Since that conversation, the idea has quietly shaped much of what I’ve written in the second half of this year.

What follows is a reflection on that writing – not a summary of posts or a set of predictions, but a look at the themes that have consistently informed my thinking as AI has moved from abstraction to everyday reality.

Not conclusions. Not predictions. Just signals worth paying attention to.

Human-centred AI isn’t a slogan – it’s a choice


Again and again, I found myself returning to the same question: what does it actually mean to stay human in the age of AI?

In What Does It Mean to Stay Human in the Age of AI? and Staying Human in the Age of AI: What Comes Next, I explored how easily language about “capability” and “efficiency” can crowd out more fundamental concerns – agency, dignity, trust.

AI’s power is real. That much is no longer in doubt. The harder question is how we choose to use it.

That theme came into even sharper focus in Speaking for Humanity: The Wisdom of the Heart in the Age of AI, drawing on the Vatican’s framing of AI ethics. What struck me wasn’t theology, but clarity – a reminder that conscience, responsibility, and moral judgement don’t disappear just because systems become more capable.

Human-centred AI isn’t about nostalgia. It’s about intent.

Communication is at an inflection point


If there’s one area where AI’s impact has become impossible to ignore, it’s professional communication.

In With AI, Corporate Communication Is at an Inflection Point, I looked at evidence showing how widespread AI-assisted writing has already become – in press releases, corporate statements, and even communication from international institutions.

That alone isn’t the problem. The real tension lies elsewhere: authenticity, verification, and voice.

Posts such as If AI Touched It, You Must Verify It argue that verification is no longer an editorial ideal – it is baseline practice. Meanwhile, pieces like When AI Lets Go of the em dash and The Em Dash, AI and the Search for Authentic Voice used something as small as punctuation to surface a much larger issue: when machines write with us, how do we retain authorship, identity, and trust?

AI can help communicators work faster. Speed without judgement, however, is not professionalism.

Risk, ethics, and societal impact are no longer abstract


Some of the most sobering moments this year came from writing about where AI crosses from theory into consequence.

The Day Cyberattacks Became Autonomous marked a shift from human-directed attacks to AI-driven ones – not speculative, but real. It was a reminder that autonomy cuts both ways.

Elsewhere, Wikipedia Is Holding Its Own – but AI Poses Major Challenges explored how human-curated knowledge systems are coming under pressure from low-quality, machine-generated content. Quantity is easy. Trust is not.

And in The Rise of Culturally Grounded AI, I examined how global, one-size-fits-all models risk flattening cultural contexts and local values – raising questions not only about bias but also about sovereignty.

Ethics, in other words, is no longer a future conversation. It is an operational one.

Trends and Collaboration

Not everything I wrote this year was cautionary.

Some of the most interesting moments came from experimenting with how people and AI collaborate in practice.

Why ‘Vibe Coding’ Matters More Than You Think explored how collaboration itself is shifting – from writing code to shaping creative work – with AI acting less like a tool and more like a partner.

And in When My AI Voice Told the Story Better Than I Could, I examined what happens when AI doesn’t just assist with text, but with voice and storytelling. The result wasn’t replacement, but surprise – and a glimpse of new formats still taking shape.

The pattern here isn’t automation. It’s adaptation.


What ties all of this together

Looking back across these posts, I don’t see a manifesto. I see a thread.

AI is not one thing. It is a set of choices – technical, organisational, ethical, and personal.

💡
The challenge for communicators, leaders, and organisations isn’t to keep up. It is to decide what matters, and then act accordingly.

That is what this year of writing has been about for me. Not answers, but orientation.

And if there is a single idea that runs through everything, it is this: staying human in an AI world isn’t something that happens automatically. It is something we have to choose – again and again.

Looking ahead

As 2025 draws to a close, I find myself less interested in predicting what AI will do next and more focused on what we choose to do, both proactively and in response.

The questions that matter most now feel strikingly consistent:

  • How do we lead responsibly in environments shaped by AI?
  • How do we maintain trust, judgement, and accountability in our communication?
  • How do we treat ethics as a practice, not a principle statement?
  • How do we support one another as roles, expectations, and skills continue to evolve?

These questions sit behind much of my interest and work heading into 2026.

They are also why Silvia Cambié and I have spent much of the past few months developing a new Shared Interest Group (SIG) within the International Association of Business Communicators (IABC), focused on AI leadership and communication.

The IABC Board gave the initiative the green light in early December, with an announcement expected in the end-of-year email to members. We plan to have the "AI Leadership and Communication SIG" fully operational early in the first quarter of 2026.

This is a professional home for communicators who want to move from AI uncertainty to credible, human-centred AI leadership. If you're an IABC member, you can join now.

This AI SIG is one expression of a broader intent: to create spaces where communicators can explore these leadership issues together – openly, critically, and with practical judgement – rather than treating AI as a purely technical or tactical challenge.

For me, the emphasis going forward is less about tools and tips, and more about orientation. Less about acceleration and more about discernment.

💡
If the past year has reinforced anything, it is that the wisdom of the heart is not a constraint on progress. It is what makes progress worth pursuing.


More to come in the New Year.

Neville Hobson

Somerset, England
Communicator, writer, blogger from the beginning, and podcaster shortly after that.