Recently, Brian Solis argued that the most significant impact of AI in business is not technical – it is behavioural and cultural, particularly at executive level.
His central concern is that AI is shifting from a decision-support tool to a decision-influencing – and sometimes decision-replacing – force. He points to research suggesting that many leaders now rely heavily on AI in decision-making, in some cases trusting it more than colleagues.
That became the focus of episode 512 of the For Immediate Release podcast, published earlier this week, where Shel and I explored what this might mean for leadership, accountability, and communication.
It’s a compelling thought. Taken at face value, it points to a real shift. But I’m not sure it’s quite that simple.
There’s a difference between using AI to explore options or clarify thinking, and using it to drive decisions that carry real consequences. Much of the current narrative leans toward the latter, often without drawing that distinction.
It’s easy to jump from those statistics to a worst-case scenario: leaders outsourcing decisions, organisations run by algorithms, human judgement sidelined. I’m not convinced we’re there.
In our conversation, I found myself pushing back – not against the direction of travel, but against the framing. Much of what’s being presented feels like a worst-case scenario framed as if it were already the norm.
That matters, because not all decisions are equal. There’s a world of difference between using AI to summarise a complex document, explore options, or stress-test an idea, and using it to make a strategic decision that affects the future of an organisation.
Still, one question keeps surfacing.
If a leader changes a decision because an AI suggests something different, who owns that decision? That’s the point where this becomes less about technology and more about leadership.
This is where communicators have a particularly important role to play. Not in managing the tools, but in helping organisations explain how decisions are made – and ensuring human judgement remains visible and accountable.
Internally, that means:
- helping leaders articulate how AI is used in decision-making
- reinforcing that human accountability remains non-negotiable
- ensuring that context, ethics, and judgement are part of the conversation
Externally, it means:
- explaining how AI is used responsibly
- making decision-making processes more transparent
- protecting trust and reputation before questions arise
In short, helping organisations answer not just what decisions are made, but how they are made.
Perhaps the most useful way to think about this is not as a binary choice between human and machine.
Instead, it’s a question of boundaries. Where should AI inform decisions? And where must humans remain unmistakably in charge? That feels like the real work now. Not resisting AI. Not blindly embracing it. But learning how to use it without losing something essential in the process.
Where that line sits in practice is likely to become one of the defining leadership questions of the AI era.
Sources:
- FIR 512: The AI Shift in Executive Decision-Making (For Immediate Release, 4 May 2026)
- AI Is Changing More Than Work, It’s Rewiring Executive Decision-Making (Brian Solis, 24 April 2026)
Image at top by Getty Images for Unsplash+.