In August, I wrote about what it means to stay human as artificial intelligence becomes ever more capable. My starting point was a guiding principle from Mustafa Suleyman, AI chief at Microsoft: AI should make us more human, deepening trust, understanding, and our connection to the real world.
This is also the clear sentiment expressed by the Vatican in its concept of the wisdom of the heart – "which reminds us that each person is a unique being of infinite value and that the future must be shaped with and for people" – a core theme in an FIR interview with Msgr Paul Tighe in July.
All this has stayed with me. And it came back into sharp focus this weekend while reading a Bloomberg Weekend interview with Suleyman, conducted by Mishal Husain. The conversation adds texture and urgency to those earlier reflections, not by softening the reality of what’s coming, but by making the choices in front of us clearer.
AI is already superhuman – and that changes the question
Suleyman is blunt: AI is already superhuman. He is not talking about consciousness or general intelligence, but about systems that outperform humans in defined tasks – from medical diagnostics to pattern recognition at scale.
This matters because it shifts the debate. The question is no longer whether machines will surpass human capability in certain areas. That threshold has already been crossed, Suleyman says. The more important question now is how that capability is governed, distributed, and used – and whether it ultimately strengthens or diminishes human agency.
From a human perspective, this is unsettling but also clarifying. If superiority in narrow tasks is inevitable, then our humanity does not lie in competing with machines. It lies in setting direction, values, and boundaries.
Humanist superintelligence is a deliberate choice
What distinguishes Suleyman’s thinking is his insistence that superintelligence must be constrained by design. He calls this "humanist superintelligence" – systems that remain aligned with human values and interests and are subject to clear red lines on containment, autonomy, and safety.
This is not presented as an abstract moral stance, but as a practical one. Suleyman argues that systems capable of setting their own goals, rewriting their own code, or acting autonomously must not be released without robust assurance, transparency, and oversight.
The organisational test – efficiency or flourishing?
This is where a second thread becomes important.
In October, I wrote about Chris Heuer’s challenge to organisations confronting AI adoption: what do they owe humanity in an AI-driven world? Do they pursue efficiency at all costs, or do they choose long-term flourishing? Do they replace people wherever possible, or enable them to thrive alongside machines and algorithms?
Reading Suleyman’s interview alongside that challenge, the connection feels obvious. A humanist vision for AI means very little if organisations deploy it in ways that quietly erode dignity, purpose, or trust at work.
Technology does not make these choices on its own. Organisations do. And AI dramatically expands the range of choices available to leaders – including choices that prioritise speed, scale, and cost reduction over human wellbeing.
Taken together, these threads signal that ethical AI conversations are converging across technology, faith, governance, and organisational leadership.
If superintelligence is meant to be “on our team”, as Suleyman suggests, then organisations are where that claim is tested in practice.
Rebuilding what we have lost
One of the more unexpected moments in the Bloomberg interview comes when Suleyman talks about AI-assisted journalism. He imagines systems that could help revive local reporting by verifying eyewitness material, conducting interviews, and stitching together reliable accounts at a scale human newsrooms can no longer sustain.
This idea stood out because it reframes AI not just as a force of disruption, but as a possible tool for restoration – rebuilding capacities that society has allowed to weaken under economic and structural pressure.
Where this leaves us
The Bloomberg interview does not offer easy reassurance. Suleyman speaks candidly about risk, power, and the concentration of decision-making among a small group of actors. He argues for regulation that is thoughtful and sustained, not reactive or ideological.
Placed alongside my earlier reflections, a consistent message emerges.
To stay human in the age of AI, we must be deliberate – about what we build, how we deploy it, and what we choose to optimise for. Intelligence without direction can be brilliant. Intelligence deployed without care can be damaging. But intelligence guided by human purpose has the potential to help us flourish – at work, in society, and in how we relate to one another.
That is not a technical challenge alone. It is also a significant leadership one.
Postscript: This post builds on the earlier reflections I mentioned in the narrative above: the FIR Interview in July on the wisdom of the heart, an August essay on what it means to stay human in the age of AI, and an October piece exploring Chris Heuer’s challenge to organisations to put human flourishing ahead of efficiency. Read together with this latest reflection, they form an ongoing thread about how we shape AI – and how it, in turn, shapes us.
Prime source:
- Microsoft’s Mustafa Suleyman: ‘AI Is Already Superhuman’ - Bloomberg Weekend interview by Mishal Husain, 12 December 2025.