I found myself pausing over two articles this weekend. One was in the Financial Times, thoughtful and measured. The other, from Futurism, was considerably more blunt, even alarmist in places.
Yet despite the very different tones, both were circling the same unsettling question: what happens when a generation grows up using AI not simply as a tool, but as an extension of how they think?
The FT piece focused on finance firms beginning to grapple with the reality of “AI-native” graduates entering the workplace. One New York executive described his latest intake of interns as “the first true AI natives” he had encountered. They arrived fluent in the tools, quick with answers, and outwardly impressive. But when senior people pushed deeper into their reasoning, something felt missing.
The executive's conclusion was striking:
“We want critical thinking, not just AI.”
That line stayed with me long after I finished reading the article, partly because it feels like the beginning of a much bigger conversation.
More than a skills gap
For the past couple of years, organisations everywhere have been racing towards AI adoption. Leaders want AI strategies. Teams are being encouraged – and sometimes pressured – to integrate AI into workflows. Entire industries are trying to reshape themselves around AI, and understandably so.
The technology is remarkable. Used well, it can save time, surface ideas, accelerate research, help with coding, simplify routine tasks, and remove friction from all sorts of work.
I use it myself every day, and I would struggle now to imagine certain parts of my workflow without it. But lately I’ve started wondering whether we are entering a different phase of the AI story. Not the excitement phase, where everything feels new and full of possibility, but the phase where the consequences – intended and unintended – begin to reveal themselves more clearly.
The Futurism article pushes this concern hard, arguing that universities may be producing graduates who have become so dependent on AI tools that some are struggling with basic literacy, reasoning, and discussion skills. The language is dramatic, perhaps deliberately so, and I think we should be cautious about sweeping generational claims. Every era tends to worry that younger people are somehow losing abilities older generations possessed.
Still, I think many of us have already seen smaller versions of this ourselves – AI-generated text that sounds polished but says very little, confident summaries that quietly flatten nuance, or presentations that look impressive until you start asking difficult questions.
And perhaps most importantly, there is a growing temptation to accept convenience in place of reflection.
When convenience becomes dependency
That last point feels significant to me. Generative AI is extraordinarily good at removing friction, which is one of the reasons it has spread so quickly. Faced with a blank page, a difficult problem, or a pile of research, it can provide an instant starting point. Sometimes that is genuinely helpful. But sometimes I wonder whether constant assistance subtly changes our relationship with thinking itself. Not dramatically or overnight, but gradually.
Why wrestle with ambiguity when an AI can produce a neat summary in seconds? Why sit with uncertainty when a chatbot offers immediate answers? Why spend time forming your own view when a plausible one arrives instantly on screen?
Those questions matter because so much professional work – especially communication and leadership work – depends on judgement rather than speed. And judgement develops slowly, usually through experience, mistakes, curiosity, conversation, scepticism, and the occasionally uncomfortable process of thinking things through properly.
The things AI still cannot do
That may be why the FT article resonated with me more than the louder Futurism piece. Beneath the finance examples, it was really talking about something fundamentally human. Not whether AI is good or bad, but what organisations will value as AI becomes normal.
For all the rhetoric about automation and efficiency, I’m not convinced the most valuable people in the next few years will necessarily be those who use AI most aggressively. I suspect they will be the people who know when to slow down, who still ask difficult questions, who recognise weak reasoning hiding beneath polished outputs, and who remain curious enough to look beyond the first answer.
Perhaps they will also be the people confident enough to say, “I’m not sure that’s right.”
That feels increasingly important in a world where technology can generate certainty far more easily than wisdom.
AI fluency will matter. Of course it will.
But I’m increasingly convinced that the qualities becoming most valuable are not speed or automation or even technical mastery. They are judgement, curiosity, perspective, scepticism, empathy, and the deeply human ability to think critically about the world around us.
Which may be why AI-native does not always mean AI-ready.
Sources:
- Humans still matter more than AI in finance (Financial Times, 8 May 2026)
- Bosses Horrified as “AI Native” College Graduates Hit the Workplace (Futurism, 9 May 2026)