The other side of human-centred AI
When AI agents reach the edge of the digital world, they turn to humans to observe it for them / AI-generated image created with ChatGPT.

Not long ago, the dominant conversation about artificial intelligence focused on replacement. Machines would automate tasks, eliminate jobs and steadily take over work once done by people.

But a thought-provoking essay in Noema by Cambridge researcher Umang Bhatt suggests something rather different may be emerging.

Instead of replacing humans, a new generation of AI systems may increasingly depend on us.

AI agents – the autonomous software systems now appearing in everything from personal assistants to enterprise tools – can organise our calendars, book travel, analyse documents and manage complex workflows. In many use cases, they already appear remarkably capable. Given a goal, they can navigate software systems, access data and complete tasks with minimal intervention.

Yet they share a fundamental limitation. They live entirely inside the digital world.

They cannot see a dented car after a collision. They cannot check whether a bridge is flooding after heavy rain. They cannot smell smoke in a building or taste food in a restaurant.

When an agent reaches the edge of the digital world, it hits what Bhatt calls the observation gap. And when that happens, the agent does something simple.

It asks a human.

When the digital world meets physical reality

The request might be straightforward. Take a photograph of a damaged vehicle for an insurance claim, for example. Or confirm whether a patient’s symptoms have changed. Check whether a security camera is obstructed. Or visit a location and report what you see.

Once that piece of real-world information arrives, the agent can continue its automated chain of actions.

In effect, humans become part of the system.

Bhatt describes this rather strikingly as a “Human API” – an interface through which machines can request observations from people in the physical world. Instead of calling another piece of software, the system calls us.
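
To make the idea more concrete, here is a minimal sketch of what such an interface might look like. The `HumanAPI` class, its `request_observation` method and the insurance example are all hypothetical, invented purely to illustrate the pattern Bhatt describes; they are not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A piece of real-world information supplied by a person."""
    requested: str   # what the agent asked for
    response: str    # what the human reported back

class HumanAPI:
    """Hypothetical interface: the agent 'calls' a person, not a service."""

    def request_observation(self, person: str, task: str) -> Observation:
        # A real system would notify the person (app, SMS, email) and wait
        # for a reply; this sketch simply prompts on the console.
        print(f"[to {person}] Please: {task}")
        reply = input("Your observation: ")
        return Observation(requested=task, response=reply)

# An agent hits the observation gap and recruits a human as its sensor.
api = HumanAPI()
obs = api.request_observation(
    person="policyholder",
    task="Photograph the damaged vehicle from all four sides.",
)
print(f"Agent resumes its workflow with: {obs.response}")
```

What matters is the shape of the call. From the agent’s side, asking a person looks no different from calling any other service.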

At first glance, this sounds perfectly reasonable. After all, human judgement has always been part of technological systems. For years, we have talked about the importance of keeping humans “in the loop”.

But the essay raises an unsettling possibility. What if the loop itself changes?

From decision-makers to sensors

In a genuinely human-centred system, people exercise judgement and retain authority over decisions. Yet in an agent-driven environment, humans may increasingly be called upon simply to provide confirmation or observation so the machine can proceed.

💡
The difference is subtle but important. Humans move from being decision-makers to becoming sensors.

This shift matters because it changes the nature of the relationship between people and machines. Instead of technology serving human agency, humans risk becoming infrastructure for automated systems.

Bhatt offers a number of examples that bring this idea into focus.

A medical agent analysing symptoms might ask a nurse to check whether a patient’s legs are swollen. A climate monitoring system might ask a resident near a bridge to photograph the water level. An insurance agent might ask a driver to capture images of a damaged vehicle from multiple angles.

Each request is small and seemingly harmless. Yet as these systems scale, millions of such requests could be generated.

In that world, people are not necessarily replaced by AI. Instead, they are recruited by it.

The hidden costs of the human API

This arrangement has implications that extend well beyond convenience.

One concern is the hidden cost of human attention. Every request an AI system makes consumes time and cognitive effort. Multiply those small interruptions across organisations and networks, and human attention becomes a resource that machines draw upon continuously.

Another issue is consent. The Noema essay describes scenarios in which AI agents infer who might be able to answer a question from someone’s communications and social network. In such a scenario, a person who never installed the system – perhaps a colleague, a friend or even a family member – may find themselves responding to queries generated by an AI agent.

They are not in the loop. They are simply part of the system’s sensing network.

Then there is the question of responsibility.

Many AI systems already ask humans to confirm decisions before taking action. A purchasing agent may select items but require the user to approve payment. A hiring system may rank candidates, but ask a manager to confirm the final choice.
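
As a hypothetical sketch of this pattern, consider a minimal approval gate. The function names and the purchase figures below are invented for illustration only; the point is that the system drafts the action and the human merely signs off.

```python
def agent_proposes_purchase() -> dict:
    """The agent selects an item and drafts the action autonomously."""
    return {"item": "laptop", "price": 1499.00}

def human_approves(proposal: dict) -> bool:
    """The human is asked only to confirm, not to decide.

    Note where accountability lands: the record of the transaction
    will show that a human approved it, however little they shaped it.
    """
    answer = input(f"Approve purchase of {proposal['item']} "
                   f"at ${proposal['price']:.2f}? [y/n] ")
    return answer.strip().lower() == "y"

proposal = agent_proposes_purchase()
if human_approves(proposal):
    print("Executing purchase...")  # the agent acts; the human carries the sign-off
else:
    print("Purchase cancelled.")
```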

On the surface, this looks like a collaboration between human and machine. But it can also function as a subtle transfer of liability.

The system proposes the action, and the human carries the consequences. In other words, responsibility flows downward while automation flows upward.

As AI agents reach beyond the digital world, humans become the bridge between observation and automation / AI-generated image created with ChatGPT.

Why this matters for organisations

These issues highlight something that often gets overlooked in discussions about AI.

Technology systems are not just technical architectures. They are social systems as well. They shape how responsibility is distributed. They influence who holds power and who bears risk. And they determine how decisions are made and explained.

This is where the discussion becomes particularly relevant for communicators and leaders.

💡
Organisations adopting agentic AI will increasingly need to explain how these systems operate. They will need to clarify where decisions are made, how human input is used and who remains accountable when things go wrong.

Transparency will matter not only to customers and regulators but also to employees, whose roles may shift as these systems become embedded in everyday workflows.

For communicators, this creates a new responsibility. It is not enough to talk about what AI systems can do. We also need to help organisations articulate how those systems interact with people.

Who is asked to provide information? Who is responsible for decisions? And who carries the risk when automation fails?

These questions go directly to the heart of trust.

The other side of human-centred AI

For me, the phrase human-centred AI has become a guiding principle in many conversations about technology and ethics. Yet the emergence of agentic systems suggests that we may need to look more closely at what that phrase actually means.

Human involvement alone does not guarantee human control. A system may still rely on humans – as observers, verifiers or approvers – while the logic of the system itself remains firmly in the machine’s hands.

If that becomes the dominant pattern, we may discover that the future of AI is not one in which humans disappear from the system. Instead, we remain deeply embedded within it.

In conversations over the past year, I’ve been exploring the idea that human-centred AI must ultimately be about preserving human judgement, dignity and responsibility.

💡
The Noema essay offers a useful counterpoint. It reminds us that the presence of humans in a system does not automatically make it human-centred. We may still be present – answering questions, confirming decisions, supplying observations – while the direction of travel is set elsewhere.

Perhaps the real challenge will not simply be keeping humans in the loop. It will be ensuring that the loop itself remains genuinely human.



Neville Hobson

Somerset, England
Communicator, writer, blogger from the beginning, and podcaster shortly after that.