The shift we should be measuring

Written by Carolyn Shepherd, Founder, Emmeline.AI

AI leadership is shifting. It is no longer defined by access to technology. It is defined by something far harder to build and even harder to measure: the human ability to adapt at the same speed as the systems we are deploying.

As one of this year’s AI 100 UK Leaders, I had the privilege of standing alongside people building extraordinary technologies. What became clear is this: the greatest barrier to adoption now is not capability on the machine side. It is capability on the human side.

And unlike previous technological revolutions, we do not have 20, 50 or 100 years for society to adjust. We have months, sometimes weeks.

“We have been through major technological revolutions before, so what will we do differently this time to ensure progress isn’t delayed?”


The human readiness gap is AI’s defining challenge

Across sectors, the pattern is consistent.

  • AI systems are operational.
  • Integrations are accelerating.
  • Use cases are clear.

Yet human adoption lags.

Recent IBM research shows that 66 per cent of UK enterprises are already experiencing significant AI-driven productivity improvements, yet many still say they have not tapped AI’s full potential, highlighting the need for workforce transformation and AI skills. This readiness gap slows progress in:

  • public sector reform
  • AI agent deployment
  • productivity and innovation
  • data-informed decision making
  • digital inclusion

In short, the bottleneck is no longer the technology. It’s us.


Why readiness matters more than ever

Traditional approaches to learning and development typically measure confidence, participation, and satisfaction. These are useful engagement signals, but they do not tell us how someone will think or act when the context shifts or the stakes are high. They give leaders no real visibility of who can be trusted to work with AI in a way that protects outcomes, customers and reputation.

As AI moves inside workflows, this distinction becomes critical.

We need the ability to distinguish between:

  • knowing about something, and
  • being able to reason with it.

Between:

  • recalling information, and
  • adapting insight to new situations.

This is the cognitive shift required for responsible and effective AI use across government, industry, and public services.


A question that changed my direction

Over the past year, I began exploring a simple idea:

If readiness is the barrier, what does readiness look like in the mind?

  • Not as reported in surveys.
  • Not as perceived by managers.
  • But as a real-time cognitive shift: the moment someone’s understanding reorganises, their reasoning deepens and their judgment becomes more adaptive.

If we could observe that moment reliably, we could better understand:

  • who is ready for AI-enabled work
  • what types of learning genuinely build capability
  • where risk or support needs sit inside a system
  • how fast organisations and sectors can move safely

These questions sit at the heart of digital transformation, digital inclusion and responsible AI governance.


Using AI to understand human thinking

This led to my own work on whether AI itself could help us detect these cognitive transitions.

Generative AI has a distinctive property. It can hold a structured, topic-agnostic conversation. When paired with the right methodology, these conversations can reveal patterns in how (and whether) people:

  • connect ideas
  • test assumptions
  • generalise insights
  • apply principles to new contexts

This work evolved into something I later named Schema Shift Analytics, a patent-pending, AI-driven approach that uses generative AI conversation as the medium for detecting signals of cognitive readiness at scale. In simple terms, the conversation generates the data, and the method focuses on how patterns in those responses change as understanding deepens.
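
To make the idea concrete, here is a purely illustrative sketch in Python of the conversational scaffolding this kind of approach relies on: a topic-agnostic sequence of probes, one per signal listed above, with the answers collected as data. The probe wording, the `Probe` structure and the `respond` stand-in are all hypothetical, and the sketch shows only how a structured conversation could generate the data, not how Schema Shift Analytics itself analyses the way patterns change as understanding deepens.

```python
# Purely illustrative sketch, not the Schema Shift Analytics method itself.
# It shows how a structured, topic-agnostic conversation could map probes to
# the four signals named above and collect the answers as data for analysis.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Probe:
    signal: str    # the kind of thinking the probe is meant to surface
    question: str  # topic-agnostic prompt template

# One hypothetical probe per signal; the wording is illustrative only.
PROBES: List[Probe] = [
    Probe("connects ideas",    "How does {topic} relate to something you already do well?"),
    Probe("tests assumptions", "What would have to be true for your view of {topic} to be wrong?"),
    Probe("generalises",       "What general principle would you take away from {topic}?"),
    Probe("transfers",         "How would you apply that principle in a completely different setting?"),
]

def run_conversation(topic: str, respond: Callable[[str], str]) -> Dict[str, str]:
    """Ask each probe in turn; `respond` stands in for whichever generative
    model or human participant supplies the answers."""
    transcript: Dict[str, str] = {}
    for probe in PROBES:
        transcript[probe.signal] = respond(probe.question.format(topic=topic))
    return transcript

if __name__ == "__main__":
    # Canned answers so the sketch runs without any external model or API.
    canned = iter([
        "It is similar to the checklists we already use when onboarding new staff.",
        "If error rates rose after we automated triage, I would have to rethink it.",
        "Automate the routine work and keep people focused on the exceptions.",
        "In finance I would apply the same split to reconciliations.",
    ])
    for signal, answer in run_conversation("AI-assisted triage", lambda _q: next(canned)).items():
        print(f"[{signal}] {answer}")
```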

The broader principle behind it matters. Ironically, artificial intelligence may be able to help us see aspects of human capability that were previously invisible.

This has potential implications for:

  • workforce and AI skills strategies
  • employability and labour market support
  • leadership development and talent decisions
  • public sector transformation
  • safety critical decision making
  • the wider field of data and human judgment


Why this matters for public, private and non-profit sectors

Everywhere we look, AI is entering environments where human judgment, ethics and adaptability matter.

  • Public services are using AI for triage and casework.
  • Health and social care teams are exploring AI-augmented decision pathways.
  • Enterprises are piloting AI agents inside core operational workflows.
  • Recruitment systems are processing AI-generated applications at scale.
  • Cybersecurity teams are navigating machine-speed threats.
  • Organisations are wrestling with digital inclusion and capability gaps.

In all these settings, responsible adoption depends on people who can adapt, reason, and apply judgment alongside intelligent systems.

The question for leaders is no longer: “Do we have the tools?”

It is: “Are our people ready for AI?”


The leadership shift now required

AI leadership in 2026 demands three things:

  • Understanding the conditions needed for people to adapt quickly and safely.
  • Taking readiness as seriously as system deployment.
  • Exploring new, evidence-based ways to understand human capability in real time.

This is not about selling tools or promoting solutions. It is about recognising that the success of AI for Good, digital transformation, public service outcomes and workforce innovation now rests on the intersection of human judgment and machine intelligence.

We have an opportunity to learn from previous revolutions and to make different choices this time.

Not just deploying the technology, but developing the readiness to use it wisely, together.

