The AI standoff: Why Britain feels stuck

Written by Jonny Williams, Chief Digital Adviser - UK Public Sector, Red Hat

Last week, in a quiet room in Westminster, a group of leaders gathered to discuss AI. I cannot name them because we met under the Chatham House Rule, but I can say that the group included people who have built national systems, advised ministers at difficult moments, and lived with the real and messy constraints of digital change.

They understand the promise of AI. And they see, and experience, the structural barriers holding Britain back.

The Digital Leaders roundtable was framed around trust, risk, and value. But alongside these themes, one idea kept resurfacing: a steady erosion of agency. A sense that decisions are often not being made intentionally, but as a result of inertia, habit, and commercial gravity.

This issue of agency relates directly to the question of sovereignty. Not the flavour of sovereignty focused on data residency, but the basic ability of our public institutions to design their digital future rather than having choices made for them.

The concerns from some attendees are backed by data. A recent industry survey paints a challenging picture. Most UK organisations believe Britain can be an AI powerhouse within three years. But almost nine in ten admit they are not yet delivering value from their AI investments. Meanwhile, shadow AI, where employees use unapproved tools, has become commonplace.

These issues are often framed as technical, but they are fundamentally questions of agency at all levels of society. People are trying to move faster than the structures around them allow.

One observation from the room was that organisations are under immense pressure to adopt AI at pace, but nobody knows precisely where value might reside. This places them on a treadmill: it feels as though they should be making progress, yet in reality everything is static. In the pursuit of motion, however, they are unknowingly making lasting decisions about vendors, platforms, and dependencies.

Reflecting on these challenges helped me to capture a useful definition: Digital sovereignty is simply the ability to make good choices deliberately.

To do this, you need the power to set direction, and the institutional competence to choose well in the first place.

This requires leaders who understand sociotechnical strategy, procurement teams who can design contracts fit for AI, and policy makers who appreciate the long-term implications of each path. Without this, organisations default to whatever is easiest, cheapest in the short term, or already embedded.

During the discussion there was recognition of decisions that had occurred by default rather than design. A public body relying on an incumbent supplier because procurement frameworks make that the easier path. A department staying on a proprietary cloud platform because of sunk costs.

These are not isolated scenarios. These choices accumulate. Legacy systems are not created by a single decision but by hundreds of small ones nobody examined closely enough. The roundtable's concern was that today's unexamined choices are building tomorrow's AI legacy.

What many leaders in that room were seeking for the future of AI feels closer to intelligent dependence. They want a clear understanding of what to rely on, under which conditions, and for how long.

This is not isolationism or absolute self-sufficiency. Nobody argued for building everything themselves. But an example from the scientific community, which builds on open source ecosystems while leveraging enterprise open source for support, shows that sovereignty comes from shared foundations, transparent supply chains, and the ability to inspect and adapt the tools you depend on.

However, intelligent dependence requires capability, and that capability seemed to be lacking in many areas. Some participants described institutions frozen by fear of making the wrong move, terrified of becoming the first department to suffer a major AI-related incident. Meanwhile, one attendee observed that hyperscalers appear to be increasingly responsible for the decisions that the public sector is taking.

Speed does matter, and many people feel that Britain cannot afford to wait. But accelerating de facto choices without due consideration, especially when those choices disproportionately undermine future optionality, is not the same as delivering value sooner. The former is comfortable; the latter requires asking difficult questions. Where does this model come from? What assumptions does it carry? How easily can we switch course or adapt when requirements change? Many at the roundtable agreed that provenance and transparency should be a far greater concern.

This is why openness emerged as a recurring theme during our discussion. While we agreed that trust can be a relative concept with moving goalposts, openness is easier to pin down. Fundamentally, you cannot fully govern or assure a system you cannot inspect. At a time when AI seems to evolve faster than policy, the ability to adapt, to understand what you are using, and to change it, is an essential form of resilience.

In fact, in a recent survey Red Hat found that 84% of organisations see enterprise open source as vital to their AI strategy. Yet, I would argue, most public sector leaders today don’t understand openness well enough to agree.

This aligns very closely with a key set of questions that emerged during the event. Who owns AI strategy? Who decides when to build, when to buy, and when to form strategic partnerships? Who challenges the default path? These decisions will shape the next decade of public services, yet too often they are left to procurement inertia or delegated to a single person without the right support.

While the AI skills gap is frequently highlighted as a critical challenge facing the nation, our discussion made one thing abundantly clear to many of us. Yes, there is a skills gap. But it’s a leadership gap, not a technical one.

Viewing AI as a technology challenge is a flawed perspective. It is fundamentally a strategic issue, core to the running of any organisation that wishes to be effective in the years ahead. As ever, this boils down to people, process, and tech, in that order. The role of Chief AI Officer must, at the very least, be a sociotechnical one, and in reality it sits closer to COO than CTO.

During the roundtable I presented the room with two questions that every organisation should be asking itself.

Do you believe that technology will continue to evolve at pace?

Do you believe that the cyber threat landscape will become more volatile?

If you answered yes to either question, then now is not the time to overcommit when you only know half the story. You need optionality, agency, choice. Sovereignty. This is a form of security that is not a nice-to-have. It's non-negotiable.

Fortunately, the experts in the room appreciated these questions, and some shared that most leaders in the UK would answer yes to both.

But as one attendee noted, if the landscape keeps evolving and the threat environment becomes increasingly volatile, most strategies (judged by procurement decisions and technology stack selection) appear to be aligned with the opposite set of circumstances.

So, if we are strategically misaligned with reality as a nation, are we stuck?

Are we doomed to remain in a standoff with our fear of failure rather than asserting ownership? Doomed to take the default path rather than pursuing the effort of design? Doomed to leave value on the table while enabling others to profit?

Britain has enormous potential. We are leaders in research. And we showed during the early GDS years that we can lead in practical digital government too, when leadership, capability and courage align. We could lead again, not necessarily by winning a global AI race but by becoming the world’s governance and trust experts. That could be our competitive advantage.

Creating and demonstrating AI systems worthy of a democratic state, designed with clarity, accountability, and scientific rigour.

But this requires intent. It requires choosing design over default, agency over inertia, capability over convenience.

The public servants I was with in Westminster last week want to make these choices. They see what is possible. They know Britain does not need to dominate AI globally. It simply needs to stop losing the ability to choose what it adopts, adapts, or rejects.

Many people understand that sovereignty is not about being first, but about retaining freedom. Freedom to decide what serves the nation’s interests, values, and citizens.

The question now is whether our institutions will embrace new leaders and give them the mandate, the skills, and the confidence to act. Because if we do not purposefully choose our digital future, someone else will choose it for us.

But, if we act now, AI might be the banner under which we can finally achieve our national digital potential.

