Imagining different: The state and open source
February 2026
Digital leaders across the public, private and non-profit sectors face a shared challenge: how do we harness the transformative power of artificial intelligence while ensuring it is safe, responsible and delivers real value?
In Defence, that question is not theoretical. It is operational, strategic and urgent.
As Digital Commercial Director at the Ministry of Defence, I work at the intersection of AI policy, commercial strategy, and operational delivery. Whether serving on the Ministry of Defence AI Steering Group, contributing to the Government Commercial Function Digital Board, or working with the Department for Science, Innovation and Technology’s Digital Commercial Centre of Excellence, my focus is consistent: turning AI potential into trusted, deployable capability.
The lessons we are learning in Defence are relevant across HMG.
Regulation is often framed as a constraint on innovation. In reality, effective AI regulation is what unlocks scale.
The UK’s approach, led by the Department for Science, Innovation and Technology, has been principles-based, risk-aware, and sector-sensitive. That matters. It allows sectors like Defence, health, and HMRC to apply consistent guardrails while tailoring implementation to operational realities.
For senior leaders, the implication is clear: AI regulation should not be treated as a compliance exercise, but as a design principle.
Responsible AI is not a bolt-on. It must be embedded into:
When regulation and commercial strategy align, innovation accelerates, because trust accelerates.
One of the most persistent risks I see is treating AI like a conventional digital purchase.
AI systems are:
Traditional procurement frameworks can struggle with this dynamism.
That is why we developed the AI Buying Guide for Defence, to give commercial and digital teams clarity and confidence when acquiring AI-enabled solutions. The core principles, however, apply across sectors:
Too many AI procurements begin with a solution looking for a use case. Define the operational or organisational need first. AI is a means, not an end.
Bias testing, explainability, human oversight, and accountability structures must be specified contractually, not assumed.
What data trains the model? Who owns it? How is it governed? How will performance drift be monitored over time?
AI systems improve, or degrade, over time. Contracts must allow iteration, retraining, and performance review without locking organisations into static assumptions.
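As a simple illustration of what contractual performance monitoring can mean in practice, the sketch below compares a model's accuracy on recent data against an agreed baseline and flags when the drop exceeds a tolerance. The function name, threshold, and data are hypothetical, not drawn from any MOD guidance; real drift monitoring would use richer metrics and agreed review processes.

```python
# Minimal sketch of performance-drift monitoring: compare a model's
# accuracy on a recent batch against the accuracy agreed at acceptance.
# The threshold (max_drop) and all names here are illustrative.

def drift_alert(baseline_accuracy: float,
                recent_predictions: list,
                recent_labels: list,
                max_drop: float = 0.05) -> bool:
    """Return True if recent accuracy has fallen more than
    `max_drop` below the agreed baseline."""
    correct = sum(p == y for p, y in zip(recent_predictions, recent_labels))
    recent_accuracy = correct / len(recent_labels)
    return (baseline_accuracy - recent_accuracy) > max_drop

# Hypothetical example: baseline accuracy 0.92, recent batch is 7/10 correct.
preds  = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 1, 1, 1]
print(drift_alert(0.92, preds, labels))  # prints True: 0.92 - 0.70 > 0.05
```

The point of the sketch is contractual rather than technical: the baseline, the measurement window, and the tolerated drop are all things a contract can specify up front, so that "performance review" has an objective trigger rather than relying on goodwill.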
Outcome-based contracting, staged deployment, and shared risk mechanisms often work better than traditional fixed-scope models.
These principles are as relevant to a global charity deploying AI for service delivery as they are to a Defence programme managing mission-critical capability.
Another systemic challenge is scale. The UK has world-class AI companies. Yet many struggle to navigate public sector procurement, security requirements, or the pace of Defence delivery.
Through the Defence Tech Scaler programme, we have focused on bridging this gap, helping AI companies scale responsibly while meeting the unique demands of Defence.
For senior leaders across sectors, the lesson is simple:
If you want innovation, you must design commercial ecosystems that allow it to survive.
That means:
Innovation does not scale by accident. It scales by design.
Across government boards and cross-sector conversations, one theme consistently emerges: AI adoption is less about technology maturity and more about trust maturity.
Trust from:
Responsible AI adoption is therefore a leadership issue, not merely a technical one.
Leaders must:
In Defence, this responsibility carries particular weight. But every sector deploying AI at scale holds a comparable duty to employees, customers, and communities.
The UK has the opportunity to position itself as the most agile and trusted environment for AI innovation. That ambition requires more than policy statements. It requires:
In my role across Defence and wider government digital leadership forums, I see daily how powerful that alignment can be. When strategy, commercial acumen, and ethical intent move together, AI becomes not just a technological tool, but a source of national advantage, delivering capability, security, resilience, and economic growth.
Whether you are leading transformation in central government, modernising services in a local authority, scaling digital products in the private sector, or deploying AI for social impact in the non-profit world, the questions are the same:
AI will not transform organisations through ambition alone. It will do so through disciplined, ethical, commercially intelligent leadership.
The revolution is not in the technology; it is in how we choose to adopt it.