AI Safety and AI Assurance: Two Sides of the Same Coin
September 2024
Bubble or no bubble, AI is here to stay. And we need greater confidence in its outputs and better safeguards against potential harms. To get there, two fields are emerging, fields that naturally intersect more and more, even if rarely by design: AI safety and AI assurance.
Both play critical roles in designing, deploying, and regulating AI systems, and their convergence holds significant implications for the future of policy, especially as AI adoption accelerates and we grapple with the risks of advanced AI systems. With the upcoming AI Safety Bill in the UK, it’s worth exploring how these two areas can support and enhance each other.
At its core, AI safety is about ensuring systems behave as intended—reliably, predictably, and without harm. Think of it as the crash-test dummy for AI. While safety testing ensures that AI systems are fundamentally sound, AI assurance steps in to make sure we can prove that over time, particularly as regulations tighten and systems evolve.
Take autonomous vehicles, for example. It’s not enough to ensure the car knows how to stop and go; it also needs to handle unpredictable scenarios, from sudden obstacles to extreme weather. That’s safety testing in action—ensuring that AI works robustly across a range of situations. In high-risk sectors like healthcare and finance, safety testing is even more critical. The potential consequences of failure—misdiagnoses, financial fraud—demand that safety isn’t just a check-box exercise but a fundamental pillar of trust in AI systems.
Once safety testing is complete, AI assurance takes over. If safety testing is the crash-test dummy, assurance is the process of reviewing the system’s “logbook”—proving the AI not only passed its initial tests but continues to perform under evolving conditions. It’s about showing that AI behaves as expected—ethically, consistently, and within legal boundaries.
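To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. The function names, thresholds, and data structures are my own assumptions, not part of any AISI or government framework: a one-off pre-deployment check plays the crash-test-dummy role, while an append-only “logbook” of dated results is the raw material assurance draws on later.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical pre-deployment safety check: does the model behave as intended
# on a curated set of "crash-test" scenarios (edge cases, adversarial inputs,
# extreme conditions)?
def passes_safety_tests(model, scenarios, min_pass_rate=0.99):
    passed = sum(1 for s in scenarios if model.predict(s.inputs) == s.expected)
    return passed / len(scenarios) >= min_pass_rate

@dataclass
class LogbookEntry:
    """One dated record of a check: the basic unit of assurance evidence."""
    checked_on: str
    check_name: str
    passed: bool
    details: dict

def record(entry: LogbookEntry, path: str = "assurance_logbook.jsonl") -> None:
    # Append-only log: assurance is about being able to show, later, that the
    # system kept meeting expectations, not just that it once passed a test.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
```

The point of the sketch is the split of responsibilities: the safety test answers “is it sound right now?”, while the logbook is what lets you demonstrate that soundness over time.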
As more organisations adopt AI assurance frameworks, we could see assurance become as formalised as financial auditing. AI assurance doesn’t just align with regulation; it is deeply embedded in responsible practices within organisations, ensuring systems remain fit for purpose. This could eventually normalise AI validation as part of business operations, embedding trust and compliance at the core of AI governance.
Leading this conversation in the UK is the AI Safety Institute (AISI), which works closely with policymakers, academia, and industry to set robust regulatory frameworks. Their mission? To ensure AI systems are not only safe but beneficial, driving the UK’s leadership in AI governance. But what makes this even more compelling is the complementary work of the Responsible Technology Adoption Unit (RTA), which previously operated as the Centre for Data Ethics and Innovation (CDEI).
The RTA has been instrumental in shaping what AI assurance should look like, focusing on transparency, fairness, and risk mitigation. I was lucky enough to be part of that pioneering work as early as 2020. Their AI Assurance Roadmap has provided tools and frameworks for organisations to prove that their AI systems meet ethical, legal, and societal expectations. This work is setting the stage for a future where AI assurance could be as standardised as any other compliance exercise.
By housing both the AISI and the RTA within the Department for Science, Innovation and Technology (DSIT), the UK is well-positioned to foster cross-pollination of ideas between safety and assurance. While the two bodies have distinct roles, their combined expertise—policy development from the RTA and technical validation from the AISI—paves the way for more joined-up thinking in AI regulation. This holistic approach could place the UK at the forefront of global AI assurance.
AI safety testing has the potential to normalise and accelerate AI Assurance. When robust safety testing is established upfront, it becomes the foundation for continuous validation. Rather than treating validation as an afterthought, it becomes embedded in the lifecycle of AI development. This shift could lead to organisations making validation as routine as financial audits.
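As a sketch of what validation “as routine as financial audits” could look like in practice (again illustrative only; the metric names, thresholds, and schedule are assumptions rather than a prescribed standard), a recurring check might re-evaluate the deployed model on fresh data against the acceptance criteria agreed at the safety-testing stage, and append the outcome to the same logbook:

```python
from datetime import date

# Hypothetical thresholds an organisation might commit to up front, mirroring
# the acceptance criteria used in pre-deployment safety testing.
THRESHOLDS = {"accuracy": 0.95, "max_subgroup_gap": 0.03}

def periodic_validation(model, fresh_data, metrics_fn):
    """Re-run the agreed checks on recent data; return pass/fail plus evidence."""
    metrics = metrics_fn(model, fresh_data)  # e.g. overall accuracy, per-group gaps
    passed = (metrics["accuracy"] >= THRESHOLDS["accuracy"]
              and metrics["subgroup_gap"] <= THRESHOLDS["max_subgroup_gap"])
    # The returned record can be appended to the assurance logbook, building
    # the evidence trail a regulator or auditor could later review.
    return {
        "checked_on": date.today().isoformat(),
        "check_name": "quarterly_validation",
        "passed": passed,
        "details": metrics,
    }
```

Run on a fixed cycle, much like an audit calendar, this kind of check is what turns a one-time safety result into a standing claim an organisation can keep substantiating.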
Imagine a world where AI Assurance is as commonplace as annual reports. By proving that AI systems meet performance and ethical criteria consistently, we can build stronger trust with regulators, businesses, and the public. As a result, innovation will accelerate—because trust is the key to scaling AI responsibly.
As someone who has spent years working on responsible AI, I can’t help but feel a sense of déjà vu. When we first rolled out PwC’s Responsible AI toolkit, the challenges were as real as the excitement. We learned early on that building ethical AI isn’t just about technology; it’s about building trust. And as I always say, trust doesn’t come from promises—it comes from proof. AI safety and assurance are two sides of the same coin on this journey.
As AI reshapes industries and societies, the intersection of AI safety and assurance will be pivotal in ensuring that these systems not only function but do so ethically, safely, and within regulatory frameworks. The work of the AI Safety Institute and the Responsible Technology Adoption Unit in the UK will be crucial in guiding this process. Their collaborative efforts will not just react to AI risks but anticipate them, setting a global precedent for responsible AI governance.
Ultimately, AI safety testing is about more than preventing harm; it’s about creating a foundation for AI validation to become a formal, standard practice. By embedding these processes into AI development from the outset, we can ensure that trust in AI isn’t just a goal—it’s something we can demonstrate, time and again.
Because in AI, as in everything, trust is everything—and trust comes from showing your work.