This year’s Public Sector Innovation Conference came at the end of yet another week, another lifetime in the story of AI, from a rampant, overzealously woke Google Gemini to Elon Musk suing OpenAI for, well, acting on his own previous advice and becoming more profit-driven (apparently he offered to withdraw his suit if it changed its name to ‘Closed AI’). Accordingly, it felt right to dedicate our whole day to discussing the impact of AI on the public sector, across three topics: AI in innovation, Ethics and AI, and AI for Good and Bad. I think this was one of the most dynamic, engaged conferences we’ve had, with most of the audience making some sort of contribution during the day.
Sabby Gill kicked us off with a summary of the recently published DL Attitudes of Leaders to AI Survey, which highlighted evidence of widespread interaction with AI across government, but also the need for stronger leadership, concerns around data and privacy, and a mixed picture on impact to date, with many citing the need to overcome silos as a key barrier. Our AI in innovation panel (Number Ten’s Eoin Mulgrew, Informed’s David Lawton, NAO’s Yvonne Gallagher, and DWP’s Shruti Kohli) developed and really pushed the ‘leadership’ and ‘silos’ themes: for me, one of the consistent, stand-out insights of the whole conference was the need to understand whether we’re doing merely tactical implementations or real transformation – and the clear consensus in the room was that we’re barely scratching the surface: technical innovation is happening, but very little strategic innovation.
We followed our first panel with an experiment: ‘There’s an AI for That’ – a sort of rapid speed-dating format in which we heard six examples – just three minutes each – of practical AI implementations from the ‘front line’ (thanks to Swindon’s Sara Pena, Norfolk’s Geoff Connell, Natural England’s Alex Kilcoyne, FCDO’s David Gerouille-Farrell, MoJ’s Shelina Hargrove, and CDDO’s Clive Kelman) – great, practical examples of AI being used to streamline, improve the accessibility of, co-pilot, integrate, and enable our public services. Surprisingly, panellists stuck to the 1-slide/3-minute rule, policed by an implacable Robin Knowles and his trusty alarm clock.
After morning coffee, we held our second panel, ‘AI and ethics in the public sector’, with FDM’s Sarah Wyer, TPXImpact/Nesta’s Imeh Akpan, and AWS’s Himanshu Sahni. The stand-out theme here was the need to acknowledge that AI is merely a mirror to society, with all its baked-in inequalities – and therefore the need to ensure diversity in the training of AI models. Unfortunately, it was felt that more attention is given to ‘user research’ than to the sort of ‘social research’ that might address this issue, whilst attention to bias, the need for explainability, and assessment of the usefulness to humans of every AI implementation were together seen as a practical framework for addressing some of these issues.
We finished our morning with an excellent ‘fireside chat’ between Malcolm Harbour CBE (Connected Places Catapult) and Rebecca Rees (Trowers & Hamlins), addressing how the public sector might inject more innovation into the way in which it procures AI. It was felt that government needs to adopt a more proactive, ‘market-making’ stance to encourage suppliers to meet current needs, and that this might require more creative approaches such as hackathons and much more pre-market engagement. It was noted that the upcoming Competitive Flexible Procedure should offer a solid framework for such behaviours, hopefully resulting in a less traditionally adversarial relationship between government and its suppliers.
Our afternoon kicked off with a hugely informative keynote from Ollie Ilott, Director of the AI Safety Institute, who provided us with an overview of the Institute’s work across four themes. First, we never fully understand the risks, since the capabilities of AI often emerge down the line, post-implementation. Second, the Institute uses automated benchmarks, red teams, and automated agents and tools to test for misuse, societal impacts, and possible AI autonomy, and to check that present safeguards remain sufficient. Third, the need to upskill people across these areas is an ongoing challenge. And fourth, the growth of AI is exponential, which makes it hard to focus on the frontier (as we must): we only ever evaluate ‘old drops’ of the technology, and thinking in an exponential way isn’t intuitive.
Our third panel session, ‘AI for good and bad’, saw contributions from the Army’s Brigadier Stefan Crossfield, NCSC’s Ollie Whitehouse, Zuhlke’s Dan Klein, and Actionable Futurist’s Andrew Grill. The stand-out challenge for me came from Andrew, who asked the audience first to raise their hands if they’d tried out ChatGPT (all hands up), and then to keep them raised if they’d used it in the past week (all hands down) – the point being that we need to engage more with the technology ourselves in order to appreciate both its positives and its risks. The panel raised a range of fascinating angles on the topic – from Ollie’s question about what trust looks like in a post-truth world to Stefan’s emphasis on leveraging commodity AI (rather than trying to build it within government), and the worrying observation that for our adversaries, ‘the human is not always in the loop’.
Our second and final ‘There’s an AI for That’ panel heard pop-up offerings from Beam’s Seb Barker, Skin Analytics’ Jack Greenhalgh, NHS Resolution’s Niamh McKenna, and Curistica’s Dr Keith Grimes: again, an astonishing range of applications for AI across diagnostics, assessment, accessibility, and documentation.
Our final speaker of the day was Harriet Harman MP, who gave an excellent closing keynote addressing the need to ensure that an AI-powered world is one of equality for all – especially given the ‘techbro’ culture that has prevailed thus far. She drew attention to Section 149 of the Equality Act, under which public authorities are accountable for ensuring that biases in datasets are opened up and challenged – but also noted that we will need radical change to our processes if we are to implement this much-needed regulation. Unfortunately, current legislative processes are far too complex and slow to keep pace with AI’s evolution, and Harriet suggested that we may need to grant special statutory powers to the Science, Innovation and Technology and the Business and Trade Select Committees in order to fast-track the state’s regulatory response.
All in all, a really enjoyable, packed day of informed views, challenge, and debate. Thanks to Robin and the team at Digital Leaders, and to our sponsors for the day: Informed Solutions, Connected Places Catapult, Zuhlke, and DigitLab.
Prof Mark Thompson, 17 March 2024