National Digital Conference 2024 – Chair’s Blog

Written by Prof. Mark Thompson, Professor in Digital Economy, University of Exeter

This year saw Digital Leaders convene in Salford’s Media City on a beautiful sunny autumnal day to explore the day’s theme, Public Services in the Age of AI. Our day was nicely framed by Jo Miller’s citing of William Gibson’s observation that ‘the future’s already here: it’s just not evenly distributed’. Jo’s challenge to the audience was to consider how we approach the future when it’s already here.

As might be expected of a DL conference, the conversation was of a very high quality – in fact, yesterday’s event was of such richness that I’m going to depart from my Chair’s Blog convention of summarising contributions chronologically by presenter, to focus instead on some of the key themes that I felt emerged from our day.

As might be expected, there was plenty of discussion around the need for public services in the age of AI to ‘retain their compassion’; moreover, it was considered important that AI should wherever possible empower, rather than replace, front-line public servants, and we heard various examples of how this is already happening. An associated observation was the need to be transparent with citizens about when AI is being used, especially as AI becomes better at closely mimicking human interaction.

Similarly, the observation was made that those commissioning technology are often commissioning ‘black-boxed’ AI even if they are not focally aware of this, as technologies become ever more embedded in broader supply chains and infrastructures that span the globe. Ensuring transparency, and associated awareness, about the presence of AI somewhere in the value chain of public services was thus seen as a new responsibility for those commissioning public services.

As might also be expected with such a topic, innovation loomed large in our discussions, particularly the difficulty of scaling. Rachel Singleton cited the DoD’s Lt-Gen Jack Shanahan’s observation that AI demonstrators are deceptively easy to start but fiendishly hard to scale up, and it was agreed that we’re still in the early stages of proving ROI on most AI implementations; indeed, even agreeing on suitable indicators of ROI for AI initiatives can be problematic.

Perhaps this situation underpins Nijma Khan’s lament that far too many AI PoCs are left on the cutting-room floor and never scaled: her prescription was to focus more on building MVPs than PoCs. Ensuring that AI is focused on public value, rather than on the need to be seen to be ‘innovating’ per se, was also seen as important, although unfortunately early signs are that this is not always happening. Perhaps being a ‘fast follower’, innovating as you go, could be a good way of balancing innovation with commercial and ethical risk here.

Relatedly, the traditional government process of first specifying technologies and services and then tendering these to the market was seen by most as hopelessly unsuited to the accelerating pace at which AI is evolving, as well as to the way this pace is adding complexity to risk, especially around ethical responsibility. In fact, we ran a group wordcloud around these issues:

[Image: ND AI Conference wordcloud]

Joe Hill had a few specific challenges to long-standing government practice in this area, seen as slow, cautious, bureaucratic, and overly risk-averse (almost defaulting to ‘no’). The first was the need to be more prepared to reform public institutions around AI rather than the other way around; the second was that government tends to focus on cost, but not on the cost of the excessive time and bureaucratic delay that such caution creates, which can produce the same (or worse) outcome.

Third, AI scenarios are too often benchmarked against some idealised service levels that do not exist in reality, since existing services are already marred by natural human error and inefficiency: procurement should therefore benchmark against current, rather than aspirational, levels of performance and optimise from there.

Fourth, we should address the ‘laptop bias’ in many AI use cases, which centre on digital rather than physical tasks. Finally, Joe’s view was that ‘ensuring the human in the loop’ is not always the optimal design: in cases where AI has the potential to cut transaction times by months, that benefit can be lost because transactions often await final sign-off by a human, so a more sophisticated understanding of these trade-offs is required.

Perhaps the final major topic concerned the various implications of AI’s embedding in the broader contextual landscape of legacy technologies, processes, organisational structures, and value chains. Here, Phil Swan led the way with the acknowledgment that the use of AI presents an opportunity for central government to play more of a role in co-ordinating local public services across their various silos, pointing to some work currently underway in MHCLG in this area. He also made the point that ‘citizen-centred’ AI doesn’t just mean ‘human AI’: it also means the capacity of AI to join up currently disconnected, siloed services around the citizen in a seamless way.

The point was also made that AI’s considerable predictive abilities, so useful for reducing failure demand by joining up diverse indicators into a pattern that proactively predicts issues, cannot possibly come to fruition in an organisational environment of fiefdoms and general reluctance to share common processes and data (my words, not Phil’s!).

Ah yes, data.  Returning to Jo Miller, we’re all going to have to take the idea of training data much more seriously – particularly the idea that ensuring ‘human AI’ requires continued access to original, human-generated content, which is far from guaranteed (see the above comments about AI’s embeddedness in far broader infrastructures and even other black-boxed technologies and datasets).  Who’s thinking about that?

I hope that the above selection of insights from this year’s Conference demonstrates the exceptional quality of discussion throughout the day.  Thanks, as ever, go to all the Digital Leaders team, our excellent speakers, and to all those Digital Leaders themselves who once again supported such a rewarding event.

