I recently attended a fascinating roundtable on AI. The discussion ranged across many different perspectives, and it got us all thinking that, amid the understandable hype, there is a risk of missing something crucial.
The scale of growth of AI tooling is unlike anything I’ve ever seen in my career. That’s not just me saying it. Here’s a slightly more authoritative viewpoint:
“AI had roughly linear progress from 1960s-2010, then exponential 2010-2020s, then started to display ‘compounding exponential’ properties in 2021/22 onwards. In other words, the next few years will yield progress that intuitively feels nuts.”
Jack Clark, the Co-Chair of AI Index at Stanford University.
Alongside the technical progress, there has been a similarly compounding growth in public curiosity about AI. This explosion in interest was fuelled first by ChatGPT, and now by other publicly available generative AI tools.
It seems to me that with this unprecedented growth comes an understandable risk: jumping to AI solutions first and then hunting for justifications for their use.
Another quote for you…
“When all you have is a hammer, everything looks like a nail”
It's a phrase I've used for years without realising it comes from Abraham Maslow (he of "Hierarchy of Needs" fame). A rush to adopt AI could actually create additional problems rather than address fundamental needs.
The emergence of these tools is very exciting and offers real potential to accelerate change. But I worry that we run the risk of letting the solutions dictate the approach and obscure the very problems we need to solve (and apologies in advance to both reader and Maslow for abusing the analogy).
More seriously, I worry that in our haste to deploy AI and 'proofs of concept', we risk underestimating the ethical concerns that come with it. The approach can become a subconscious excuse to 'just get going', baking in an unknown and potentially unacceptable level of risk that could discriminate or cause other unintended consequences.
This isn't all doom and gloom: I actually believe public sector organisations are in a great position to use AI. When we work on these projects, we don't start with the solution. We start by understanding the desired outcomes and the problems that matter most.
When we truly understand these, AI tools offer an amazing set of capabilities to draw on as we co-design and iterate towards potentially viable, feasible and desirable solutions.
This doesn't need to be a hugely time-consuming endeavour. For example, we worked with the Home Office to help them gain alignment on desired outcomes, understand their highest-priority problems, and explore potential approaches to solutions, all in a matter of days.
At Zaizi we're exploring the transformative potential of AI. These tools present entirely new opportunities to try things we've never done, in ways we've never thought of.
But progress is not just about using the technology itself. It’s about our ability to understand the problem we’re solving and whether AI genuinely offers the solution to that problem.