What I think about AI

Written by Dave Rogers, AI lead, Public Digital

Artificial Intelligence is having its moment in the sun.

It’s a technology older than the internet, but with the open release of ChatGPT 3.5 in late 2022, something changed. AI’s power and distinctiveness suddenly became visible to a much wider audience.

Glossy coverage of AI has been bubbling away for years – in the marketing of AI vendors, in the promotional material of tech and AI-specialist consultancies, and periodically, breaking through to mass market media.

But in 2023, this has escalated. AI is now hot, and this heat is having an immediate impact.

The challenge of responding to AI

For the people we work with at Public Digital – leaders and practitioners in institutions across the world – AI has become a popular topic, and is a consideration in most digital strategy and delivery.

Inside organisations, people are feeling new pressure to leverage, exploit, or harness AI. These words evoke the sense that, if you simply started using AI more, you’d quickly release untapped value for you and your users/customers. As with every new hype tech, it is easy to feel that if you’re not buying in now then you risk rapidly falling behind.

But even with the hype, the emerging identity around AI is contradictory. As far as media commentary is concerned, it is both an economic saviour, and an existential threat. In both senses, these are sensationalist interpretations.

The reality, as is so often the case with technology, is more complex.

While these interpretations claim to spell the future – optimistic or pessimistic – of AI, the truth is that its social and economic impact is hard to predict. This impact is undeniably going to be larger than recent technological fads. If you scrape below the surface of AI, it is a genuine technological novelty, with its capacity for impact perhaps more comparable to the explosive effects of the internet than to the faddish ripples of blockchains or web3.

Due in part to the novel experience of using AI for the first time, as well as its sensationalist reported in the media, the subject of AI invites a range of different – and very powerful – emotional responses.

Let’s examine those responses, and look for ways to balance them with more grounded, pragmatic responses to AI which will allow organisations to harness its potential.


From the direct experience of generative AI tools, through to the bold predictions of productivity and contribution to economic growth, it is easy to get swept up in the seemingly infinite possibilities of this technology.

But in reality, it’s very hard to tell which human activity can be replaced or augmented by machines, and crucially, the side effects that will occur when this happens. Humans typically act within complex systems. Making one part more efficient through the use of AI does not necessarily produce a positive outcome for all.

For example, automating school essay writing creates a shortcut through a currently crucial form of assessment within the education system. Perhaps this is cheating, or perhaps it exposes shortcomings in how we assess skills.

Equally, automating commercial bid writing doesn’t make buying and selling “more efficient”. it’s more likely to increase the volume of bids, and introduce new challenges for bid assessors and the wider procurement process.

This excitement for AI is perhaps most apparent in market speculation. Huge sums of money will be made around AI speculation, despite widespread concerns that many AI companies are forming without an economic moat (a sufficiently unique and sustainable market advantage).

Perhaps some of the biggest winners will be consultancies (the author uncomfortably notes that he is a consultant!), who are responding to the energy for AI-centric change generated by this moment.


Fear is a reasonable reaction to AI. It is already causing harm, through perpetuating bias, deep fakes, disinformation, social engineering attacks and growing carbon footprint. But is it an existential threat? AI experts seem to think so.

The question is: Can experts in a specific technology be relied upon to understand the impact of that technology in society?

Many people are anxious about losing their job, or what they love about their job, to AI. Major consultancies, amplified by the media, are consistently painting an over-simplified picture of economy-wide job replacement. Lessons from the emergence of software in the 1960s, and the world wide web in the 1990s, tell us that digital technologies certainly cause disruption to jobs, but they rarely simply replace them.


AI has magical and awe-inspiring qualities. It does things that make it seem human-like, or even akin to a strange, unrecognisable form of intelligence.

Scratching below the surface of these ‘magical’ qualities, you quickly encounter complex software, mathematics and psychology. This can make AI inaccessible, even for those with strong digital literacy, or technology skills outside of the field.

It’s vital that digital leaders and practitioners remain curious, and nurture a desire to explore behind the magical surface layer.

To be put to effective use, AI needs to be properly understood.

Looking beyond emotional responses

Decision-makers in organisations need to harness this excitement for, and awe of AI into the delivery of value, while channelling some of that fear into balanced consideration of risk.

But beyond these emotional responses are pragmatic ways to approach AI:

Acknowledge you’re often working with prototypes

We need to see this technology AI for what it really is: experimental and flawed. It’s experimental because so many of the popular products are in a prototype phase: still learning how to design for safety, legality and broad utility. Its many flaws, which don’t necessarily undermine utility, are many, from questionable use of intellectual property, through to innate vulnerabilities like prompt injection.

When designing how AI is integrated into your overall service design, tool use and technology architecture, we must be conscious of the fact that we are using a prototype component.

Look at real world applications

Real world applications of AI are where we need to listen more, and listen directly, in order to demystify this technology. The Columbia Review of Journalism wrote about ‘How to report better on artificial intelligence. It includes the following quote from Jonathan Stray, Senior Scientist at the Berkeley Center for Human-Compatible AI:

“Find the people who are actually using it or trying to use it to do their work and cover that story, because there are real people trying to get real things done”

This is a principle that goes beyond journalism, and into the pursuit of a more truthful, less binary understanding of AI. It is also a fundamental part of user-centred design, showing how this practice endures as technology evolves.

Strive for effective communication

When organisations use AI, decision makers around its application will need a shared and pragmatic understanding of the technology.

It will need to be demystified through effective abstractions, metaphors and stories which bring the nuances and realities of AI to a wider audience.

Treat as one tool in the box

Treat AI as just another tool in the digital toolbox – one that you’ll occasionally reach for, but only for the right kind of job.

Finally – use it!

The easiest way to see AI for what it is is to use it.

Use AI as you would any other new technology: play with it, try building things with it, find the edges of its utility, safety and efficacy. This process need not take extraordinary financial investment or time – lots of AI in 2023 is accessible, cheap to get started, and often wrapped up inside low-commitment software-as-a-service products.

There’s also often no need to ‘go big’ on AI, such as AI-ringfenced financial investment, AI teams or AI-centred project goals. This framing incentivises the adoption of AI over meeting user needs or delivering value which directly contributes to the mission of the organisation (a mistake so common for hyped technology).

For leaders, it’s better to remove barriers to using AI, than impose the will to adopt it.

For digital practitioners – product managers, designers, software developers, data scientists – it’s important to use it, and gain enough familiarity to inform everyday practice and decision making.

Originally posted here

Read More AI & Data

Comments are closed.