Five things I’ve learnt about AI adoption.

Written by Rachel Coldicutt, Executive Director, Careful Industries

AI is currently inescapable. It’s not just making itself known in every app you open – if you’re a digital leader then everywhere you turn someone will be explaining that it’s a strategic priority or a critical issue or the key to unlimited success or mind-bogglingly transformational efficiency. 

And even if you don’t work in tech, you’ll be hard-pressed to escape those two little letters in the headlines. It can seem as if AI is something that is happening to us, whether we like it or not, but the reality is that it’s a set of technologies that governments, organisations, and individuals can choose to adopt – or not. As digitally enabled disruption is being used for anti-democratic ends by Elon Musk in Washington, it’s a good time to stop and think more deliberately about the choices you are able to make about technology, and to have the courage to slow down and make more careful choices. 

At Careful Industries, we work with leaders and organisations to help them figure out what AI means for them, because the reality is that – whatever that sales email might tell you – AI won’t transform everything equally or in the same way for everyone. For instance, if your team or organisation is sitting on troves of structured data and delivering repeatable processes, you’re going to be more “AI ready” than one that delivers people-centred services in a complex environment. If your main task is wrestling with legacy systems, the way you’ll think about AI will differ from an organisation making the most of a box-fresh system. Just like with digital transformation, there’s no one-size-fits-all for AI adoption. 

Some of the things I’ve learnt over the last year: 

  1. FOMO is not a strategy

When tech companies come and talk to you about how great AI is, it’s not because they’re benevolently coming to share the news: it’s because they’re selling you something, so they’re going to go out of their way to make it sound like the best thing since sliced bread. One reason AI is so prevalent right now is that a small number of people and companies (some of them quite friendly with the new US President) stand to make a lot of money and gain a lot of power from more people and companies adopting their technologies. So, go easy on the FOMO and feel comfortable questioning the hype.

  2. There’s no such thing as a stupid question

The field of AI is almost intentionally confusing. 

For a start, there’s no single agreed definition of AI, meaning that every tech company offers a different explanation and every regulatory environment has a slightly different spin. The field is also full of dense technical terms and multi-letter acronyms (LLMs, ADMS, GPTs, AGI, and so on), and every few months a new term will reach the top of the hype cycle and seem to pop up in every meeting and in every LinkedIn post. Over the next few months, many of the people who told you genAI would change everything will start saying “agentic AI” is the future of work/the end of work/the ultimate disruptor* (*delete as appropriate) – you definitely don’t need to nod your way through that. Feel confident to stop and ask questions and make sure you know what’s being described. 

  3. Find out what people are already using 

It’s very easy to assume that the way you and your immediate circle of colleagues work is what everyone is doing, but if you don’t work in a highly process-driven environment it’s likely that different people will have different habits and preferences. Some co-workers might use Claude or ChatGPT on a personal device as a way of getting started with a difficult project; some might go out of their way to avoid AI enhancements to products and services at all costs. Before you make any assumptions, it’s worth doing a survey to find out what people are actually doing and using that as a starting point for thinking about further AI adoption and whether it will work for your team or organisation. 

  4. Make safe spaces for experimentation 

If you feel curious but nervous, create environments for trying things out with safeguards in place and some easy-to-follow best practice guidelines. Lots of people learn best by doing, and it’s not always possible to imagine what the outcomes might be. Rather than starting with a massive transformation or implementation programme, make spaces to experiment and see what that sparks. 

  5. Automate the easy things 

It’s tempting to throw new technologies at your most wicked problems – the ones that seem to get harder over time, or have layers of complexity around them – but in reality you’ll just make a difficult thing even harder, and potentially create new points of failure. You’re better off automating the things that come easily to your team or organisation – particularly if you’re using generative AI, where inaccuracies and mistakes in outputs could undermine any efficiencies, or an automated decision-making system, which might produce biased or incorrect outcomes. 

Take something where failure will be obvious – where there will either be clear external signals if it’s not working, or where the skills and experience of your staff will mean that everyone is alert to what the wrong kinds of outcomes look like. For instance, if you work in social care, don’t automate complex human decisions about whether or not a young person is at risk; start by improving case workers’ diary management with better route planning between visits and automated appointment reminders. Adopt tools in a way that matches your skills and confidence rather than feeling pressurised to disrupt for disruption’s sake. 

No matter what anyone tells you, AI isn’t inevitable, and you should feel empowered to make good decisions in the workplace that create better outcomes for everyone. 

