Why AI must be designed for everyone – AI’s role in accessible Public Services

Written by Chris Bush, Head of Design, Nexer Digital

AI is quickly becoming a core part of how public services are delivered. On paper, it sounds like a win: AI can make services smarter, quicker and more responsive. But the risks are just as real.

I’ve worked in digital design for public services long enough to know that, no matter how good our intentions, new technology often benefits those who are already well-served. AI is no exception. The real opportunity, and the real responsibility, is making sure it doesn’t deepen digital exclusion but instead helps dismantle it.

There’s no doubt AI can unlock better public services. It can support people with disabilities, simplify complex processes, and create more flexible, inclusive experiences. But that won’t happen by default. It only happens if we’re deliberate, not just about what AI can do, but who it’s doing it for.

A huge part of the challenge lies in the scale of exclusion we’re talking about. Around one in four people in the UK has some form of disability, whether visual, auditory, cognitive, speech-related or sensorimotor, and these are everyday users of public services. If AI doesn’t work for them, it doesn’t work full stop.

What gives me hope is that AI can be part of the solution when it’s designed thoughtfully. I’ve seen tools that describe images in vivid detail for people with sight loss, or convert dense, official documents into plain language for people with learning disabilities. I’ve seen public sector organisations use speech-to-text and sign language AI to bridge gaps for people who are deaf or hard of hearing. The opportunity now is to bring these tools into the mainstream.

But good intentions aren’t enough. We need to build these systems alongside the people they’re meant to serve. That means involving users with access needs from the start, not at the testing phase, but during discovery, design and development. It means asking, early and often: Will this help someone feel more confident, or more confused? Will it support independence, or make someone feel stuck in a loop they can’t get out of?

Too often, digital teams still default to building for the “average user.” The truth is, there’s no such person. Designing for inclusion means making space for a wide range of experiences. That’s especially true when introducing AI into services that already feel impersonal or difficult to navigate.

Take chatbots, for example. They’re often introduced to ease pressure on staff and improve access. But they frequently fall short for users who rely on screen readers, and many don’t work with assistive technology at all. For neurodiverse users, the way questions are phrased or the logic of the conversation can be confusing, making it harder to understand or resolve their issue.

If chatbots are to support inclusion, they need to be designed from the ground up with these users in mind. That means ensuring compatibility with screen readers, using clear, predictable language, and avoiding overly complex or rigid question flows. Most importantly, there must always be a straightforward way for someone to get help from a human when the bot isn’t working for them. And that experience needs to be tested with people who are most likely to face these barriers, not just those already comfortable with digital tools.
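
To make that concrete, here is a minimal sketch of what screen-reader compatibility and a human escape route can look like in a chatbot front end. It is illustrative only: the element IDs and the escalation handler are hypothetical, while the ARIA usage follows the WAI-ARIA specification, where a transcript with role="log" is treated as a polite live region, so new messages are announced without stealing focus.

```typescript
// Minimal sketch of an accessible chatbot transcript plus a human
// escalation route. Element IDs and the escalation handler are
// hypothetical; the ARIA behaviour (role="log" implies
// aria-live="polite") is defined by the WAI-ARIA specification.

function appendBotMessage(text: string): void {
  // New children of a role="log" region are announced politely by
  // screen readers, without moving the user's focus.
  const transcript = document.getElementById("chat-transcript");
  if (!transcript) return;

  const message = document.createElement("p");
  message.textContent = text;
  transcript.appendChild(message);
}

function setupEscapeHatch(onEscalate: () => void): void {
  // A real, always-visible button rather than a hidden gesture, so
  // keyboard and screen-reader users can always reach a human.
  const button = document.getElementById("talk-to-human");
  if (!(button instanceof HTMLButtonElement)) return;

  button.addEventListener("click", onEscalate);
}

// Expected markup (hypothetical IDs):
// <div id="chat-transcript" role="log" aria-label="Conversation"></div>
// <button id="talk-to-human">Talk to a person</button>
```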

I’ve also seen how AI can help improve public service content in really practical ways. Swindon Council, for example, used AI to transform dense housing contracts into easy-read formats, making a real difference for residents with learning difficulties. What made it successful was how it was developed with the people it was intended to support. The team worked closely with users throughout, gathering regular feedback and building a clear understanding of the challenges people faced. That meant they could make meaningful design decisions, ones rooted in real-life contexts rather than assumptions.

Where things get even more interesting is in the public sector workplace itself. AI is starting to support not just the general public but staff too, helping people with various access needs work in ways that suit them, whether that’s summarising meetings or transcribing conversations in real time. When coupled with inclusive policies and things like a “Manual of Me”, which lets colleagues share how they work best, AI can support healthier, more inclusive teams.

But with all this potential comes the need for caution. AI has to be ethical, transparent and controllable. We have to understand how it makes decisions, how to challenge those decisions, and how to shut it off if something goes wrong. That means thinking beyond functionality and into accountability, especially in systems that impact people’s benefits, housing or access to care.
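
One common pattern for “how to shut it off” is a centrally controlled kill switch: every AI decision point checks a flag and falls back to a human-handled path when the flag is off. The sketch below is generic and all names in it are hypothetical; it is not a description of any specific system, just one way to keep a model controllable and its decisions auditable.

```typescript
// Generic kill-switch sketch (all names hypothetical): each AI decision
// point checks an operator-controlled flag and falls back to a human
// process, so the model can be disabled without a redeployment.

type Decision = { outcome: string; decidedBy: "model" | "human" };

interface FlagStore {
  isEnabled(flag: string): boolean; // e.g. backed by config or a database
}

async function decideCase(
  caseId: string,
  flags: FlagStore,
  model: (id: string) => Promise<string>,
  humanQueue: (id: string) => Promise<string>,
): Promise<Decision> {
  if (!flags.isEnabled("ai-decisions")) {
    // Kill switch thrown: route the case to the human-handled path.
    return { outcome: await humanQueue(caseId), decidedBy: "human" };
  }
  const outcome = await model(caseId);
  // Record which path decided, so outcomes can be audited and challenged.
  return { outcome, decidedBy: "model" };
}
```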

Ultimately, AI’s real power isn’t speed or scale. It’s the ability to remove barriers that, until now, we’ve struggled to shift. Things like inaccessible documents, long waits for support, or hard-to-navigate systems. But it only removes those barriers if we actively design it to.

So, as we explore what AI can do in the public sector, my advice is simple. Start with the people who are most likely to be excluded. Involve them in the design. Keep things transparent and human. And never assume that just because something is “smart,” it’s better.

The future of accessible public services is built on listening, testing, and designing for the full spectrum of human experience.

