Most AI systems are still designed with ideal conditions in mind. They assume fast connections, English-speaking users, and a level of familiarity with digital tools that allows people to navigate interfaces and structure queries in ways systems can easily interpret. From a technical perspective, that approach is understandable. It simplifies design, testing and optimisation.
But it also embeds a quiet assumption: that users will adapt to the system. In practice, that assumption is where many systems begin to fail.
In the environments where access to information matters most, connectivity is often inconsistent, devices are mobile, and language is far from uniform. Digital confidence varies, and the time available to search, interpret and act is usually limited. Decisions are practical, immediate and shaped by context. In those situations, even small amounts of friction are amplified.
What emerges is not simply a technical gap, but a structural one – and one that AI, if applied uncritically, can reinforce rather than resolve.
There is a growing tendency to position AI as something that can be layered onto existing systems to improve them. In some cases, that holds. In others, it simply accelerates the same underlying problems.
Search is a useful example.
Traditional keyword-based search relies on users phrasing their queries in ways that align with how content has been indexed. It assumes a shared language, predictable structure and a certain familiarity with how systems behave. When those conditions are present, it works well enough. When they are not, the experience quickly becomes fragile. Small differences in phrasing can produce entirely different results, relevant content can remain hidden, and users often end up repeating searches or abandoning the process altogether.
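The fragility described above is easy to reproduce. The sketch below is a deliberately minimal, hypothetical illustration of exact keyword matching (the document titles and queries are invented, not drawn from any real platform): a query that echoes the indexed wording succeeds, while a near-synonymous phrasing of the same need returns nothing.

```python
def keyword_search(query, documents):
    """Return documents containing every token of the query (exact match)."""
    tokens = set(query.lower().split())
    return [doc for doc in documents if tokens <= set(doc.lower().split())]

# Hypothetical indexed content, for illustration only.
docs = [
    "controlling pests in maize fields",
    "natural pest control for vegetable crops",
]

# Phrasing that happens to align with the indexed wording finds a match...
print(keyword_search("pest control", docs))
# -> ['natural pest control for vegetable crops']

# ...while a different phrasing of the same intent finds nothing at all.
print(keyword_search("managing insects", docs))
# -> []
```

Note that even "pest control" misses the first document, because it is indexed as "controlling pests": surface form, not meaning, decides the result.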
Introducing AI into that environment does not automatically address those issues. If the underlying assumptions remain unchanged, the system may appear more sophisticated while still being misaligned with how people actually search and learn.
Recent work on a multilingual knowledge platform brought this into sharper focus, particularly in the context of Access Agriculture and the Ask Agi project.
The platform itself contained a substantial and growing body of practical content, translated into more than a hundred languages and used across multiple regions. The knowledge was relevant, grounded and designed to support real-world decisions. Yet discovery remained inconsistent.
Search behaved literally. It matched keywords rather than interpreting intent, which meant that results were highly dependent on how closely a user’s phrasing aligned with indexed terms. For users searching in their own language, or accessing the platform through mobile devices with limited connectivity, this introduced a level of friction that felt disproportionate to the task.
Over time, it became clear that the issue was not the availability of knowledge, but the distance between a question and a usable answer.
Addressing that gap required a shift in perspective. Rather than designing for ideal conditions and adapting later, the focus moved towards designing for constraints from the outset.
Language could no longer be treated as a simple translation layer. Preserving intent across languages became central, which required thinking about how meaning is represented and retrieved, not just how words are converted.
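One common way to represent and retrieve meaning across languages, sketched below, is to map text from any language into a shared vector space and retrieve by similarity rather than by shared words. This is a toy illustration under loose assumptions, not the platform's actual implementation: the 3-dimensional vectors are invented stand-ins for what a multilingual embedding model would produce (real embeddings have hundreds of dimensions).

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: closer to 1.0 means closer in meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings of indexed documents (invented for illustration).
doc_vectors = {
    "how to store maize after harvest": [0.9, 0.1, 0.2],
    "raising chickens for egg production": [0.1, 0.8, 0.3],
}

# A query asked in another language would, via a multilingual model,
# land near the document that means the same thing.
query_vector = [0.85, 0.15, 0.25]

best = max(doc_vectors, key=lambda d: cosine(query_vector, doc_vectors[d]))
print(best)  # the maize-storage document, despite sharing no surface words
```

The design point is that translation happens implicitly in the representation: the query never needs to contain the indexed keywords, in any language, to reach the right content.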
Connectivity, similarly, could not be deferred to performance optimisation. It needed to be considered as part of accessibility itself. Responses had to be concise, relevant and efficient to deliver, reducing both load times and the need for repeated searches.
Perhaps most importantly, the interaction model had to change. Instead of expecting users to adapt to a search interface, the system needed to respond to natural questions, allowing people to express what they needed in their own terms and receive guidance that was clear, structured and immediately usable.
Taken together, these shifts do not add complexity so much as they reposition the system around the realities of its users.
One of the more telling outcomes of this shift was not the introduction of a new feature, but a change in behaviour.
When users were able to ask questions naturally and receive relevant, concise responses, they did not simply complete a task and leave. They returned more frequently, explored a broader range of content and engaged more deeply with the material available to them.
This is where the impact of AI becomes more tangible. When systems align more closely with how people think, communicate and make decisions, the value of the underlying content increases. Discovery becomes more intuitive, but also more meaningful. The system begins to act less as a gateway and more as a guide.
That kind of shift is difficult to achieve through incremental improvements alone. It requires questioning the assumptions that shape the experience in the first place.
This raises a broader question about how AI is being applied more generally.
Much of the current conversation focuses on capability – larger models, faster responses, more sophisticated outputs – with too little attention to how these systems are applied in real-world contexts. These developments are important, but they do not automatically translate into meaningful impact if the systems they are embedded within are not designed for the environments in which they are used.
Designing for the centre tends to produce systems that perform well under ideal conditions. Designing for the edges produces systems that are more resilient, more inclusive and, ultimately, more useful.
If an AI system cannot operate effectively in low-bandwidth contexts, across multiple languages and for users with varying levels of digital confidence, then its usefulness is inherently limited, regardless of how advanced the underlying model may be.
This is not about lowering ambition. It is about directing it more precisely.

A Different Starting Point
The opportunity for AI lies not only in expanding what systems can do, but in reducing the effort required to use them.
That shift begins with a different starting point. Instead of asking what new capabilities a technology might enable, it is worth asking where friction currently exists, and how it might be removed.
When that becomes the focus, design decisions begin to change. Architecture becomes more intentional, priorities become clearer, and the resulting systems often become simpler rather than more complex.
Designing for the edges does not constrain innovation. It sharpens it.