Reclaiming digital spaces: Building inclusive AI through lived experience and street data

Written by Professor Lella Nouri, Associate Professor, Swansea University

A turning point for AI and society 

2025 marks a decisive moment in our relationship with artificial intelligence. Across governments, industries, and communities, we are asking not only what AI can do, but who it serves and who it might exclude.

For years, the promise of AI innovation has coexisted with a growing awareness of its blind spots: bias, lack of representation, and the replication of structural inequalities through data. The challenge now is to ensure that AI evolves as a tool for social repair, not as a reinforcement of injustice.

As an academic working at the intersection of AI, community safety, and digital inclusion, I have seen both the harm and the hope that accompany technological progress. The future depends on whether we can make AI more human-centred, transparent, and inclusive.

 

Seeing the streets differently: Turning hate into insight

Through my project StreetSnap, we are implementing image recognition to identify and analyse hateful graffiti in public spaces – turning what is often dismissed as vandalism or nuisance into real-time data about belonging and exclusion.

By combining image recognition, practitioner reporting and creative arts interventions through its sister project, Flip the Streets, StreetSnap enables local authorities and residents to respond together, replacing hate with creativity.
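To make this concrete, here is a minimal sketch of how a single street photo might be triaged with an off-the-shelf model. StreetSnap's actual pipeline is not published here, so the library choice (Hugging Face transformers), the CLIP model, the candidate labels and the review threshold below are all illustrative assumptions, not the project's real implementation:

    # Illustrative sketch only: StreetSnap's real pipeline is not public.
    # The model, candidate labels and threshold are assumptions.
    from transformers import pipeline

    # Zero-shot image classification needs no task-specific training:
    # CLIP scores an image against free-text labels.
    classifier = pipeline(
        "zero-shot-image-classification",
        model="openai/clip-vit-base-patch32",
    )

    CANDIDATE_LABELS = [
        "hateful graffiti",
        "street art or non-hateful graffiti",
        "a wall with no graffiti",
    ]

    def triage_photo(image_path: str, threshold: float = 0.5) -> dict:
        """Score one street photo and flag likely hateful graffiti
        for human review - the model assists, people decide."""
        results = classifier(image_path, candidate_labels=CANDIDATE_LABELS)
        top = results[0]  # results arrive sorted, highest score first
        return {
            "label": top["label"],
            "score": round(top["score"], 3),
            "needs_review": top["label"] == "hateful graffiti"
                            and top["score"] >= threshold,
        }

    print(triage_photo("street_photo.jpg"))  # hypothetical image file

Whatever the model says, the flag is only a prompt for human judgement: in the projects described here, responses are shaped by practitioners and residents, not by automated removal.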

This work reimagines what AI can be. Instead of a distant, data-hungry system, it becomes a lens of empathy – mapping the stories that shape our shared spaces. 

If you can’t see the data, you can’t see the problem – but if you can’t see the people behind the data, you can’t solve it.

 

Listening as data: The power of lived experience

Bias in AI is rarely just a technical flaw; it reflects the social inequalities embedded in our datasets. Communities most affected by discrimination are often the least represented in the data used to design digital systems. 

That insight inspired the work now under way on the Lived Experience Repository of Racism in Wales – a soon-to-open digital archive that gathers existing studies and testimonies from people who have experienced racism and exclusion. This platform ensures that policy, research and innovation are guided by real stories, not abstract statistics.

The lesson is simple: inclusion starts with listening. StreetSnap listens to the language of the streets; the repository listens to the language of lived reality. Both show that narratives are data too – essential for designing technology that reflects, rather than erases, human experience. 
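To illustrate what treating narratives as data can look like in practice, the sketch below stores each testimony alongside its context and answers a simple thematic query. The repository's actual schema is not described here; every field name, tag and example record is a hypothetical placeholder:

    # Hypothetical sketch: the repository's real schema is not public.
    # Field names, tags and example records are placeholders.
    from dataclasses import dataclass, field

    @dataclass
    class Testimony:
        """One lived-experience account, kept with its context."""
        text: str                 # the account, in the person's own words
        location: str             # where the experience took place
        year: int                 # when it was shared
        themes: list[str] = field(default_factory=list)  # consent-based tags

    def find_by_theme(records: list[Testimony], theme: str) -> list[Testimony]:
        """Answer a policy or research question from stories,
        not just from aggregate statistics."""
        return [r for r in records if theme in r.themes]

    archive = [
        Testimony("...", "Swansea", 2024, themes=["exclusion", "housing"]),
        Testimony("...", "Cardiff", 2023, themes=["workplace"]),
    ]
    print(len(find_by_theme(archive, "exclusion")))  # -> 1

The point of the structure is not the code but the principle: context travels with the story, and questions are answered without stripping the person out of the data.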

 

Beyond bias: Building accountability into AI

Much of the public conversation about AI ethics remains focused on mitigating bias, but we need to go further. Accountability means asking who participates, not just how the algorithm performs. 

In these projects, I work with artists, young people, community safety teams and policymakers to co-design how data is gathered and interpreted. When diverse groups are part of the design process, the outcomes are not just more ethical – they are more trusted, relevant and resilient. 

True AI accountability cannot be achieved through audits alone. It depends on shared ownership, where communities shape the tools that shape their lives and experiences.

 

Digital inclusion as democratic infrastructure

Digital inclusion is often described as an access issue – broadband, devices, or skills – but it is also a democratic issue. If participation in digital systems determines access to services, safety and opportunity, then inclusion is foundational to social justice. 

Projects like StreetSnap and the repository demonstrate how inclusive AI begins with inclusive storytelling. By valuing local knowledge, creativity and lived experience, we are not only collecting better data – we are redefining whose experiences matter in the digital public sphere.

This approach echoes the broader shift in responsible AI practice: from designing systems for people to designing them with people. 

 

The opportunity ahead: AI for human connection

The next phase of AI development must move beyond efficiency toward empathy. To build AI for good, we must reimagine what ‘good’ looks like in practice.

That means:

  • Embedding ethical reflexivity into every AI project — continually asking who benefits, who is visible, and who is missing.
  • Merging data and creativity, making social issues visible and actionable through digital storytelling.
  • Empowering communities to co-create and govern the technologies that impact them.
  • Bridging sectors — academia, policy, tech, and the arts — to ensure innovation serves the public good.

AI can, and should, help us see ourselves more clearly — not just predict outcomes, but reflect our values.

 

Key takeaways for leaders

  • Prioritise representation: Build datasets that capture the diversity of lived experience, not just convenience or scale (a minimal representation check is sketched after this list).
  • Design with empathy: Invite communities into the design process to ensure AI reflects the realities it seeks to address.
  • Invest in digital inclusion as social infrastructure: Equity in data access and participation is essential for ethical innovation.
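As a minimal illustration of the first takeaway above, the sketch below reports how records are distributed across a self-described demographic column in a simple tabular dataset. The file name and column name are hypothetical, and a real audit would need far more care around consent and category design:

    # Hypothetical sketch of a first-pass representation check.
    # The file and column names are placeholders, not a real dataset.
    import pandas as pd

    def representation_report(df: pd.DataFrame, group_col: str) -> pd.Series:
        """Share of records per group: a coarse signal of who the
        dataset sees least, before any model is trained on it."""
        return df[group_col].value_counts(normalize=True).sort_values()

    df = pd.read_csv("community_reports.csv")
    print(representation_report(df, "self_described_ethnicity"))
    # Groups at the top of this ascending list are the least represented:
    # a prompt for targeted, consent-based collection, not an afterthought.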

Looking forward

Recognition in the Digital Leaders AI 100 list reflects not only technological progress but also a growing movement toward ethical, community-driven innovation.

As we look ahead, the most transformative AI systems will not be those that think like humans, but those that help humans think more compassionately about one another.

