A turning point for AI and society
2025 marks a decisive moment in our relationship with artificial intelligence. Across governments, industries, and communities, we are asking not only what AI can do, but who it serves and who it might exclude.
For years, the promise of AI innovation has coexisted with a growing awareness of its blind spots: bias, lack of representation, and the replication of structural inequalities through data. The challenge now is to ensure that AI evolves into a tool for social repair rather than a reinforcement of injustice.
As an academic working on the intersections of AI, community safety, and digital inclusion, I have seen both the harm and the hope that accompany technological progress. The future depends on whether we can make AI more human-centred, transparent, and inclusive.
Through my project StreetSnap, we are implementing image recognition to identify and analyse hateful graffiti in public spaces – turning what is often dismissed as vandalism and nuisance into real-time data about belonging and exclusion.
By combining image recognition, practitioner reporting and creative arts interventions through its sister project, Flip the Streets, StreetSnap enables local authorities and residents to respond together, replacing hate with creativity.
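To make that pipeline concrete, here is a minimal sketch in Python of the kind of classification step a system like StreetSnap could rest on. The two-class label set, the stand-in model, and the report fields are illustrative assumptions, not the project's actual implementation.

```python
# Illustrative sketch only: classify a photograph of graffiti and
# package the result as a geotagged report for a local authority.
from dataclasses import dataclass
from datetime import datetime, timezone

import torch
from PIL import Image
from torchvision import models, transforms

LABELS = ["neutral", "hateful"]  # hypothetical label set

# Standard ImageNet-style preprocessing for a fine-tuned CNN.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@dataclass
class GraffitiReport:
    """One sighting, ready to share with community safety teams."""
    label: str
    confidence: float
    latitude: float
    longitude: float
    observed_at: str

def classify_photo(model: torch.nn.Module, path: str,
                   lat: float, lon: float) -> GraffitiReport:
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1).squeeze(0)
    idx = int(probs.argmax())
    return GraffitiReport(
        label=LABELS[idx],
        confidence=float(probs[idx]),
        latitude=lat,
        longitude=lon,
        observed_at=datetime.now(timezone.utc).isoformat(),
    )

# A stand-in two-class model with random weights; in practice this
# would be fine-tuned on labelled graffiti photographs.
model = models.resnet18(weights=None, num_classes=len(LABELS)).eval()
```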
This work reimagines what AI can be. Instead of a distant, data-hungry system, it becomes a lens of empathy – mapping the stories that shape our shared spaces.
If you can’t see the data, you can’t see the problem – but if you can’t see the people behind the data, you can’t solve it.
Bias in AI is rarely just a technical flaw; it reflects the social inequalities embedded in our datasets. Communities most affected by discrimination are often the least represented in the data used to design digital systems.
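One way to make that under-representation visible is a simple representation audit: compare each group's share of a dataset against its share of the wider population. The sketch below uses invented group names and figures purely to illustrate the idea.

```python
# Hedged sketch of a representation audit; groups and numbers are
# invented for illustration, not drawn from any real dataset.
from collections import Counter

def representation_ratios(records, baseline):
    """Ratio of each group's share of the data to its population
    share; values well below 1.0 flag under-representation."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return {
        group: (counts.get(group, 0) / total) / share
        for group, share in baseline.items()
    }

# A dataset where one community is under-sampled relative to an
# assumed population baseline.
records = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
baseline = {"A": 0.75, "B": 0.25}
print(representation_ratios(records, baseline))
# {'A': 1.2, 'B': 0.4}  -> group B appears at 40% of its expected rate
```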
That insight inspired the Lived Experience Repository of Racism in Wales – a digital archive, soon to be openly accessible, that gathers existing studies and testimonies from people who have experienced racism and exclusion. This platform ensures that policy, research and innovation are guided by real stories, not abstract statistics.
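The sketch below imagines what a single record in such an archive might hold; every field, especially the consent metadata, is an assumption for illustration rather than the repository's actual schema.

```python
# Hypothetical record structure for a lived-experience archive,
# pairing each account with provenance and explicit consent.
from dataclasses import dataclass, field

@dataclass
class Testimony:
    """One lived-experience account with provenance and consent."""
    summary: str              # the account, in the contributor's words
    source_type: str          # e.g. "interview" or "published study"
    region: str               # where the experience took place
    consent_to_publish: bool  # explicit, revocable consent
    themes: list[str] = field(default_factory=list)

record = Testimony(
    summary="Placeholder text, not a real testimony.",
    source_type="interview",
    region="Wales",
    consent_to_publish=True,
    themes=["housing", "exclusion"],
)
```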
The lesson is simple: inclusion starts with listening. StreetSnap listens to the language of the streets; the repository listens to the language of lived reality. Both show that narratives are data too – essential for designing technology that reflects, rather than erases, human experience.
Much of the public conversation about AI ethics remains focused on mitigating bias, but we need to go further. Accountability means asking who participates, not just how the algorithm performs.
In these projects, I work with artists, young people, community safety teams and policymakers to co-design how data is gathered and interpreted. When diverse groups are part of the design process, the outcomes are not just more ethical – they are more trusted, relevant and resilient.
True AI accountability cannot be achieved through audits alone. It depends on shared ownership, where communities shape the tools that shape their lives and experiences.
Digital inclusion is often described as an access issue – broadband, devices, or skills – but it is also a democratic issue. If participation in digital systems determines access to services, safety and opportunity, then inclusion is foundational to social justice.
Projects like StreetSnap and the repository demonstrate how inclusive AI begins with inclusive storytelling. By valuing local knowledge, creativity and lived experience, we are not only collecting better data – we are redefining whose experiences matter in the digital public sphere.
This approach echoes the broader shift in responsible AI practice: from designing systems for people to designing them with people.
The next phase of AI development must move beyond efficiency toward empathy. To build AI for good, we must reimagine what ‘good’ looks like in practice.
That means designing systems with communities rather than just for them, treating narratives and lived experience as data, and measuring success by trust and belonging rather than efficiency alone.
AI can, and should, help us see ourselves more clearly — not just predict outcomes, but reflect our values.
Key takeaways for leaders
Recognition in the Digital Leaders AI 100 list reflects not only technological progress but a growing movement toward ethical, community-driven innovation.
As we look ahead, the most transformative AI systems will not be those that think like humans, but those that help humans think more compassionately about one another.