Choosing Our 2035: Why AI Leadership Has To Start With Purpose And Empathy

Written by Bev Jones, Head of Marketing and Events, Digital Leaders

When I hosted “Shaping 2035: How purpose and empathy drive the AI we want” at AI Week 2025, I didn’t leave thinking about technology. I left thinking about the kind of society we are shaping through the choices we make now.

AI is often described as inevitable. What I was reminded of instead is that AI reflects what we prioritise, reward, and overlook. The question is not whether AI will reshape our organisations. It already is. The question is whether that change will make life feel more human or more constrained.


Two futures, one choice

What stayed with me most was the contrast between two plausible versions of 2035.

In the first, AI sits quietly in the background. It removes friction, anticipates need, translates across borders, and gives people back time. Teachers teach without administrative overload. Carers focus on care rather than coordination. Public services become more responsive and more equitable because delays and data barriers fall away.

In the second future, everything appears efficient but feels diminished. AI becomes the operating model rather than the tool. Decisions are made inside systems people cannot understand or challenge. Past biases harden into tomorrow’s rules. Organisations move too fast out of fear of falling behind rather than clarity about what they value.

The difference between these futures is not the technology. It is leadership. It is whether leaders choose direction before speed, and responsibility before convenience.


AI is not failing us. Leadership sometimes is.

We tend to talk about AI failures as if systems behave independently. What I heard instead was a pattern of leadership failures.

When algorithms reward outrage to maximise engagement, or when risk scores reflect historic prejudice, these outcomes stem from unexamined goals and weak oversight. AI does not invent values. It mirrors ours. It scales our assumptions, incentives, and blind spots.

That can feel uncomfortable, but it is also liberating. If leadership choices create the conditions for failure, they also create the conditions for better outcomes.

AI leadership is now a form of civic leadership. This requires leaders to think not only about commercial benefit but also about societal impact.


Purpose as the anchor

One question cut through the noise for me: why are you bringing AI into your organisation at all?

Too many organisations still start with the tool rather than the purpose, and end up with scattered pilots and modest returns. What I took from the session is that AI strategy must follow organisational purpose. If your mission is to improve citizen outcomes, where exactly can AI reduce delay or expand access? If you serve customers, how can AI elevate people rather than automate judgment?

Ethical goals belong alongside operational goals. Efficiency alone is not enough. Inclusion, wellbeing, and contribution matter too. These priorities will shape 2035 more than any technical choice made in 2025.

A question I now ask often is whether our AI ambition serves our purpose or whether our purpose is starting to bend around the technology.


Governance as a practice of trust

Another shift for me was in how we think about governance. AI does not fit comfortably within older frameworks. Traditional governance assumes static systems. AI learns, adapts, and behaves differently at scale.

Organisations need oversight that spans all AI activity, not only the headline projects. They need mixed governance boards that include technologists, ethicists, legal specialists, operational leaders, and people who understand the lived experience of users.

Transparency must be built in from the start. People deserve to know when AI is being used, how decisions are reached, and how outcomes can be challenged.

Good governance cannot be static. As systems evolve, so must the questions we ask of them. Governance in this space becomes a continuous practice of humility and accountability.


Culture, literacy, and courage

No strategy succeeds without a culture ready to support it. Culture shapes how people respond to new tools and how they interpret change. AI exposes this truth quickly.

I was struck by how personal AI literacy is. People engage from different starting points. Some feel energised, others cautious, many unsure. Leaders need to create space for honest conversations about how roles will change and what support will be offered.

Critical thinking and ethical reasoning are becoming essential skills. So is courage. Courage to ask who benefits, who might be left behind, and whether a solution aligns with organisational values.

What resonated deeply was the idea that AI leadership now carries a civic dimension. Leaders are not only shaping business outcomes. They are influencing the future social contract around work, trust, and opportunity.


Building the 2035 we actually want

As I reflect on the session, one conclusion stands out. AI maturity is not measured by how much technology we deploy or how fast we scale. It is measured by how responsibly and purposefully we align AI with human intent.

The decisions we make now will shape how people live and work for years to come. That carries weight, but it also offers immense possibility.

If we lead with purpose, empathy, and clarity, AI can help create a future where technology quietly does the heavy lifting so people can focus on connection, dignity, and opportunity.

Watch the full conversation with Becky Davis, Director of AI at Sopra Steria Next, here: https://aiweek.digileaders.com/talks/shaping-2035-how-purpose-and-empathy-drive-the-ai-we-want/

