AI is reshaping how we work, learn, and live. As governments from London to Seoul debate AI safety and cooperation, the biggest opportunity isn’t a new model or tool – it’s who gets to shape it.
For over a decade, my work has focused on one core idea: AI will only serve society well when the next generation is equipped not just to use it, but to question it, improve it and apply it ethically. Building a responsible AI future requires embedding education, ethics, and inclusion into every layer of national AI strategy – because tomorrow’s innovators are already here.
AI is often described as a tool for efficiency or productivity, but at its heart, it reflects human values. The data we use, the assumptions we make, and the people we include (or exclude) determine whether technology strengthens or fractures our society.
At Teens in AI, we run programmes for 12–18-year-olds in over 100 countries. In 2024, 80% came from diverse ethnic backgrounds, 69% attended state schools and 57% identified as girls. These numbers show the diversity of voices shaping future-facing AI.
By contrast, in the UK, women make up just 22% of the AI workforce, and people from minority ethnic backgrounds hold only 15% of tech roles.
As we come to the end of 2025 and move into 2026, a key part of our strategy is to ‘teach the teachers’. We provide educators with ready-to-use materials and resources so they can introduce AI concepts confidently, even without prior experience. By equipping teachers as facilitators and learners, we help schools embed AI literacy sustainably, ensuring understanding spreads far beyond any single programme.
Behind every AI use case lie decisions: who builds it, whose assumptions shape it, and whose voices are absent. When we start AI literacy early, we embed ethical reasoning, critical thinking, empathy and inclusion into the design process – and that matters for fairness, accountability and social trust.
Governments worldwide are grappling with how to build AI capacity responsibly. In the UK, the Department for Science, Innovation and Technology has acknowledged that addressing the AI skills gap is critical to meeting the ambitions set out in the National AI Strategy. Yet progress remains uneven, with most initiatives focusing on adult reskilling or postgraduate study rather than early education. This risks leaving a generation behind.
The World Economic Forum’s Future of Jobs Report 2020 projected that, by 2025, 85 million jobs could be displaced globally while 97 million new roles emerge – demanding creativity, ethics, and digital fluency developed long before university.
I regularly contribute to government discussions on AI education policy. Most recently, at the UKAI Roundtable at Parliament, I joined policymakers to emphasise the urgent need to equip young people with the knowledge and confidence to thrive in an AI-driven world. There was a shared understanding that to future-proof our economy, we must start with schools – but this requires long-term investment and collaboration across government, industry, and civil society.
On 5 November 2025, the UK Government published its response to the Curriculum and Assessment Review: Building a World-Class Curriculum for All – a long-awaited step towards embedding AI literacy and digital competence across the national curriculum. The response signals an important shift in recognising that AI is not just a technology issue but an education one. I welcome this direction. It’s been a long time coming.
For years, many in education and technology have called for a curriculum that keeps pace with social and technological change. The Review’s emphasis on AI, digital literacy, and critical thinking reflects a growing consensus that early AI education is essential not only for future employability but for civic understanding and ethical reasoning. As these recommendations move from policy to practice, collaboration between government, educators, and social innovators will be vital to ensure that every young person benefits from this shift.
For the past decade, we have shown how equipping young people early with ethical and practical understanding of tech and AI can inspire innovation and social good. The government’s commitment to creating an agile curriculum and embedding technology across all subjects aligns closely with what we see daily in classrooms worldwide: when young people understand AI’s real-world context, they develop curiosity, empathy and confidence to lead responsibly in the future workforce.
The UK Government’s White Paper on AI Regulation proposes a pro-innovation model, where regulators apply five cross-sector principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. While legislation is not yet fully in place, this framework underscores why early education in AI literacy and ethics matters. Globally, UNESCO’s AI competency frameworks for students and teachers are setting a new benchmark for how countries embed AI understanding into national curricula.
If AI is to serve the people, not just profit, we must reimagine what progress looks like. For governments, this means treating AI literacy as a civic skill – as essential as reading or numeracy – and ensuring every young person, regardless of background, has the chance to participate.
The private sector has a unique opportunity to drive social impact while addressing its own talent shortages. The UK Government’s AI Opportunities Action Plan highlights that demand for AI skills far outpaces supply, with 60% of businesses citing recruitment as their biggest barrier to AI adoption.
This year’s wave of layoffs reveals a short-sighted pattern. As organisations replace junior roles with AI systems, they may gain short-term productivity but lose long-term capacity to grow human expertise. Every algorithm trained today still depends on people who can ask better questions tomorrow. Forward-thinking companies recognise that sustaining innovation means investing in young minds, not eliminating them.
This is where partnership matters. We collaborate with global organisations including Sage, Capgemini and Red Hat to strengthen AI education while advancing social good. Together, we co-design real-world challenges that equip young people with the skills, ethics and confidence to build responsible AI solutions. For our partners, this is not philanthropy – it is strategic investment in a diverse, future-ready workforce that reflects the values and needs of a fairer digital economy.
When organisations share mentorship and technical expertise, young people learn not just how to build with AI, but why to build responsibly. In turn, businesses gain imaginative, ethical and globally aware perspectives from the next generation.
AI will continue to evolve faster than most institutions can regulate it. The only scalable safeguard is education – for students, teachers, parents, policymakers, and employers. We must move from ‘awareness’ to ‘agency’.
That is why our focus is not just on teaching coding. We integrate the United Nations Sustainable Development Goals into every programme, fostering a culture of ethical inquiry and social responsibility. When young people understand bias, fairness and sustainability, they carry these principles into every innovation they create. Our alumni now lead university projects, apprenticeships and careers in AI ethics, data science and policy – proof that early investment builds lifelong impact.
The choices we make today about who gets to learn, create and lead with AI will define the values that underpin our future. As we look ahead to 2026, our ambition is to make AI literacy as universal as literacy itself – accessible across subjects, languages and cultures. If we want technology that reflects the best of humanity, we must invest in the people who will build it. The time to act is now – not when the gap has already widened, but while we still have the chance to shape it together.