The launch of ChatGPT in November 2022 fundamentally changed how we work, learn and even plan our holidays. Generative AI (GenAI) offers significant promise: it can streamline tasks such as drafting reports, conducting research and synthesising data, and it is now increasingly embedded into everyday operations. The AI landscape is evolving, but access to better AI tools is not enough: we also need critical AI literacy.
What is critical AI literacy?
AI literacy is foundational for understanding, using and evaluating AI tools. It provides insights into how AI works, why it produces certain outputs, its limitations, and key terminology. Critical AI literacy builds on this by going beyond a technical understanding of AI to develop a deeper analysis of its societal impact, biases, ethical implications and power dynamics.
It challenges us to ask important questions about AI's societal impact, its biases and who holds power over these systems.
We need to understand not only how AI tools work, but also how to develop frameworks and guidelines that ensure AI is used ethically and responsibly, and that it promotes values such as fairness, inclusion and trust.
Why does it matter?
AI tools have the potential to transform how we work and make decisions. They may bring efficiencies and increase productivity, but there are also risks, such as misinformation, hallucinations, overreliance, de-skilling, privacy concerns, bias and increased digital exclusion. We need to equip people with critical AI literacy to support responsible use and governance, ensuring that tools are not misused or adopted in ways that cause harm or have unintended consequences.
Critical AI literacy is essential for the responsible use, evaluation and governance of AI.
Building critical AI literacy
Every organisation should embed critical AI literacy training across its workforce. More broadly, if we want AI for the public good, we should also have a public AI literacy programme available to everyone, one that does not just focus on how to use AI but also incorporates critical thinking and ethics.
A call to action
Just as we recognise the importance of digital literacy, critical AI literacy is now fundamental if organisations are to take advantage of the opportunities offered by AI while ensuring that it is adopted ethically and responsibly.
There are AI skills gaps across many sectors. Training is critical, and many organisations are assessing training needs and delivering high-quality AI training, which is essential for AI adoption. But not every organisation has access to quality free training. To address this gap, academics from The Open University and the University of Lincoln, funded by UKRI Responsible AI, have collaborated with Citizens Advice to develop eight beginners' courses that support knowledge and understanding of AI. The courses are free and hosted on The Open University's OpenLearn platform. They provide learners with an understanding of what AI is and how it works, alongside practical skills such as prompting and evaluating outputs. The courses also address the ethical and responsible use of AI, AI governance (including developing an AI strategy and policies), navigating risks and understanding how AI is legally regulated.
The courses are engaging, with case studies, videos, quizzes and activities. On completion of each course, learners receive a digital badge and a certificate of participation, which they can use to evidence their continuing professional development and add to their LinkedIn profile. The courses also carry a Creative Commons licence, so organisations can reuse them as part of their own training.
It is essential that organisations develop strategies that support AI skills development, but the focus should not be limited to technical skills: critical AI literacy builds the critical thinking and evaluation skills needed alongside them. This should be an ongoing priority, reflecting how AI is developing and re-shaping skills, so that organisations maximise the benefits brought by new technologies.