Education is the key to AI Safety

Written by Sidrah Hassan, AI Ethics Consultant, AND Digital

Do we understand what we are building?

Artificial Intelligence (AI) has dominated conversations in recent years and continues to do so. The World Economic Forum reports that 86% of employers expect AI to be a leading driver of business transformation. Alongside this, we have seen significant AI legislation and investment from major Western governments, including the EU, UK, and US. It feels hard to escape the AI bubble, which has become all-encompassing, reaching into tech, politics, and sectors far beyond. Yet amidst the hype, the most crucial piece of the puzzle has fallen by the wayside: education.

While the EU AI Act makes it compulsory to provide appropriate upskilling to employees involved in AI systems, this initiative has gained little traction elsewhere. I often encounter stakeholders who can hold a conversation about AI but lack a deeper understanding of how to build and harness the technology safely. This should concern us all. Organisations are eager to capitalise on the economic benefits of AI, yet many do not fully grasp how to develop these systems responsibly or how they affect society and the environment. Are we, as a society, convinced that the organisations building AI systems that significantly affect our lives are suitably knowledgeable in creating safe and responsible AI?

Lack of AI literacy contributes to catastrophe

We have already witnessed the catastrophic consequences of seemingly innocent AI products. Character.AI, a chatbot platform whose bots can be customised into various characters, lacked safeguards, a failure linked to the tragic suicide of a teenage boy. This incident, among others, highlights the urgent need for AI literacy, particularly ethical AI literacy, within the organisations creating these technologies. AI is relatively nascent in the commercial field, having only recently moved from labs and universities to Silicon Valley and beyond. Consequently, I am not convinced that CEOs, product leaders, engineers, or designers are fully aware of the catastrophic risks posed by unaligned AI systems, or of how to mitigate them. Education is the key to AI safety, and the crucial foundation that separates short-term technology hype from long-term success.

Many organisations trying to capitalise on the AI bandwagon are repurposing existing governance structures, policies, and best practices that may not be applicable to AI development. Artificially intelligent systems differ significantly from traditional software such as websites and apps. AI relies on ingesting large amounts of often poor-quality data, producing responses that can exacerbate bias, amplify inequalities, and hallucinate. A 2019 study by the US government found that facial recognition systems were between 10 and 100 times more likely to misidentify Black individuals than white individuals. This discrepancy was evident in the case of Robert Williams, a man from Detroit wrongfully arrested in 2020. Williams was accused of stealing $30,000 worth of luxury watches, despite being innocent. His arrest stemmed from an AI facial recognition system that wrongly matched his image with CCTV footage. This case exemplifies what happens when AI systems are built without the necessary safeguards and used without adequate education on their limitations.

What can the future look like?

The current discourse on AI safety focuses heavily on technical alignment, research, and policy, often neglecting education. This narrow approach fundamentally weakens the cause of AI safety, creating a chasm of AI literacy between frontier labs and the product teams responsible for building these systems. I urge businesses developing or using AI systems, especially those outside the scope of EU regulation, to prioritise AI upskilling for their employees. AI literacy will vary by organisation, but some commonalities exist, and it is essential for businesses to assess what approach fits their context and values. Organisations can draw inspiration from a plethora of freely available resources online, including Udemy courses, YouTube tutorials, and LinkedIn Learning. More bespoke training can also be developed in collaboration with subject matter experts to create tailored AI literacy pathways, delivered through a mix of online, in-person, or hackathon-style sessions. Understanding and implementing AI literacy not only enables teams to build safer AI systems but also prepares them for the future of work. To quote Giovanola and Tiribelli, we must empower our teams to “turn AI systems into weapons of moral construction rather than weapons of mass destruction.”
