AI and digital inclusion

Written by Cassia Jefferson, Digital Poverty Alliance

Today, it is nearly impossible to avoid news, debates or research discussing the development and impacts of AI. We at the Digital Poverty Alliance are keen to emphasise that developments in AI must come hand in hand with consideration of how these technologies will affect marginalised communities, particularly those experiencing digital exclusion. This article, in line with our mission, will discuss some of the potential benefits and drawbacks of the rise of artificial intelligence-based technologies.

 

But what is artificial intelligence? 

While we have seen a flurry of information and opinion surrounding the rise of AI and its potential effects on society, such news items often do not define or explain what artificial intelligence is. AI can be broken down into a series of subcategories, but this article will focus primarily on generative AI. In Carsten Jung and Bhargav Srinivasa Desikan's paper 'Transformed by AI: How generative artificial intelligence could affect work in the UK', published by the IPPR at the end of March, generative AI is defined as 'new computer software that can read and create text, software code and data. Cutting edge models have even shown ability to reason and apply abstract concepts in a range of disciplines, often at undergraduate level'. In effect, generative AI has the capacity to create content; the most famous example is ChatGPT. This can be contrasted with discriminative AI, which does not create new output but can categorise things based on pre-existing data: if you can unlock your phone using facial recognition, for example, that is a discriminative AI model at work.

This discussion points to one of the potential issues with artificial intelligence: only a limited number of people understand what it is, let alone how it works. A lack of public understanding of AI can make people mistrust these new technologies, leaving them less likely to engage with them and reap their benefits. As Yasmin Ibison asked in a recent webinar with the DPA and Digital Leaders, 'Unlocking Potential, Navigating Pitfalls: Digital Poverty and AI's Dual Impact': 'How do we build trust in AI systems so few of us can understand?'. If the public mistrusts AI, they are less likely to understand and engage with further decisions made about AI development.

 

Diversity and inclusion

A recent literature review assessing the interrelation between AI and diversity and inclusion states that 'diversity and inclusion (D&I) considerations are significantly neglected in AI systems design, development, and deployment.' This is partly a result of broader, pre-existing issues of digital exclusion. Those who are already unable to access digital services and online platforms have a smaller online footprint and are therefore often under-represented in the datasets used to train AI technologies. This means they are potentially more likely to be excluded or discriminated against in processes such as job applications, where companies are increasingly using AI technologies to sort through candidates. Equally, if AI is used to inform public policy, those who are most vulnerable may not be represented in these decision-making processes and may suffer as a result.

However, some of these challenges could be addressed relatively easily. Shams, Zowghi and Bano's review also noted that 'various challenges could be tackled immediately with regard to diversity and inclusion in AI. For instance, including the perspectives of marginalised communities, such as individuals with disabilities and the elderly, in the development process, can support more representation in the training data'. The main takeaway, then, is that AI development needs to come hand in hand with broadening the pool of people involved in discussions about AI.

 

Education 

Students and parents are often divided in their opinions on the use of generative AI in education. According to a recent Internet Matters report, 41% of children believe AI will be beneficial for education, but only 29% of parents think the same. Children from higher-income backgrounds are more likely to have used ChatGPT: 45% of children in households with an income of over £80,000 have used it, compared with 11% of children in households with incomes under £10,000. This is connected to digital inclusion more generally: children from lower-income households who are digitally excluded may have less access to AI technologies. Unless digital inclusion is prioritised, AI may in fact widen the digital divide. Equally, as the government has not provided clear guidelines on the use of AI in education, schools are taking different approaches to how they integrate AI and AI education (including discussion of its potential harms) into the classroom. Clearer national guidance is needed on these matters.

 

The job market 

The impact AI will have on the job market remains unclear, as much will depend on how AI develops and what policies are put in place to regulate it. As companies incorporate AI into their operations, back-office roles may be most at risk of being replaced in the earlier stages of adoption. This would potentially affect women more than men, as women are more likely to occupy these positions, such as receptionist and secretarial roles (IPPR: Transformed by AI).

However, as with so much about AI, the extent of its effect on the job market is unclear. Much will depend on government regulation, which will shape whether AI is used to augment work and labour output or is allowed to displace workers without boosting economic output (IPPR: Transformed by AI).

In all these cases, it is essential that the government implements policy on AI development that incorporates not only the voices of those developing these technologies but also those of the wider public, particularly people who may already be digitally excluded. Addressing broader issues of digital exclusion and social inequality will help mitigate the risk that AI developments leave behind those who are already excluded.

