5 ways AI could transform digital accessibility

[Image: accessibility software being used to create captions for a video on screen]

Written by Joe Chidzik, Senior Accessibility and Usability Consultant, AbilityNet

From autonomous cars to conversations with robots, artificial intelligence has the potential to transform the lives of disabled people. AbilityNet’s senior accessibility and usability consultant Joe Chidzik explores some of the ways AI could make the internet more accessible.

1. AI provides language translation and captioning for people who are deaf

Microsoft offers a free service through the Microsoft Translator app that turns speech into text (for captions) and translates it into other languages. People who have English as a second language benefit, as do people who are deaf or who have hearing loss.
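To give a flavour of the technology behind this, here is a minimal Python sketch using Microsoft’s Azure Speech SDK (a related developer service, not the Translator app itself) to turn a short audio recording into an English caption and a French translation. The subscription key, region and file name are placeholders.

```python
# Sketch: captions plus translation from an audio recording using
# Microsoft's Azure Speech SDK. Key, region and file name are placeholders.
import azure.cognitiveservices.speech as speechsdk

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YOUR_SUBSCRIPTION_KEY", region="YOUR_REGION")
translation_config.speech_recognition_language = "en-GB"  # language being spoken
translation_config.add_target_language("fr")              # translate captions into French

audio_config = speechsdk.audio.AudioConfig(filename="talk.wav")
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config, audio_config=audio_config)

result = recognizer.recognize_once()  # recognise a single utterance
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Caption:", result.text)
    print("French:", result.translations["fr"])
```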

Read more about the Microsoft Translator app helping people who are deaf.

2. AI could provide automatic sign language provision

Many users of British Sign Language (BSL) – or other sign languages such as American Sign Language – learn to sign as their first language and can find it difficult to read standard written captions. Prototypes already exist that use AI to deliver automatic sign language translation, which would be a huge benefit for people who are deaf or have hearing loss.

Check out further information from Nvidia on AI and sign language.

3. AI provides automatic image recognition and alt text for people who are blind

One of the most common accessibility issues is the lack of alternative text for images, which means people who are blind or have sight loss could be missing important information. Alt text is usually added by the person creating the content, but the image recognition power of AI is already providing a solution, with Facebook, Microsoft, Google and many others offering automatic descriptions and captions. It’s not foolproof, but it does offer a useful step forward when no alt text has been entered manually.
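As a rough illustration, the sketch below uses the Google Cloud Vision API (linked below) to build a fallback description from the labels it detects in an image. The file name is a placeholder, and the result is no substitute for alt text written by a human who knows the context.

```python
# Sketch: generate rough fallback alt text from image labels using the
# Google Cloud Vision API. "photo.jpg" is a placeholder file name.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as image_file:
    image = vision.Image(content=image_file.read())

response = client.label_detection(image=image)
labels = [label.description for label in response.label_annotations]

# A very rough automatic description built from the top few labels.
alt_text = "Image may contain: " + ", ".join(labels[:5])
print(alt_text)
```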

Find out more about the Cloud Vision API, image recognition and alt text.

4. AI makes information easier to understand for those with reading difficulties

The internet contains an ever-growing amount of information, and distilling it is a challenge machine learning is beginning to tackle. Services are being developed to automatically summarise lengthy articles into short abstracts or related headlines.

If done well – and it might take machines a while to learn to do it really well – this could be useful for creating ‘easier-to-read’ content or snapshots of articles to help users with reading difficulties, or anyone who feels easily overwhelmed by information.
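As one example of what automatic summarisation looks like today, this short sketch uses the open-source Hugging Face Transformers library (not one of the specific services mentioned in this article) to produce a brief abstract of a longer piece of text.

```python
# Sketch: automatic summarisation with the open-source Hugging Face
# Transformers library, as one illustration of the technique.
from transformers import pipeline

summariser = pipeline("summarization")  # loads a default pre-trained model

article = """Paste or load the full text of a lengthy article here..."""

summary = summariser(article, max_length=60, min_length=20, do_sample=False)
print(summary[0]["summary_text"])  # a short, machine-written abstract
```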

Find out more about how AI could help us find the information we need more quickly, on Technology Review and on Machine Learning Mastery.

5. Could AI make entire websites accessible?

Whether you use Amazon Alexa, Microsoft Cortana, Apple’s Siri or Google Home, voice-based services built on the power of AI are now available in millions of devices, from phones to cars to home assistants and many more. Although much of the web is text-based, these new devices face a huge challenge in navigating websites that haven’t been designed for this new type of conversational interface.

One possibility is that machine learning will bring a new type of accessibility to websites – side-stepping or ‘fixing’ accessibility barriers so that these devices can deliver relevant answers and information. There are still lots of hurdles to jump, but it could be that accessibility in the future will be solved by robots that can navigate inaccessible websites.


 
