Artificial Intelligence is revolutionising sign language

[Image: two hands signing]

Written by Robin Christopherson, MBE, Head of Digital Inclusion at AbilityNet

Using machine learning to recognise the delicate nuances of British Sign Language (BSL) isn’t easy, but now the University of Surrey is hard at work cracking the challenge with funding from the Engineering and Physical Sciences Research Council.

BSL in a nutshell

For anyone who isn’t familiar with BSL, it’s a language in its own right, used by people who are deaf or have a hearing impairment. It comprises a complex mixture of hand gestures, facial expressions and body posture. Add to this the fact that the grammar, vocabulary and sentence structure of BSL are very different from those of spoken or written English, and it soon becomes clear that the technical challenge of using Artificial Intelligence (AI) to observe a signer and translate BSL into the written word is difficult in the extreme, let alone capturing the subtle changes in emphasis and emotion conveyed by the signer.

We’re still in the relatively early days of automatic speech recognition, with all its regional variations, accents and slang, and yet bridging the gap between the spoken and signing worlds would be even more impactful and life-changing for people with a hearing impairment and their friends and colleagues.

A collaboration to crack the signing challenge

A partnership comprising linguists from the Deafness Cognition and Language Research Centre at University College London, the Engineering Science team at the University of Oxford, and experts at the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey will use the grant of around £1m to develop AI that recognises not only hand motion and shape, but also the facial expression and body posture of the signer.
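
To give a flavour of what such a multi-modal system might look like under the bonnet, here is a minimal sketch in Python using PyTorch. It is purely illustrative rather than the project’s actual architecture: the class name, feature dimensions and layer choices are all assumptions made for this example. The idea it demonstrates is simply that separate streams of hand, face and posture features can be fused and then tracked over time.

```python
import torch
import torch.nn as nn

class MultiStreamSignEncoder(nn.Module):
    """Illustrative encoder fusing three feature streams: hand
    shape/motion, facial expression and body posture. All names and
    dimensions here are placeholder assumptions, not the project's
    published design."""

    def __init__(self, hand_dim=128, face_dim=64, pose_dim=64, hidden=256):
        super().__init__()
        # One small projection per modality, so each stream can be
        # weighted independently before fusion.
        self.hand_net = nn.Linear(hand_dim, hidden)
        self.face_net = nn.Linear(face_dim, hidden)
        self.pose_net = nn.Linear(pose_dim, hidden)
        # A recurrent layer models how the fused signal evolves over time.
        self.temporal = nn.GRU(hidden * 3, hidden, batch_first=True)

    def forward(self, hands, face, pose):
        # Each input has shape (batch, time, features) for one modality.
        fused = torch.cat([
            torch.relu(self.hand_net(hands)),
            torch.relu(self.face_net(face)),
            torch.relu(self.pose_net(pose)),
        ], dim=-1)
        out, _ = self.temporal(fused)  # (batch, time, hidden)
        return out
```

The design point is that hands, face and posture each get their own projection before fusion, mirroring the project’s aim of recognising all three signals rather than hand shape alone.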

A bright future for signing and AI

“We believe that this project will be seen as an important landmark for deaf-hearing communications – allowing the deaf community to fully participate in the digital revolution that we are all currently enjoying,” says Richard Bowden, Professor of Computer Vision at the University of Surrey.

At the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City earlier this year, Professor Bowden’s team published a paper detailing the first AI and deep learning system that can perform end-to-end translation directly from sign language to spoken language.
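
As a rough illustration of what ‘end-to-end translation’ means in practice, the sketch below shows a toy sequence-to-sequence decoder in PyTorch that turns an encoded video representation into a sentence, one word at a time. Again, this is a hedged example rather than the team’s published model: the vocabulary size, dimensions and single-GRU decoder are placeholder assumptions.

```python
import torch
import torch.nn as nn

class SignToTextDecoder(nn.Module):
    """Toy sequence-to-sequence decoder: consumes an encoded summary
    of the sign video and emits a sentence word by word. Vocabulary
    size and dimensions are illustrative placeholders."""

    def __init__(self, vocab_size=10000, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, prev_words, video_summary):
        # prev_words: (batch, time) token ids generated so far;
        # video_summary: (1, batch, hidden) encoding of the signed video.
        emb = self.embed(prev_words)
        dec, _ = self.rnn(emb, video_summary)  # condition on the video
        return self.out(dec)  # scores over the vocabulary at each step
```

In a real end-to-end system the video encoder and the text decoder are trained jointly, so the whole pipeline learns to map directly from video to words – which is what makes the translation ‘end-to-end’.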

“We are passionate about sign language at CVSSP,” says Professor Bowden, “so much so that everyone who works in this area within our lab is asked to learn how to sign.”

AI is becoming smarter by the day. It won’t be long before the processors within our smartphones are powerful enough, and the software smart enough, to do for BSL what free apps such as Seeing AI can already do in interpreting the visual and written world for blind users.

Who knows how quickly such tech will arrive for BSL users – but all the signs are good.

