Big Tech seeks to improve speech recognition tools for people with disabilities

Big tech companies are using machine learning to improve voice recognition. Copyright Unsplash
By Luke Hurst

Tech giants including Apple and Google are working with university researchers to improve voice recognition tools for people with disabilities.


Major tech companies are teaming up with a university to develop voice recognition technology that better recognises speech patterns often associated with disabilities.

Amazon, Apple, Google, Meta, and Microsoft are working with the University of Illinois Urbana-Champaign (UIUC) on its Speech Accessibility Project, which aims to make voice recognition more inclusive.

Many current voice recognition systems, such as voice assistants and translation tools, struggle to recognise the speech of people with certain speech patterns, including those whose speech is affected by amyotrophic lateral sclerosis (ALS), Parkinson’s disease, cerebral palsy, and Down syndrome.

This leaves some people unable to use speech recognition systems effectively.

The Speech Accessibility Project aims to change that by harnessing the resources of Big Tech to develop a solution with the help of artificial intelligence (AI) and machine learning.

“The option to communicate and operate devices with speech is crucial for anyone interacting with technology or the digital economy today,” said Mark Hasegawa-Johnson, the UIUC professor of electrical and computer engineering leading the project.

“Speech interfaces should be available to everybody, and that includes people with disabilities. This task has been difficult because it requires a lot of infrastructure, ideally the kind that can be supported by leading technology companies, so we’ve created a uniquely interdisciplinary team with expertise in linguistics, speech, AI, security, and privacy to help us meet this important challenge”.

The project will collect speech samples from people representing a range of different speech patterns, creating a dataset which will be used to train machine learning models to understand more of these patterns - and ultimately improve the inclusiveness of speech recognition systems.

Technology to overcome communication barriers

The Davis Phinney Foundation, a community-based organisation, is committed to supporting people with Parkinson’s disease. Its executive director, Polly Dawkins, said: “Part of that commitment includes ensuring people with Parkinson’s have access to the tools, technologies, and resources needed to live their best lives.

“Parkinson’s affects motor symptoms, making typing difficult, so speech recognition is a critical tool for communication and expression. We are thrilled to partner with this team to ensure that this effort can benefit our community”.

Another organisation involved in the project, Team Gleason, helps the ALS community with assistive technology, equipment, and robust support services.

“Team Gleason strives each day to provide the best available assistive technology for the ALS community while simultaneously exploring ways to advance future solutions,” said Blair Casey, the group’s executive director.

“Technology has the ability to overcome communication barriers and increase independence. Team Gleason is proud to help accelerate this effort for people living with ALS and anyone else with speech differences”.
