Google Develops AI to Support and Assist Individuals with Speech Impairments

Voice Assistants and Speech Impairments: Google's Project Euphonia

For many, voice assistants are invaluable tools; however, for millions with speech impairments due to neurological conditions, these technologies can pose significant challenges. Google is determined to change this narrative. At its recent I/O developer conference, the company unveiled its efforts to enhance AI's ability to understand diverse speech patterns, specifically focusing on impaired speech resulting from conditions like brain injuries and ALS.

Through Project Euphonia, Google has collaborated with the ALS Therapy Development Institute (ALS TDI) and the ALS Residence Initiative (ALSRI). The goal is clear: if friends and family can comprehend their loved ones with ALS, then AI can be trained to do the same, given enough examples of impaired speech.

To achieve this, Google set out to collect thousands of voice samples. Notably, Dimitri Kanevsky, a speech researcher at Google who learned English after becoming deaf as a child in Russia, contributed 15,000 recorded phrases. These recordings were converted into spectrograms (visual representations of sound) and used to train the AI to recognize Kanevsky's speech.
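Google has not published the details of Project Euphonia's pipeline, but the basic transformation described above (audio waveform to spectrogram) is standard in speech recognition. The following is a minimal sketch using SciPy, with a synthetic tone standing in for a real voice recording:

```python
import numpy as np
from scipy import signal

# Synthetic 1-second "voice sample": a 440 Hz tone plus noise.
# (A stand-in for a real recording; the actual Euphonia data is not public.)
sample_rate = 16000
t = np.linspace(0, 1, sample_rate, endpoint=False)
waveform = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sample_rate)

# Short-time Fourier transform: slide a window across the audio and take
# the spectrum of each segment, producing a time-frequency image.
freqs, times, spec = signal.spectrogram(waveform, fs=sample_rate, nperseg=512)

# Log-scale the magnitudes, a common step before feeding a speech model.
log_spec = np.log(spec + 1e-10)
print(log_spec.shape)  # (frequency bins, time frames)
```

The resulting 2-D array is what a speech model actually "sees": each column is a moment in time, each row a frequency band, which is why such images can be treated as training examples much like photographs.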

This initiative is still in development, currently focusing on English speakers with ALS-related impairments. Google is actively seeking volunteers who can complete a brief form and record specific phrases. Additionally, the company aims to enhance its AI to interpret sounds and gestures, allowing users to perform actions such as issuing commands to Google Home or sending text messages. Ultimately, Google envisions a future where its AI can understand every individual, irrespective of their communication style.
