Webinar: Automated Speech Recognition (ASR) to help hearing-impaired people communicate more easily

This webinar was presented in the Computational Audiology Network 2022 podcast series, episode 2.

The overall objective was to discuss how automated speech recognition technology, which now achieves high accuracy, can help hearing-impaired people communicate more easily.

This interview was prepared as an experiment using an automated speech recognition (speech-to-text) system. One of the participants, Dimitri Kanevsky, lost his hearing in early childhood and follows the discussion by reading a transcript of what is said; the other participants have normal hearing. All participants need to take time to read the transcript and confirm that they understand each other properly. The discussion used Google Meet and Google Relate, a prototype system not yet publicly released, which was trained on Dimitri’s speech. In addition, the participants had not met in person before, and English is not everyone’s first language. The (edited) video recording below includes the transcript of what Dimitri says.

Listen to the audio recording and find out more from the conference organisers, the Computational Audiology Network.

Presenters

Jessica Monaghan works as a research scientist at the National Acoustic Laboratories (NAL, Sydney) with a special interest in machine learning applications in audiology. She studied physics in Cambridge (UK) and received a Ph.D. in Nottingham (UK). She worked as a research fellow in Southampton and at Macquarie University in Sydney. Her work focuses on speech reception and how to improve it for people with hearing loss. Recently she studied the effect of face masks on speech recognition.

Nicky Chong-White is a research engineer at the National Acoustic Laboratories (NAL, Sydney). She studied Electrical Engineering at the University of Auckland (NZ) and received a Ph.D. in speech signal processing from the University of Wollongong (AU). She has worked as a DSP engineer with several research organisations, including the Motorola Australian Research Centre and AT&T Labs, and holds 10 patents. She is the lead developer behind NALscribe, a live captioning app designed especially for clinical settings that helps people with hearing difficulties understand conversations more easily. She has a passion for mobile application development and for creating innovative digital solutions that enrich the lives of people with hearing loss.

Dimitri Kanevsky is a researcher at Google. He lost his hearing in early childhood. He studied mathematics and received a Ph.D. from Moscow State University. Subsequently, Dimitri worked at various research centers, including the Max Planck Institute in Bonn (Germany) and the Institute for Advanced Study in Princeton (USA), before joining IBM in 1986 and Google in 2014. He has worked for over 25 years on developing and improving speech recognition for people with profound hearing loss, work that led to Live Transcribe and Relate. Dimitri has also worked on other technologies to improve accessibility. In 2012 he was honored at the White House as a Champion of Change for his efforts to advance access to science, technology, engineering, and math (STEM) for people with disabilities. He currently holds over 295 patents.

Further reading: NALscribe, a speech-to-text technology developed by NAL to help hearing-impaired people communicate more easily.
