What World Hearing Day means for this Googler

Dimitri Kanevsky, a research scientist at Google with an extensive background in mathematics, knows the impact technology can have when built with accessibility in mind. Having lost his hearing in early childhood, he imagines a world where technology makes it easier for people who are deaf or hard of hearing to be part of everyday, in-person conversations with hearing people, whether they're ordering coffee at a cafe, conversing with coworkers or checking out at the grocery store.

Dimitri has been turning that idea into a reality. He co-created Live Transcribe, our speech-to-text technology, which launched in 2019 and is now used daily by over a million people to communicate — including Dimitri. He works closely with the team to develop new and helpful features — like an offline mode that will be launching in the coming weeks to give people access to real-time captions even when Wi-Fi and data are unavailable.

For World Hearing Day, we talked with Dimitri about his work, why building for everyone matters and the future of accessible technology.

Tell us more about your background and job at Google.

When I moved to the U.S. in 1984, there were no transcription services. I wanted to change that, so I focused my work on optimizing speech and language recognition to help people who are deaf or hard of hearing.

I eventually moved from academia to Google’s speech recognition team in 2014. The work my team and I accomplished allowed us to create practical applications — like Live Transcribe and Live Caption.

How has your personal experience shaped your career?

I completely lost my hearing when I was one. I learned to lipread well so I could communicate with other students and teachers. My family was also very helpful to me. When I switched to a school where my father taught, he made sure I was in a class with children I knew so it was a smoother transition.

But in eighth grade, I moved to a math school with new teachers and students and was unable to lipread what they taught in class or communicate with my new classmates. I sat, day after day, not understanding the material they were teaching and had to teach myself from textbooks. If I had a tool like Live Transcribe when I was growing up, my experience would have been very different.

In what ways has assistive technology — like Live Transcribe — changed your experience today?

Technology provides tremendous opportunities to help people with disabilities — I know this firsthand.

I use Live Transcribe every day to communicate with others. I use it to play games and share stories with my twin granddaughters, which is life-changing. And just last week, I gave a lecture at a mathematics seminar at Johns Hopkins University. During it, I could interact with the audience and answer questions; without Live Transcribe, that would have been very difficult for me to do.

I used to rely heavily on lipreading for day-to-day tasks, but when people wear masks I can't do that — I don't even know when someone who's wearing a mask is talking to me. Because of this, Live Transcribe is even more important to me — especially when at stores, riding public transit or visiting a doctor.

What are you excited about when you think about speech recognition technology ten years from now?

My dream is to use speech recognition technology to help people communicate. As technology advances, it will unlock new possibilities, such as transcribing speech even as people switch languages, understanding people with any accent or speech motor ability, indicating more sound events with visual symbols, and automatically integrating sign language recognition or additional haptic feedback technologies.

Further in the future, I hope to see an experience where people are no longer dependent on a mobile phone to see transcriptions. Perhaps transcriptions will be available in convenient wearable eye technologies, or appear on a wall when someone looks at it. Some even predict there will be no mobile phones at all, since the devices around us, like our walls, will act as mobile devices whenever people need them to.

What do you want others to learn from World Hearing Day?

According to the World Health Organization, one in ten people will experience hearing loss by 2050. Still, many people with hearing loss don't know about the novel speech recognition technologies that could help them communicate, and hearing people aren't aware of these tools either.

World Hearing Day is an opportunity to make everybody aware of the needs of people with hearing loss and the technology that everyone can use to have a tremendous impact on their lives.

Important household sounds become more accessible

Appliances beeping. Water running. Dogs barking. These are all sounds that are meant to grab your attention when something important is happening. But if you have hearing loss or are wearing headphones, these sounds might not draw your attention the way they're intended to.


Sound Notifications is a new feature on Android that provides push notifications for critical sounds around you. Designed for the estimated 466 million people in the world with hearing loss, Sound Notifications makes important household sounds more accessible through push notifications, a flash from your camera light or vibrations on your Android phone. The feature can also help anyone who is temporarily unable to hear, whether because of an injury or because they're wearing earplugs or headphones.

Receive real-time push notifications of critical sounds around you.

Sound Notifications works with other devices, including Wear OS by Google smartwatches. You can get text notifications with vibrations on your wrist when your phone detects an important noise. That way, you can continue to get alerts about critical sounds even when you are asleep, a concern shared by many in the deaf and hard of hearing community.


Receive critical sound notifications on other devices, including Wear OS by Google smartwatches.
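The post doesn't go into the plumbing, but for the curious, here is a rough sketch of how an Android app can deliver this kind of alert: notifications posted from a high-importance channel vibrate the phone, and the system automatically bridges them to a paired Wear OS watch. The channel ID and strings below are illustrative assumptions, not Sound Notifications' actual implementation.

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

// Illustrative sketch only, not Sound Notifications' real code.
// A high-importance channel vibrates the phone, and the system bridges
// the notification to a paired Wear OS watch automatically.
// (On Android 13+ the app also needs the POST_NOTIFICATIONS permission.)
const val CHANNEL_ID = "critical_sounds"  // hypothetical channel ID

fun notifyCriticalSound(context: Context, soundLabel: String) {
    val manager = context.getSystemService(NotificationManager::class.java)
    manager.createNotificationChannel(
        NotificationChannel(
            CHANNEL_ID, "Critical sounds", NotificationManager.IMPORTANCE_HIGH
        ).apply { enableVibration(true) }
    )

    val notification = NotificationCompat.Builder(context, CHANNEL_ID)
        .setSmallIcon(android.R.drawable.ic_dialog_alert)
        .setContentTitle("Sound detected")
        .setContentText(soundLabel)  // e.g. "Smoke alarm"
        .setCategory(NotificationCompat.CATEGORY_ALARM)
        .setPriority(NotificationCompat.PRIORITY_HIGH)
        .build()

    NotificationManagerCompat.from(context).notify(soundLabel.hashCode(), notification)
}
```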

Developed with machine learning, Sound Notifications works completely offline and uses your phone's microphone to recognize ten different noises, including baby sounds, water running, smoke and fire alarms, appliances beeping and door knocking. This expands on our sound detection work in Live Transcribe, which shows more than 30 sound events alongside real-time captions, to provide a better picture of overall sound awareness.
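Google hasn't published the model behind Sound Notifications, but as an illustration of how offline sound-event detection like this can work, here's a minimal Kotlin sketch using the TensorFlow Lite Task Library with a pretrained audio classifier such as YAMNet. The model file name, watched labels and confidence threshold are assumptions made for the example.

```kotlin
import android.content.Context
import org.tensorflow.lite.support.label.Category
import org.tensorflow.lite.task.audio.classifier.AudioClassifier

// Minimal sketch of offline sound-event detection, assuming a bundled
// TFLite audio model ("yamnet.tflite") and the TensorFlow Lite Task
// Library (org.tensorflow:tensorflow-lite-task-audio). The app needs the
// RECORD_AUDIO permission. Sound Notifications' real model, labels and
// thresholds are not public.
class SoundEventDetector(context: Context) {
    private val classifier = AudioClassifier.createFromFile(context, "yamnet.tflite")
    private val audioTensor = classifier.createInputTensorAudio()
    private val record = classifier.createAudioRecord().apply { startRecording() }

    // YAMNet class names roughly matching the categories the post lists.
    private val watchedLabels = setOf(
        "Smoke detector, smoke alarm", "Fire alarm", "Dog",
        "Baby cry, infant cry", "Water", "Knock", "Beep, bleep"
    )

    // Read the latest microphone audio and return the highest-scoring
    // watched sound above a (hypothetical) confidence threshold, if any.
    fun detect(threshold: Float = 0.5f): Category? {
        audioTensor.load(record)  // fill the input tensor from the AudioRecord
        return classifier.classify(audioTensor)
            .firstOrNull()
            ?.categories
            ?.filter { it.label in watchedLabels }
            ?.maxByOrNull { it.score }
            ?.takeIf { it.score >= threshold }
    }
}
```

A background service could poll detect() every second or so and hand any hit to something like the notifyCriticalSound() sketch above.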


Use Timeline view to scroll through a snapshot of detected sounds from the past few hours.

While we can notify you about baby sounds or a dog barking, it often helps to know more about the preceding events that might have caused the disturbance. With the Timeline view, you can scroll through a brief snapshot of detected sounds from the past few hours. It shows when each sound occurred and how long it lasted, so you can get a better sense of its importance. If the dog has been barking for 10 minutes because of a siren that sounded earlier, you can see that.


To start using Sound Notifications, go to Settings, open the Accessibility menu and turn on Sound Notifications. If you don't see this option on your phone, download both Live Transcribe and Sound Notifications from Google Play, then go to your settings and turn on Sound Notifications. To learn more about using Sound Notifications, visit the help center.

Source: Android