Tag Archives: accessibility

How ultrasound sensing makes Nest displays more accessible

Last year, I gave my 74-year-old father a Nest Hub for Christmas. Over the following months, I noticed he would often walk up to the device to read the information on the screen, because he couldn’t see it easily from across the room. I wondered if other people were having the same issue. 

My team at Google Nest and I started having conversations with older adults in our lives who use our products, asking them how they use their devices and observing how they interact with them. In the course of our research, we learned that one in three people over the age of 65 has a vision-reducing eye disease, and that’s on top of the millions of people of all ages who deal with some form of vision impairment. 

We wanted to create a better experience for people who have low vision. So we set out to create a way for more people to easily see our display from any distance in a room, without compromising the useful information the display could show when nearby. The result is a feature we call ultrasound sensing. 

We needed to find a sensing technology that could detect whether you were close to a device or far away from it and show you the right things based on that distance, while protecting people’s privacy. Our engineers landed on one that was completely new to Google Assistant products, but has been used in the animal kingdom for eons: echolocation. 

Animals with low vision—like bats and dolphins—use echolocation to understand and navigate their environments. Bats emit ultrasonic “chirps” and listen to how those chirps bounce off of objects in their environments and travel back to them. In the same way, Nest Hub and Nest Hub Max emit inaudible sound waves to gauge your proximity to the device. If you’re close, the screen will show you more details and touch controls, and when you’re further away, the screen changes to show only the most important information in larger text. Ultrasound sensing allows our smart displays to react to a user’s distance. 
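
To make the echolocation idea concrete, here is a minimal, purely illustrative sketch of the time-of-flight arithmetic behind this kind of sensing. The constants, the distance threshold and the function names are assumptions for the example, not a description of how Nest devices actually implement the feature.

```python
# Illustrative time-of-flight sketch; not Nest's actual implementation.

SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound in air at ~20 °C


def estimate_distance_m(echo_delay_s: float) -> float:
    """Estimate the distance to an object from the round-trip echo delay.

    The chirp travels to the object and back, so the one-way distance is
    half of (speed of sound x delay).
    """
    return SPEED_OF_SOUND_M_PER_S * echo_delay_s / 2


def choose_ui_mode(distance_m: float, near_threshold_m: float = 1.5) -> str:
    """Pick a display mode based on estimated proximity.

    The 1.5 m threshold is made up for illustration; a real device would tune
    this empirically and smooth noisy readings over time.
    """
    return "detailed" if distance_m <= near_threshold_m else "glanceable"


if __name__ == "__main__":
    delay_s = 0.020  # a 20 ms round trip corresponds to roughly 3.4 m
    distance = estimate_distance_m(delay_s)
    print(f"{distance:.2f} m away -> {choose_ui_mode(distance)} UI")
```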

Directions on a Nest Hub

Ultrasound sensing allows your display to show the most important information when you’re far away, like your total commute time, and show more detail as you get close to the device.

To develop the right screen designs, the team tested varying text heights, contrast levels and information density and measured the ease with which people could read what’s on the screen. It was refreshing when, regardless of age or visual impairment, testers would make comments like, “it just feels easier to read.” It turned out that designing for people with low vision improved the experience for everyone.

Ultrasound testing

Testing ultrasound sensing during the design process.

Ultrasound waves

What ultrasound sensing “sees” on a smart display.

Ultrasound sensing already works for timers, commute times and weather. And over the coming week, your devices will also begin to show reminders, appointments and alerts when you approach the display. Because it uses a low-resolution sensing technology, ultrasound sensing happens entirely on the device and can only detect large-scale motion (like a person moving); it cannot identify who that person is.

After we built the ultrasound sensing feature, I tested it with my dad. As soon as I saw him reading his cooking timer on the screen from across the kitchen, I knew we’d made something that would make our devices even more helpful to more people. 

How I’m making Maps better for wheelchair users like me

If you visit a city and don’t see anyone using a wheelchair, it doesn’t mean they’re not there. It means the city hasn’t been built in a way that lets them be part of things. I know this firsthand: I’m one of 65 million people around the world who use a wheelchair, and I see every day how a city’s infrastructure can prevent people like me from being active, visible members of society.

On July 29, 2009, I was taking my usual morning walk through New York’s Central Park when a dead tree branch snapped and fell on my head. The spinal damage partly paralyzed my lower body. I spent the next seven months in the hospital, where I got the first glimpse of what my life would be like from then on. I was going to use a wheelchair for the rest of my life—and my experience as a born and bred New Yorker was about to change forever.  

That’s because much of the city isn’t accessible for people like me. Fewer than one in four subway stations in New York City have wheelchair access. And plenty of places, from restaurants to schools, lack a way for me to even get inside. It was humbling to realize these barriers had been there all the while I was growing up in New York; I simply hadn’t noticed.

Those realizations were in my mind when I returned to work in 2011 as an engineer on the Search team, especially because I could no longer take my usual subway route to work. However, the more I shared with colleagues, the more I found people who wanted to help solve real-world access needs. Using “20 percent time”—time spent outside day-to-day job descriptions—my colleagues like Rio Akasaka and Dianna Hu pitched in and we launched wheelchair-friendly transit directions. That initial work has now led to a full-time team dedicated to accessibility on Maps.

I’ve also collaborated with another group of great allies, stretching far beyond Google. For the past several years, I’ve worked with our Local Guides, a community of 120 million people worldwide who contribute information to Google Maps. By answering questions like “Does this place have a wheelchair-accessible entrance?”, Local Guides help people with mobility impairments decide where to go. Thanks to them, we can now provide crowdsourced accessibility information for more than 50 million places on Google Maps. At our annual event last year and again several weeks ago, I met some amazing Guides, like Emeka from Nigeria and Ilankovan from Sri Lanka, who have become informal accessibility ambassadors themselves, promoting the inclusion of people with disabilities in their communities around the world.

Today, on International Day of Persons With Disabilities, I hope our work to make Google Maps more inclusive underscores what Angela Glover Blackwell wrote so powerfully about in “The Curb-Cut Effect.” When we build with accessibility in mind, it doesn’t just help people with disabilities. It helps everyone. Curb cuts in sidewalks don’t just help me cross the street—they also help parents pushing strollers, workers with deliveries and tourists with suitcases. As Blackwell puts it, building equity is not a zero-sum game—everyone benefits.

The people in wheelchairs you don’t see in your city? They've been shut out, and may not be able to be a part of society because their environment isn't accessible. And that’s not merely a loss for them. It’s a loss for everyone, including friends, colleagues and loved ones of people with disabilities. I’m grateful to those who stay mindful of the issues faced by people like me to ensure that our solutions truly help the greater community.

Source: Google LatLong


Google Disability Support now includes American Sign Language

There are 466 million people in the world who are deaf and hard of hearing, and products like Live Transcribe and Sound Amplifier help them communicate and interact with others. If people with disabilities need specialized technical support for Google’s products and services, they can go to Google Disability Support. Starting today, American Sign Language (ASL) specialists are available to help people who are deaf or hard of hearing over video chat, with help from Connect Direct through TELUS International.

ASL specialists are available Monday through Friday from 8:00 a.m. to 5:00 p.m. PT to answer questions about assistive features and functionality within Google’s products. For example, an ASL specialist could show you how to set up your new Pixel using Live Caption or how to present Google Slides with captions.

The Google Disability Support team is composed of inclusion advocates who are eager to work with the community and Googlers to improve and shape Google’s products based on feedback from the people who use them. Visit the Google Accessibility Help Center to learn more about Google Accessibility and head to g.co/disabilitysupport to connect with an ASL specialist today.

11 ways Google is making life more accessible

On December 3, 1992, the United Nations founded the International Day of Persons with Disabilities to promote the well-being of people who have disabilities. At Google, we’re doing this by emphasizing accessibility-first design and partnering with communities directly so we can create the most helpful products. This year, we launched several products and features designed to make our technology more accessible. Here are a few ways anyone, but especially people with disabilities, can use these tools.

1. An important conversation is happening, but it’s difficult to follow and you wish someone could transcribe it in real time. 

With Live Transcribe, you can get real-time transcriptions of conversations that are going on around you, even if they’re in another language. 

2. You and your friends are talking about weekend plans, but it’s too loud for you to hear them.

It’s a good thing you downloaded Sound Amplifier from the Google Play Store. Open it, pull out your headphones and get the audio boost you need. 

3. You were challenged to play Chessback, but you’re wondering if you’ll be able to fully experience the game. 

By selecting the “blind-friendly” filter in Google Play, you can quickly identify games that work well with your screen reader or are entirely audio-based. 

4. Someone just handed you a piece of paper, but you’re not sure what it says. 

Just say “Hey Google, open Lookout,” raise your phone over the paper, and wait for the AI-powered app to read out the information to you. If you have trouble, just say “Hey Google, open Be My Eyes for Google” and get connected to someone who can help.

5. You’re in a new city and want more help navigating your way on foot to a must-visit museum. 

If you’re in the U.S. or Japan, plug in your headphones and turn on Detailed Voice Guidance in the “Navigation” setting of Maps. Then you’ll get updates about when your next turn is, consistent assurance you’re on the right route, and a heads up when you’re coming up to a busy road. 

6. You want to watch your favorite show on your phone but can’t figure out all the steps you need to go through to access it. 

We’re working on an app called Action Blocks to help you (or anyone you care for who has a cognitive disability) turn multiple actions into one personalized icon on your phone. So you can watch your favorite show and do other tasks simply by clicking on an image that denotes the action you’re trying to complete. 

7. An adorable photo of a puppy would totally spruce up your email, but your screen reader keeps picking up “unlabeled image.”

Turn on “Get Image Descriptions from Google” to start using Chrome’s AI-powered feature to get alt-text automatically generated on millions of previously unlabeled images. 

8. Your mom just sent you a video of your cousin announcing something, but you can’t hear the audio.

Go ahead and touch the Live Caption icon near the volume control and turn your phone into a personal, pocket-sized captioning system (currently available on Pixel 3 and Pixel 4).

9. It’s an emergency: You dial 911, but you can’t speak to the operator.

The most important thing in this situation is getting the help you need. We’re working on Emergency Assisted Dialing to help anyone communicate through text-to-speech and share information with a 911 operator, whether or not they can speak.

10. You and your grandma are on the phone trying to find time to schedule a visit, but hearing or speaking, especially on the phone, is difficult for you.

A research project called Live Relay is working to create a feature to make it easier for you to use text-to-speech or speech-to-text on your phone to communicate when you aren’t able to speak or hear a conversation. 

11. You’re a developer who wants to start creating more inclusive products for people with disabilities.

Accessibility is something that should be emphasized from the beginning of development. Visit our developer guidelines for in-depth examples of how to make your apps and websites more accessible. 

We hope these tips help you get the most out of your Google devices and apps, as well as give you a peek into what we’re thinking about for the future. 

Visit the Google Accessibility Help Center to learn more about Google Accessibility and head to g.co/disabilitysupport to connect with a Disability Support specialist. 

Chord Assist makes playing the guitar more accessible

Joe Birch, a developer based in the UK, has a genetic condition that causes low vision. He grew up playing music, but he knows it’s not easy for people who have visual impairments or hearing loss to learn how to play. 

He wanted to change that, so he created Chord Assist, which aims to make learning the guitar more accessible for people who are blind, Deaf and mute. It gives instructions on how to play the guitar through braille, a speaker or visuals on a screen, allowing people to have a conversation to learn to play a certain chord.

Chord Assist is powered by Actions on Google, a platform that lets developers build conversational experiences for Google Assistant. The guitar is used as a conversational tool that lets a student learn a chord by simply saying, “Show me how to play a G chord,” for example. The guitar understands the request, and then gives either a voice output or braille output, depending on the need. 
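
As a rough illustration of that flow, the sketch below parses a chord request from an utterance, looks up a fingering, and renders it for voice or on-screen output. It is a hypothetical example, not Joe’s actual code or the Actions on Google API; the chord table and function names are made up for illustration, and a braille mode would render the same fingering data on a braille display.

```python
# Hypothetical sketch of handling a request like "Show me how to play a G chord".
import re
from typing import Optional

# Fret positions from low E to high E; -1 means "don't play that string".
CHORDS = {
    "G": [3, 2, 0, 0, 0, 3],
    "C": [-1, 3, 2, 0, 1, 0],
    "D": [-1, -1, 0, 2, 3, 2],
}


def parse_chord_request(utterance: str) -> Optional[str]:
    """Pull a chord name out of a phrase like 'show me how to play a G chord'."""
    match = re.search(r"\b([A-G][#b]?)\s+chord\b", utterance, re.IGNORECASE)
    return match.group(1).upper() if match else None


def render(chord: str, mode: str) -> str:
    """Render the fingering as speech text or a compact on-screen tab line."""
    frets = CHORDS[chord]

    def describe(string_name: str, fret: int) -> str:
        if fret < 0:
            return f"mute the {string_name} string"
        if fret == 0:
            return f"{string_name} string open"
        return f"{string_name} string, fret {fret}"

    if mode == "voice":
        strings = ["low E", "A", "D", "G", "B", "high E"]
        parts = [describe(s, f) for s, f in zip(strings, frets)]
        return f"To play {chord}: " + "; ".join(parts) + "."
    return " ".join("x" if f < 0 else str(f) for f in frets)  # e.g. "3 2 0 0 0 3"


if __name__ == "__main__":
    chord = parse_chord_request("Show me how to play a G chord")
    if chord and chord in CHORDS:
        print(render(chord, "voice"))
        print(render(chord, "screen"))
```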

“I love seeing people pushing the boundaries and breaking the expectations of others,” Joe says. “When someone builds an innovative project that can change the lives of others, it inspires me to achieve the things that I am passionate about. That’s what this whole developer community is really all about: we are here to inspire each other.” 

With the emergence of new technology and easy-to-access educational resources, it’s easier than ever to become a developer. The developer community is global, and is made up of people from all walks of life and backgrounds, with one thing in common—using technology to take an idea and turn it into reality. 


That is what the Google Developer Experts program aims to do by connecting 700 outstanding developers around the world. They gather to share the skills they’ve mastered through application development, podcasts, public speaking and bringing technology to local communities. Each Google Developer Expert has experience and expertise in one or more specific Google technologies.

Joe is a GDE focused on Actions on Google and Android, and has been an engineer for seven years. “Being a GDE allows me to fulfill my passion for both technology and education,” Joe says. “I learned so much by following designers and developers online. Seeing the cool work that these people are doing helps to fuel my brain and inspire me for the next idea that I might have.”


Google Classroom accessibility empowers inclusive learning

Grace is a 5th grader at Village Elementary School near San Diego, CA. As a student who is blind, she’s used to using multiple pieces of equipment or having an aide support her. But when she started using Google Classroom with a screen reader, “it opened up a whole world for her,” according to Grace’s mom. She is now able to participate confidently alongside her sighted peers. 

Many tools in G Suite have accessibility features built in, including screen readers, voice typing, and braille displays—and Classroom is no different. It helps teachers create and organize assignments quickly, provide feedback efficiently, and easily communicate with students and guardians. Classroom is now used by 40 million students and educators globally, each of whom learns and teaches in a unique way. 

Grace is one story of a student excelling in her class with the support of technology, and we’d love to hear from you about the tools you’re using to support all learners. To learn more about the accessibility features built into G Suite and Chromebooks, head to edu.google.com/accessibility.

On-Device Captioning with Live Caption



Captions for audio content are essential for the deaf and hard of hearing, but they benefit everyone. Watching video without audio is common, whether on the train, in meetings, in bed or when the kids are asleep, and studies have shown that subtitles can increase the time users spend watching a video by almost 40%. Yet caption support is fragmented across apps, and even within them, resulting in a significant amount of audio content that remains inaccessible, including live blogs, podcasts, personal videos, audio messages, social media and more.
Recently we introduced Live Caption, a new Android feature that automatically captions media playing on your phone. The captioning happens in real time, completely on-device, without using network resources, thus preserving privacy and lowering latency. The feature is currently available on Pixel 4 and Pixel 4 XL, will roll out to Pixel 3 models later this year, and will be more widely available on other Android devices soon.

When media is playing, Live Caption can be launched with a single tap from the volume control to display a caption box on the screen.

Building Live Caption for Accuracy and Efficiency
Live Caption works through a combination of three on-device deep learning models: a recurrent neural network (RNN) sequence transduction model for speech recognition (RNN-T), a text-based recurrent neural network model for unspoken punctuation, and a convolutional neural network (CNN) model for sound event classification. Live Caption integrates the signals from the three models to create a single caption track, where sound event tags, like [APPLAUSE] and [MUSIC], appear without interrupting the flow of speech recognition results. Punctuation symbols are predicted while the text is updated in parallel.
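
The following is a minimal sketch of what merging those three signals into one caption track could look like. The class and function names (CaptionTrack, caption_step, the placeholder punctuate callable) are stand-ins for illustration, not Live Caption’s actual code.

```python
# Sketch of merging sound-event tags, ASR text and punctuation into a single
# caption track. All names here are illustrative stand-ins.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class CaptionTrack:
    lines: list = field(default_factory=list)
    _speech_idx: Optional[int] = None  # index of the line holding current speech

    def append_tag(self, tag: str) -> None:
        # Sound event tags such as [MUSIC] get their own line, so they do not
        # interrupt the flow of the speech transcript.
        self.lines.append(f"[{tag}]")

    def update_speech(self, text: str) -> None:
        # The current utterance is refined in place as ASR results are updated.
        if self._speech_idx is None:
            self.lines.append(text)
            self._speech_idx = len(self.lines) - 1
        else:
            self.lines[self._speech_idx] = text


def caption_step(track, sound_event, asr_text, punctuate):
    """One update cycle: route a sound event or refreshed ASR text into the track."""
    if sound_event in ("MUSIC", "APPLAUSE", "LAUGHTER"):
        track.append_tag(sound_event)
    if asr_text:
        # Punctuation is predicted on the running text in parallel.
        track.update_speech(punctuate(asr_text))


if __name__ == "__main__":
    track = CaptionTrack()
    capitalize = lambda s: s[0].upper() + s[1:]  # placeholder "punctuation model"
    caption_step(track, None, "hello everyone", capitalize)
    caption_step(track, "APPLAUSE", "hello everyone welcome back", capitalize)
    print("\n".join(track.lines))
```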

Incoming sound is processed through a Sound Recognition and ASR feedback loop. The produced text or sound label is formatted and added to the caption.

For sound recognition, we leverage previous work on sound event detection, using a model built on top of the AudioSet dataset. The Sound Recognition model is used not only to generate popular sound effect labels but also to detect speech periods. The full automatic speech recognition (ASR) RNN-T engine runs only during speech periods in order to minimize memory and battery usage. For example, when music is detected and speech is not present in the audio stream, the [MUSIC] label will appear on screen, and the ASR model will be unloaded. The ASR model is only loaded back into memory when speech is present in the audio stream again.
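
Here is a hedged sketch of that gating pattern: a small, always-on classifier decides what is in the audio, and the heavier ASR model is only kept in memory during speech periods. The classifier, loader and model objects are hypothetical placeholders, not the real Live Caption components.

```python
# Sketch of gating the ASR engine on detected speech periods. The
# sound_classifier and asr_loader objects are illustrative placeholders.
class CaptionEngine:
    def __init__(self, sound_classifier, asr_loader):
        self.sound_classifier = sound_classifier  # small, always-on model
        self.asr_loader = asr_loader              # loads/unloads the RNN-T model
        self.asr = None                           # not resident until speech is heard

    def process_chunk(self, audio_chunk):
        label = self.sound_classifier.classify(audio_chunk)  # e.g. "SPEECH", "MUSIC"

        if label == "SPEECH":
            if self.asr is None:
                # Load the ASR model back into memory only when speech returns.
                self.asr = self.asr_loader.load()
            return {"text": self.asr.transcribe(audio_chunk)}

        if self.asr is not None:
            # No speech in the stream: unload ASR to save memory and battery.
            self.asr_loader.unload(self.asr)
            self.asr = None
        return {"tag": f"[{label}]"}  # e.g. [MUSIC] shown on screen
```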

In order for Live Caption to be most useful, it should be able to run continuously for long periods of time. To do this, Live Caption’s ASR model is optimized for edge devices using several techniques, such as neural connection pruning, which reduced power consumption to 50% of that of the full-sized speech model. Yet while the model is significantly more energy efficient, it still performs well for a variety of use cases, including captioning videos, recognizing short queries and narrowband telephony speech, while also being robust to background noise.

The text-based punctuation model was optimized for running continuously on-device using a smaller architecture than the cloud equivalent, and then quantized and serialized using the TensorFlow Lite runtime. As the caption is formed, speech recognition results are rapidly updated a few times per second. In order to save on computational resources and provide a smooth user experience, the punctuation prediction is performed on the tail of the text from the most recently recognized sentence, and if the next updated ASR results do not change that text, the previously punctuated results are retained and reused.
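
A small sketch of that caching idea follows, with a stand-in punctuation model: the tail of the transcript is only re-punctuated when the underlying ASR text for that tail has actually changed.

```python
# Illustrative sketch of reusing punctuation results for an unchanged tail.
# The punctuation_model object is a hypothetical stand-in.
class TailPunctuator:
    def __init__(self, punctuation_model):
        self.model = punctuation_model
        self._last_raw_tail = None
        self._last_punctuated = None

    def punctuate_tail(self, raw_tail: str) -> str:
        """Punctuate the most recently recognized sentence, reusing the previous
        result if the ASR text has not changed since the last update."""
        if raw_tail == self._last_raw_tail:
            return self._last_punctuated  # cached result, no model call needed
        punctuated = self.model.add_punctuation(raw_tail)
        self._last_raw_tail = raw_tail
        self._last_punctuated = punctuated
        return punctuated
```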

Looking forward

Live Caption is now available in English on Pixel 4 and will soon be available on Pixel 3 and other Android devices. We look forward to bringing this feature to more users by expanding its support to other languages and by further refining the formatting to improve the perceived accuracy and coherence of the captions, particularly for multi-speaker content.

Acknowledgements

The core team includes Robert Berry, Anthony Tripaldi, Danielle Cohen, Anna Belozovsky, Yoni Tsafir, Elliott Burford, Justin Lee, Kelsie Van Deman, Nicole Bleuel, Brian Kemler, and Benny Schlesinger. We would like to thank the Google Speech team, especially Qiao Liang, Arun Narayanan, and Rohit Prabhavalkar for their insightful work on the ASR model as well as Chung-Cheng Chiu from Google Brain Team; Dan Ellis and Justin Paul for their help with integrating the Sound Recognition model; Tal Remez for his help in developing the punctuation model; Kevin Rocard and Eric Laurent for their help with the Android audio capture API; and Eugenio Marchiori, Shivanker Goel, Ye Wen, Jay Yoo, Asela Gunawardana, and Tom Hume for their help with the Android infrastructure work.

Source: Google AI Blog


Investing in affordable and inclusive communities

Editor’s note: This guest post comes from Micaela Connery, Founder and CEO of The Kelsey.

My cousin Kelsey and I were born three months apart, going through every life milestone together. When it came time to live on our own, it took me several months to find housing—but it took Kelsey almost eight years. Her family struggled to find a home that was supportive of her disabilities, while still letting Kelsey be part of the broader community.

Kelsey and Micaela

Kelsey and me

It’s a challenge almost every adult with disabilities faces. Kelsey was one of the lucky ones, with supportive parents and good local resources. The reality is that over 70 percent of people with developmental disabilities never move from their family home. This challenge is particularly acute in lower income communities or communities of color.

Addressing this critical housing need for adults with disabilities can, and must, be done through inclusion in design, funding, policies and culture. The Kelsey creates and advocates for housing where people with and without disabilities live, play, and serve together. With a $5.3 million investment from Google, we're building our first community—The Kelsey Ayer Station—in San Jose, California.

The Kelsey Ayer Station will provide 115 homes to people of all abilities and all incomes. Our rent prices accommodate people with a range of incomes, and 25 percent of the community is specifically reserved for people with disabilities. Developed in partnership with Sares Regis Group of Northern California, the entire space (including each unit) is designed to be accessible and inclusive to everyone. The site includes features like a drop-off for accessible transit, a sensory garden, and space for support staff. The building will have an Inclusion Concierge™, which means that two staff members will live in the community full time and connect residents to each other, the services and support they need, and the broader city around them. It will be a community where everyone—regardless of background, disability, identity, gender, age and race—can feel at home.

Google’s investment is part of its broader commitment to Bay Area housing. With it, we no longer have to worry about critical pre-development costs like purchasing and entitling our land and completing initial design work. At the same time, Google’s financing will help us focus on securing permanent financing and philanthropic support to complete the project. Google’s investment allows us to stick to our ambitious pace: residents will move into the space in four years, a timeline rarely seen in the housing industry. 

Less than 12 percent of adults with developmental disabilities own or rent their own home. But what people with disabilities want in housing isn’t particularly special or different. People want a place where they have privacy and independence, but also community where they feel safe without being constrained. People want a home they are proud of and can thrive in. Most importantly, housing for people with disabilities isn’t a problem to be solved “for them”—it’s an opportunity to create better designed, higher-quality, more connected communities for everyone.

The Kelsey Ayer Station will demonstrate what’s possible when people, funding, and cities come together with a shared commitment to inclusion. With help from companies like Google and cities like San Jose, we’re well on our way, and we’re confident that their support will attract others to step up to make inclusive community a reality. 



Boxing coach uses Live Transcribe to connect with at-risk youth

Editor’s note: Anya Karir is a Toronto-based youth boxing coach who uses Google’s accessibility tools to communicate with those around her.

Isolated and alienated. That’s how I’d describe the moment I realized I was deaf. That transition, from just a kid to a deaf person, is so clear in my memory—I was three years old, standing on my balcony on a warm New Delhi evening, watching people go by, and not hearing a sound. I wondered if I was the only deaf person on Earth. I had never met anyone like me.  

My parents sent me to a deaf school where the teachers only spoke Hindi. I noticed adults using large gestures to communicate with me, and in those early years we built a unique language to communicate with one another. When they would say "water" or "milk," they would make a closed fist with a thumb out (like giving a thumbs down), but in this case the thumbs down would point toward the mouth. 

When it was time to enroll in school, there was no sign language available, which made it difficult for me to connect or engage with the other students. That was my “deaf” moment—the moment that all those with accessibility challenges can relate to, where you realize that you are fundamentally different. 

We ended up moving to Canada where I learned American Sign Language. The ability to communicate more freely helped those feelings of isolation slowly fade away. And, today, I’m part of a strong community of deaf people that has helped me to learn, grow and shed the feeling of loneliness. 

While I’ve become more comfortable straddling the communities of both the deaf and those who can hear, there’s still friction when it comes to engaging with those who can’t sign, relying on my cochlear implant (a surgically implanted device that provides a sense of sound through electrical signals), lip reading or cumbersome note-taking. Thankfully, technology is helping to change that. A few months ago, I started to use Google’s accessibility app Live Transcribe, which provides real-time captions when someone is speaking to you. I think of it as a super accurate and personalized note taker in your phone. 


Anya at the boxing gym

I’m a boxing coach for at-risk youth. Imagine you’re in a loud gym: thud, smack, laughter, doors opening and closing. It’s just you and a teenager, learning to communicate with each other: “Move your feet,” “improve your jab,” “take a quick break.” It would be tough enough to give and receive detailed instructions if you could hear, but bring in the loud noises interrupting conversation and it’s nearly impossible at times. In my case, Live Transcribe helps me listen to the kids in a noisy environment; it also detects ambient noises, which gives me important situational context. Success in boxing is measured by one’s ability to give and receive punches, and technology like this helps me truly engage in the ring so I can help these kids roll with the punches and rise to the top, inside and outside of the gym.

I look forward to seeing how technology will continue to build inclusion and nurture our community. It’s something my three-year-old self would have wanted, and something I’m excited that three-year-olds of this generation will experience. 

Source: Android


Using personal experience to make Chromebooks accessible

David Tseng has dedicated his career to using technology to break down barriers for people with disabilities. At Google, he’s the Technical Lead for Chrome OS accessibility services, which means that his team makes Chromebooks easier to use for people with a wide range of disabilities. In honor of Disability Awareness Month, we sat down with David to hear more about his experiences making Chromebooks more accessible. 

What led you to a career in tech and accessibility?

I happen to be blind myself, so I grew up closely tied to technology. My “pen and paper” consisted of digital braille displays. My textbooks and exams came in digital formats even when my sighted peers used the usual physical variety. My interactions with computers meant listening to computerized text-to-speech. Looking back, all of this nudged me to wonder how these crucial pieces of my daily life worked, and led me to study them in college and beyond.

My interest specifically in Chromebook accessibility stems from this personal passion. In large part, it comes from the fact that I not only use my Chromebook every day to accomplish all sorts of tasks at home and at work, but am also an engineer with the expertise to make those very products more helpful. When you work on something like accessibility it can be challenging, because the user population has specific and detailed needs that aren’t always obvious or intuitive. It’s these challenges that motivate me. I’ve always thought that opportunities are boundless with software, and I still believe that today.

What does Disability Awareness Month mean to you?

I’ve always been eager to share with people the resources we have available through technology. For me, technology has served as a way to level the playing field. Now that so many of us have devices in our pockets at all times, we can move around more easily with our mobile phones, read our own mail, identify colors, and recognize people’s faces and their expressions. There are so many wonderful and empowering things we can do with technology that can help us all lead fulfilling and independent lives.

What's the best part of your job?

I love getting to lead the creation of features that tangibly make Chromebook better for users with disabilities, and also make Chromebook better for everyone. My team and I have the opportunity to create features for Chromebook like ChromeVox, which enables blind and low-vision users to navigate the screen with spoken audio feedback or with a connected braille display. This feature is personally meaningful to me, since I use it during my day-to-day work. 

David using ChromeVox on his Chromebook at work

My team and I have also developed Dictation on Chromebook, which allows a user to input text into any field on a Chromebook using their voice. This is especially useful not only for people with dexterity impairments, but also for anyone who wants to take a break from using their keyboard on Chromebook to type with their voice.  

Our team is on a journey to make Chromebook as strong as possible for people with disabilities. Over the past couple of months, we dramatically improved the usability of Automatic Clicks, where users can set the cursor to automatically click or take action when it stops moving for a certain amount of time, something that can be helpful for users with motor or dexterity challenges. 

I believe that accessibility is a mindset that can be integrated into any aspect of technology. Whether you're interested in machine learning, graphics, operating systems, hardware or gaming, there’s probably a pressing need for inclusive design. 

To learn more about how to turn on the accessibility features that work best for your needs on Chromebook, check out the Chromebook accessibility help page.