Tag Archives: accessibility

Chromebook accessibility tools for distance learning

Around the world, 1.5 billion students are now adjusting to learning from home. For students with disabilities, this adjustment is even more difficult without hands-on classroom instruction and support from teachers and learning specialists.

For educators and families using Chromebooks, there is a variety of built-in accessibility features that customize students’ learning experience and make these devices even more helpful. We’ve put together a list of some of these tools to explore as you navigate at-home learning for students with disabilities.

Supporting students with low vision

To help students see screens more easily, you can find instructions for locating and turning on several Chromebook accessibility features in this Chromebook Help article. Here are a few examples of things you can try, based on students’ needs:

  • Increase the size of the cursor, or increase text size for better visibility. 

  • Add a highlighted circle around the cursor when moving the mouse, the text caret when typing, or the keyboard-focused item when tabbing. These colorful rings appear when the items are in motion to draw greater visual focus, and then fade away.

  • For students with light sensitivity or eye strain, you can turn on high-contrast mode to invert colors across the Chromebook (or add this Chrome extension for web browsing in high contrast).

  • Increase the size of browser or app content, or make everything on the screen—including app icons and Chrome tabs—larger for greater visibility. 

  • For higher levels of zoom, try the fullscreen or docked magnifiers in Chromebook accessibility settings. The fullscreen magnifier zooms the entire screen, whereas the docked magnifier turns the top one-third of the screen into a magnified area. Learn more in this Chromebook magnification tutorial.


Helping students read and understand text

Features that read text out loud can be useful for students with visual impairments, learning and processing challenges, or even students learning a new language.

  • Select-to-speak lets students hear the text they choose on-screen spoken out loud, with word-by-word visual highlighting for better audio and visual connection.

  • With ChromeVox, the built-in screen reader for Chromebooks, students can navigate the Chromebook interface using spoken feedback or braille. To hear whatever text is under the cursor, turn on Speak text under the mouse in ChromeVox options. This is most beneficial for students who have significant vision loss.

  • Add the Read&Write Chrome extension from Texthelp for spelling and grammar checks, talking and picture dictionaries, text-to-speech, and additional reading and writing supports, all in one easy-to-use toolbar.

  • For students with dyslexia, try the OpenDyslexic Font Chrome extension to replace web page fonts with a more readable font. Or use the BeeLine Reader Chrome extension to color-code text to reduce eye strain and help students better track from one line of text to the next. You can also use the Thomas Jockin font in Google Docs, Sheets and Slides.

Guiding students with writing challenges or mobility impairments

Students can continue to develop writing skills while they’re learning from home.

  • Students can use their voice to enter text by enabling dictation in Chromebook accessibility settings, which works in edit fields across the device. If dictating longer assignments, students can also use voice typing in Google Docs to access a rich set of editing and formatting voice commands. Dictating writing assignments can also be very helpful for students who get a little stuck and want to get thoughts flowing by speaking instead of typing. 

  • Students with mobility impairments can use features like the on-screen keyboard to type using a mouse or pointer device, or automatic clicks to hover over items to click or scroll.

  • Try the Co:Writer Chrome extension for word prediction, word completion, and grammar support. Don Johnston is offering free access to this and other eLearning tools; districts, schools, and education practitioners can submit a request for access.

How to get started with Chromebook accessibility tools

We just shared a 12-part video series of training on G Suite and Chromebook accessibility features, made by teachers for teachers. These videos highlight teachers’ experiences using these features in the classroom, as well as which types of learners each feature benefits. For more, you can watch these videos from the Google team, read our G Suite accessibility user guide, or join a Google Group to ask questions and get real-time answers. To find great accessibility apps and ideas on how to use them, check out the Chromebook App Hub, and for training, head to the Teacher Center.


We’re also eager to hear your ideas—leave your thoughts in this Google Form and help educators benefit from your experience.

Accessibility improvements for Google Docs

Quick launch summary 

We’re making several improvements to accessibility features in Google Docs. Some enhancements you’ll notice are:

  • Improvements in how screen readers verbalize content, including for non-text information like comments and suggestions. 
  • Improvements in how Braille displays render content, including symbols, emojis, and other glyphs. 
  • Improved support for navigating through elements such as tables, headers, and footers.
  • Improved caret tracking. 
We hope these improvements make it easier for users of assistive technologies to work in Google Docs.

Getting started 

Admins: There is no admin action required for this feature.

End users: These improvements will be automatically available to end users. Visit the Help Center to learn more about accessibility for Docs editors.

Availability 



  • Available to all G Suite customers and users with personal Google Accounts 

How Tim Shaw regained his voice

His entire life, Tim Shaw dedicated himself to football and dreamed of playing professionally. At 23, his dream came true when he was drafted and spent six years as an NFL linebacker. Then, in 2013, Tim felt his body begin to change. It started with small muscle twitches or bicep spasms; once, a gallon of milk slipped out of his hand while he was unloading groceries. During a game when he was perfectly positioned to tackle his opponent, his arm couldn’t hang on and the player slid past. His performance kept inexplicably declining and just before the 2013 season, Tim was cut from the Tennessee Titans.


Five months later, Tim was diagnosed with Amyotrophic Lateral Sclerosis (ALS, also known as Lou Gehrig’s disease). With no known cause or cure, ALS not only impacts movement, but can make speaking, swallowing and even breathing difficult. Through our partnership with the ALS Therapy Development Institute, we met Tim and learned that the inability to communicate was one of the hardest parts of living with the disease. We showcase Tim’s journey in the new YouTube Originals learning series “The Age of A.I.” hosted by Robert Downey Jr.


For many people with ALS, losing their voice can be one of the most devastating aspects of the disease. But technology has the potential to help. Earlier this year, we announced a research project called Project Euphonia, which aims to use AI to improve communication for people who have impaired speech caused by neurologic conditions, including ALS. When we heard Tim's story, we thought we might have a way to help him regain a part of identity he'd lost—his voice. 


Current text-to-speech technology requires at least 30-40 minutes of recordings to create a high-quality synthetic voice—which people with ALS don’t always have. In Tim’s case, though, we were able to pull together a bank of voice samples from the many interviews he had done while playing in the NFL. The DeepMind, Google AI and Project Euphonia teams created tools that were able to take these recordings and use them to create a voice that resembles how Tim sounded before his speech degraded; he was even able to use the voice to read out the letter he’d recently written to his younger self. While it lacks the expressiveness, quirks and controllability of a real voice, it shows that this technology holds promise.


"It has been so long since I've sounded like that, I feel like a new person,” Tim said when he first heard his recreated voice. “I felt like a missing part was put back in place. It's amazing." 


In the aforementioned letter, Tim told his younger self to “wake up every day and choose to make a positive impact on other people.” Our research and work with Tim makes us hopeful we can do just that by improving communication systems and ultimately giving people with impaired speech more independence. You can learn more about our project with Tim and the vital role he played in our research in “The Age of A.I.” now streaming on YouTube.com/Learning.

How ultrasound sensing makes Nest displays more accessible

Last year, I gave my 74-year-old father a Nest Hub for Christmas. Over the following months, I noticed he would often walk up to the device to read the information on the screen, because he couldn’t see it easily from across the room. I wondered if other people were having the same issue. 

My team at Google Nest and I started having conversations with older adults in our lives who use our products, asking them questions about ways they use their devices and observing how they interact with them. In the course of our research, we learned that one in three people over the age of 65 has a vision-reducing eye disease, on top of the millions of people of all ages who deal with some form of vision impairment.

We wanted to create a better experience for people who have low vision. So we set out to create a way for more people to easily see our display from any distance in a room, without compromising the useful information the display could show when nearby. The result is a feature we call ultrasound sensing. 

We needed to find a sensing technology that could detect whether you were close to a device or far away from it and show you the right things based on that distance, while protecting people’s privacy. Our engineers landed on one that was completely new to Google Assistant products, but has been used in the animal kingdom for eons: echolocation. 

Animals with low vision—like bats and dolphins—use echolocation to understand and navigate their environments. Bats emit ultrasonic “chirps” and listen to how those chirps bounce off of objects in their environments and travel back to them. In the same way, Nest Hub and Nest Hub Max emit inaudible sound waves to gauge your proximity to the device. If you’re close, the screen will show you more details and touch controls, and when you’re further away, the screen changes to show only the most important information in larger text. Ultrasound sensing allows our smart displays to react to a user’s distance. 

Directions on a Nest Hub

Ultrasound sensing allows your display to show the most important information when you’re far away, like your total commute time, and show more detail as you get close to the device.
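To make the mechanism concrete, here is a minimal sketch of how an echo-based distance estimate could drive which layout the screen shows. The time-of-flight conversion is standard physics; the threshold value and function names are illustrative assumptions, not Nest’s production logic.

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at room temperature


def estimate_distance_m(echo_delay_s: float) -> float:
    """Convert a round-trip ultrasonic echo delay into a one-way distance estimate."""
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0


def choose_layout(echo_delay_s: float, near_threshold_m: float = 1.5) -> str:
    """Pick a screen layout based on how far away the detected motion is.

    The 1.5 m threshold is an illustrative value, not the tuning Nest uses.
    """
    if estimate_distance_m(echo_delay_s) <= near_threshold_m:
        return "detailed"    # full details and touch controls
    return "glanceable"      # only the most important info, in larger text
```

For example, an echo that returns after about 12 milliseconds corresponds to roughly two meters, so `choose_layout(0.012)` picks the glanceable layout.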

To develop the right screen designs, the team tested varying text heights, contrast levels and information density, and measured the ease with which people could read what’s on the screen. It was refreshing when, regardless of age or visual impairment, testers would make comments like, “It just feels easier to read.” It turned out that designing for people with low vision improved the experience for everyone.

Ultrasound testing

Testing ultrasound sensing during the design process.

Ultrasound waves

What ultrasound sensing “sees” on a smart display.

Ultrasound sensing already works for timers, commute times and weather. And over the coming week, your devices will also begin to show reminders, appointments and alerts when you approach the display. Because this is using a low-resolution sensing technology, ultrasound sensing happens entirely on the device and is only able to detect large-scale motion (like a person moving), without being able to identify who the person is.

After we built the ultrasound sensing feature, I tested it with my dad. As soon as I saw him reading his cooking timer on the screen from across the kitchen, I knew we’d made something that would make our devices even more helpful to more people. 

How I’m making Maps better for wheelchair users like me

If you visit a city and don’t see anyone using a wheelchair, it doesn’t mean they’re not there. It means the city hasn’t been built in such a way as to let them be part of things. I know this firsthand: I’m one of 65 million people around the world who uses a wheelchair, and I see every day how a city’s infrastructure can prevent people like me from being active, visible members of society.

On July 29, 2009, I was taking my usual morning walk through New York’s Central Park when a dead tree branch snapped and fell on my head. The spinal damage partly paralyzed my lower body. I spent the next seven months in the hospital, where I got the first glimpse of what my life would be like from then on. I was going to use a wheelchair for the rest of my life—and my experience as a born and bred New Yorker was about to change forever.  

That’s because much of the city isn’t accessible for people like me. Fewer than one in four subway stations in New York City have wheelchair access. And plenty of places, from restaurants to schools, lack a way for me to even get inside. It was humbling to realize these barriers had been there throughout my growing up in New York; I simply hadn’t noticed.

Those realizations were in my mind when I returned to work in 2011 as an engineer on the Search team, especially because I could no longer take my usual subway route to work. However, the more I shared with colleagues, the more I found people who wanted to help solve real-world access needs. Using “20 percent time”—time spent outside day-to-day job descriptions—my colleagues like Rio Akasaka and Dianna Hu pitched in and we launched wheelchair-friendly transit directions. That initial work has now led to a full-time team dedicated to accessibility on Maps.

I’ve also collaborated with another group of great allies, stretching far beyond Google. For the past several years, I’ve worked with our Local Guides, a community of 120 million people worldwide who contribute information to Google Maps. By answering questions like “Does this place have a wheelchair-accessible entrance?”, Local Guides help people with mobility impairments decide where to go. Thanks to them, we can now provide crowdsourced accessibility information for more than 50 million places on Google Maps. At our annual event last year and again several weeks ago, I met some amazing Guides, like Emeka from Nigeria and Ilankovan from Sri Lanka, who have become informal accessibility ambassadors themselves, promoting the inclusion of people with disabilities in their communities around the world.

Today, on International Day of Persons With Disabilities, I hope our work to make Google Maps more inclusive underscores what Angela Glover Blackwell wrote so powerfully about in “The Curb-Cut Effect.” When we build with accessibility in mind, it doesn’t just help people with disabilities. It helps everyone. Curb cuts in sidewalks don’t just help me cross the street—they also help parents pushing strollers, workers with deliveries and tourists with suitcases. As Blackwell puts it, building equity is not a zero-sum game—everyone benefits.

The people in wheelchairs you don’t see in your city? They've been shut out, and may not be able to be a part of society because their environment isn't accessible. And that’s not merely a loss for them. It’s a loss for everyone, including friends, colleagues and loved ones of people with disabilities. I’m grateful to those who stay mindful of the issues faced by people like me to ensure that our solutions truly help the greater community.

Source: Google LatLong


Google Disability Support now includes American Sign Language

There are 466 million people in the world who are deaf or hard of hearing, and products like Live Transcribe and Sound Amplifier help them communicate and interact with others. People with disabilities who need specialized technical support for Google’s products and services can go to Google Disability Support. Starting today, American Sign Language (ASL) specialists will be available to help people who are deaf or hard of hearing over video chat, with help from Connect Direct through TELUS International.

ASL specialists are available Monday through Friday from 8:00 a.m. to 5:00 p.m. PT to answer questions about assistive features and functionality within Google’s products. For example, an ASL specialist could show you how to set up your new Pixel using Live Caption or how to present Google Slides with captions.

The Google Disability Support team is composed of inclusion advocates who are eager to work with the community and Googlers to improve and shape Google’s products with feedback from the people who use our products. Visit the Google Accessibility Help Center to learn more about Google Accessibility and head to g.co/disabilitysupport to connect with an ASL specialist today.

11 ways Google is making life more accessible

On December 3, 1992, the United Nations founded the International Day of Persons with Disabilities to promote the well-being of people who have disabilities. At Google, we’re doing this by emphasizing accessibility-first design and partnering with communities directly so we can create the most helpful products. This year, we launched a few products and features with the goal of making them more accessible. Here are a few ways anyone, but especially people with disabilities, can use these tools.

1. An important conversation is happening, but it’s difficult to follow and you wish someone could transcribe it in real-time. 

With Live Transcribe, you can get real-time transcriptions of conversations that are going on around you, even if they’re in another language. 

2. You and your friends are talking about weekend plans, but it’s too loud for you to hear them.

It’s a good thing you downloaded Sound Amplifier from the Google Play Store. Open it, pull out your headphones and get the audio boost you need. 

3. You were challenged to play Chessback, but you’re wondering if you’ll be able to fully experience the game. 

By selecting the “blind-friendly” filter in Google Play, you can quickly identify games that work well with your screen reader or are entirely audio-based. 

4. Someone just handed you a piece of paper, but you’re not sure what it says. 

Just say “Hey Google, open Lookout,” raise your phone over the paper, and wait for the AI-powered app to read out the information to you. If you have trouble, just say “Hey Google, open Be My Eyes for Google” and get connected to someone who can help.

5. You’re in a new city and want more help navigating your way on foot to a must-visit museum. 

If you’re in the U.S. or Japan, plug in your headphones and turn on Detailed Voice Guidance in the “Navigation” setting of Maps. Then you’ll get updates about when your next turn is, consistent assurance you’re on the right route, and a heads up when you’re coming up to a busy road. 

6. You want to watch your favorite show on your phone but can’t figure out all the steps you need to go through to access it. 

We’re working on an app called Action Blocks to help you (or anyone you care for who has a cognitive disability) turn multiple actions into one personalized icon on your phone. So you can watch your favorite show and do other tasks simply by clicking on an image that denotes the action you’re trying to complete. 

7. An adorable photo of a puppy would totally spruce up your email, but your screen reader keeps picking up “unlabeled image.”

Turn on “Get Image Descriptions from Google” to start using Chrome’s AI-powered feature to get alt-text automatically generated on millions of previously unlabeled images. 

8. Your mom just sent you a video of your cousin announcing something, but you can’t hear the audio.

Go ahead and touch the Live Caption icon near volume control and turn your phone into a personal, pocket-sized captioning system (currently available on Pixel 3 and Pixel 4).  

9. It’s an emergency: You dial 911, but you can’t speak to the operator.

The most important thing in this situation is getting the help you need. We’re working on Emergency Assisted Dialing to help anyone communicate through text-to-speech and share information with a 911 operator, whether or not they can speak.

10. You and your grandma are on the phone trying to find time to schedule a visit, but hearing or speaking, especially on the phone, are difficult for you.

A research project called Live Relay is working to create a feature to make it easier for you to use text-to-speech or speech-to-text on your phone to communicate when you aren’t able to speak or hear a conversation. 

11. You’re a developer who wants to start creating more inclusive products for people with disabilities.

Accessibility is something that should be emphasized from the beginning of development. Visit our developer guidelines for in-depth examples of how to make your apps and websites more accessible. 

We hope these tips help you get the most out of your Google devices and apps, as well as give you a peek into what we’re thinking about for the future. 

Visit the Google Accessibility Help Center to learn more about Google Accessibility and head to g.co/disabilitysupport to connect with a Disability Support specialist. 

Chord Assist makes playing the guitar more accessible

Joe Birch, a developer based in the UK, has a genetic condition that causes low vision. He grew up playing music, but he knows it’s not easy for people who have visual impairments or hearing loss to learn how to play. 

He wanted to change that, so he created Chord Assist, which aims to make learning the guitar more accessible for people who are blind, Deaf and mute. It gives instructions on how to play the guitar through braille, a speaker or visuals on a screen, allowing people to have a conversation to learn to play a certain chord.

Chord Assist is powered by Actions on Google, the platform that lets developers build conversational Actions for the Google Assistant. The guitar is used as a conversational tool, allowing the student to learn a chord by simply saying, “Show me how to play a G chord,” for example. The guitar understands the request, and then gives either a voice output or a braille output, depending on the need.
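As a rough illustration of that request-and-response flow, the sketch below maps a parsed chord name to an output modality. It is a hypothetical Python example, not Joe’s implementation or the Actions on Google API; the chord fingerings are standard open shapes included for illustration.

```python
# Standard open-chord fingerings, low E string to high E string ("x" = don't play).
CHORD_FINGERINGS = {
    "G": "3-2-0-0-0-3",
    "C": "x-3-2-0-1-0",
    "D": "x-x-0-2-3-2",
}


def chord_assist_response(chord: str, mode: str) -> str:
    """Answer a "show me how to play ..." request in the chosen modality.

    `mode` is "voice", "screen", or "braille"; the braille branch is a placeholder
    for whatever encoding the device's braille output actually uses.
    """
    fingering = CHORD_FINGERINGS.get(chord.upper())
    if fingering is None:
        return f"Sorry, I don't know the {chord} chord yet."
    if mode == "voice":
        return f"To play {chord.upper()}, fret the strings {fingering}, from low to high."
    if mode == "braille":
        return f"[braille output for {chord.upper()}: {fingering}]"
    return f"{chord.upper()}: {fingering}"  # default: show on the screen
```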

“I love seeing people pushing the boundaries and breaking the expectations of others,” Joe says. “When someone builds an innovative project that can change the lives of others, it inspires me to achieve the things that I am passionate about. That’s what this whole developer community is really all about, we are here to inspire each other.” 

With the emergence of new technology and easy-to-access educational resources, it’s easier than ever to become a developer. The developer community is global, and is made up of people from all walks of life and backgrounds, with one thing in common—using technology to take an idea and turn it into reality. 


That is what the Google Developers Experts program aims to do by connecting 700 outstanding developers around the world. They gather to share the skills they’ve mastered through application development, podcasts, public speaking and bringing technology to local communities. Each Google Developers Expert has experience and expertise in one or more specific Google technologies.

Joe is a GDE focused on Actions on Google and Android, and has been an engineer for seven years. “Being a GDE allows me to fulfill my passion for both technology and education,” Joe says. “I learned so much by following designers and developers online. Seeing the cool work that these people are doing helps to fuel my brain and inspire me for the next idea that I might have.”


Google Classroom accessibility empowers inclusive learning

Grace is a 5th grader at Village Elementary School near San Diego, CA. As a student who is blind, she’s used to using multiple pieces of equipment or having an aide support her. But when she started using Google Classroom with a screen reader, “it opened up a whole world for her,” according to Grace’s mom. She is now able to participate confidently alongside her sighted peers. 

Many tools in G Suite have accessibility features built in, including screen readers, voice typing, and braille displays—and Classroom is no different. It helps teachers create and organize assignments quickly, provide feedback efficiently, and easily communicate with students and guardians. Classroom is now used by 40 million students and educators globally, each of whom learns and teaches in a unique way. 

Grace is one story of a student excelling in her class with the support of technology, and we’d love to hear from you about the tools you’re using to support all learners. To learn more about the accessibility features built into G Suite and Chromebooks, head to edu.google.com/accessibility.

On-Device Captioning with Live Caption



Captions for audio content are essential for the deaf and hard of hearing, but they benefit everyone. Watching video without audio is common — whether on the train, in meetings, in bed or when the kids are asleep — and studies have shown that subtitles can increase the duration of time that users spend watching a video by almost 40%. Yet caption support is fragmented across apps and even within them, resulting in a significant amount of audio content that remains inaccessible, including live blogs, podcasts, personal videos, audio messages, social media and others.
Recently we introduced Live Caption, a new Android feature that automatically captions media playing on your phone. The captioning happens in real time, completely on-device, without using network resources, thus preserving privacy and lowering latency. The feature is currently available on Pixel 4 and Pixel 4 XL, will roll out to Pixel 3 models later this year, and will be more widely available on other Android devices soon.
When media is playing, Live Caption can be launched with a single tap from the volume control to display a caption box on the screen.
Building Live Caption for Accuracy and Efficiency
Live Caption works through a combination of three on-device deep learning models: a recurrent neural network (RNN) sequence-transduction model for speech recognition (RNN-T), a text-based recurrent neural network model for unspoken punctuation, and a convolutional neural network (CNN) model for sound event classification. Live Caption integrates the signals from the three models to create a single caption track, where sound event tags, like [APPLAUSE] and [MUSIC], appear without interrupting the flow of speech recognition results. Punctuation symbols are predicted while text is updated in parallel.
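As a toy illustration of that integration step (not the production Live Caption code), a merge function might append sound-event tags alongside the punctuated speech text so the tags never interrupt the recognition results:

```python
from typing import Callable, Optional


def merge_caption(asr_text: str, sound_tag: Optional[str], punctuate: Callable[[str], str]) -> str:
    """Combine ASR text, an optional sound-event tag, and punctuation into one caption line.

    `punctuate` stands in for the text-based punctuation model; `sound_tag` is a
    label such as "[MUSIC]" or "[APPLAUSE]" from the sound-event classifier.
    """
    caption = punctuate(asr_text) if asr_text else ""
    if sound_tag:
        # Tags are appended next to the text rather than replacing it, so the
        # flow of speech recognition results is never interrupted.
        caption = f"{caption} {sound_tag}".strip()
    return caption
```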

Incoming sound is processed through a Sound Recognition and ASR feedback loop. The produced text or sound label is formatted and added to the caption.
For sound recognition, we leverage previous work on sound event detection, using a model built on top of the AudioSet dataset. The Sound Recognition model is used not only to generate popular sound-effect labels but also to detect speech periods. The full automatic speech recognition (ASR) RNN-T engine runs only during speech periods, in order to minimize memory and battery usage. For example, when music is detected and speech is not present in the audio stream, the [MUSIC] label appears on screen and the ASR model is unloaded. The ASR model is only loaded back into memory when speech is present in the audio stream again.
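The sketch below illustrates that load/unload policy under simplified assumptions; the class and method names are hypothetical, not the on-device implementation:

```python
class SpeechGatedCaptioner:
    """Run the ASR model only while the sound classifier reports speech."""

    def __init__(self, classify_sound, load_asr_model):
        self.classify_sound = classify_sound  # CNN sound-event classifier
        self.load_asr_model = load_asr_model  # loads the RNN-T ASR model into memory
        self.asr = None                       # ASR stays unloaded outside speech periods

    def process_frame(self, audio_frame) -> str:
        label = self.classify_sound(audio_frame)  # e.g. "SPEECH", "MUSIC", "APPLAUSE"
        if label == "SPEECH":
            if self.asr is None:
                self.asr = self.load_asr_model()  # lazily load ASR when speech starts
            return self.asr.transcribe(audio_frame)
        # Non-speech audio: unload ASR to save memory and battery, show a tag instead.
        self.asr = None
        return f"[{label}]"
```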

In order for Live Caption to be most useful, it should be able to run continuously for long periods of time. To do this, Live Caption’s ASR model is optimized for edge devices using several techniques, such as neural connection pruning, which reduced power consumption to 50% of that of the full-sized speech model. While the model is significantly more energy efficient, it still performs well for a variety of use cases, including captioning videos, recognizing short queries and narrowband telephony speech, and remaining robust to background noise.
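The post does not describe the exact pruning setup, but as a rough sketch, magnitude-based weight pruning with the TensorFlow Model Optimization Toolkit looks like the following; the sparsity schedule values are placeholders, not Live Caption’s configuration:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot


def prune_for_edge(model: tf.keras.Model, end_step: int) -> tf.keras.Model:
    """Wrap a Keras model so low-magnitude connections are progressively zeroed out."""
    schedule = tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.30,  # illustrative values, not Live Caption's settings
        final_sparsity=0.80,
        begin_step=0,
        end_step=end_step,
    )
    return tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)


# After fine-tuning with the pruning callbacks, strip the wrappers and convert for on-device use:
#   model = tfmot.sparsity.keras.strip_pruning(pruned_model)
#   tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
```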

The text-based punctuation model was optimized for running continuously on-device using a smaller architecture than the cloud equivalent, and then quantized and serialized using the TensorFlow Lite runtime. As the caption is formed, speech recognition results are rapidly updated a few times per second. In order to save on computational resources and provide a smooth user experience, the punctuation prediction is performed on the tail of the text from the most recently recognized sentence, and if the next updated ASR results do not change that text, the previously punctuated results are retained and reused.
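A simplified sketch of that caching behavior (the names are hypothetical): the punctuation model is re-run only when the tail of the ASR hypothesis actually changes, and the previous prediction is reused otherwise.

```python
class TailPunctuationCache:
    """Re-run the punctuation model only when the ASR tail text changes."""

    def __init__(self, punctuation_model):
        self.punctuation_model = punctuation_model  # e.g. the quantized TFLite model
        self._last_tail = None
        self._last_punctuated = ""

    def punctuate_tail(self, tail_text: str) -> str:
        """`tail_text` is the most recently recognized sentence from the ASR stream."""
        if tail_text == self._last_tail:
            return self._last_punctuated            # unchanged tail: reuse the result
        self._last_tail = tail_text
        self._last_punctuated = self.punctuation_model(tail_text)
        return self._last_punctuated
```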

Looking forward
Live Caption is now available in English on Pixel 4 and will soon be available on Pixel 3 and other Android devices. We look forward to bringing this feature to more users by expanding support to other languages and by refining the caption formatting to improve perceived accuracy and coherency, particularly for multi-speaker content.

Acknowledgements
The core team includes Robert Berry, Anthony Tripaldi, Danielle Cohen, Anna Belozovsky, Yoni Tsafir, Elliott Burford, Justin Lee, Kelsie Van Deman, Nicole Bleuel, Brian Kemler, and Benny Schlesinger. We would like to thank the Google Speech team, especially Qiao Liang, Arun Narayanan, and Rohit Prabhavalkar for their insightful work on the ASR model as well as Chung-Cheng Chiu from Google Brain Team; Dan Ellis and Justin Paul for their help with integrating the Sound Recognition model; Tal Remez for his help in developing the punctuation model; Kevin Rocard and Eric Laurent‎ for their help with the Android audio capture API; and Eugenio Marchiori, Shivanker Goel, Ye Wen, Jay Yoo, Asela Gunawardana, and Tom Hume for their help with the Android infrastructure work.

Source: Google AI Blog