Tag Archives: accessibility

Building for all learners with new apps, tools, and resources

Everyone deserves access to a quality education—no matter your background, where you live, or your abilities. We’re recognizing this on Global Accessibility Awareness Day, an effort to promote digital accessibility and inclusion for people with disabilities, by sharing new features, training, and partners, along with the many new products announced at Google I/O.

Since everyone learns in different ways, we design technology that can adapt to a broad range of needs and learning styles. For example, you can now add captions in Slides and turn on live captions in Hangouts Meet, and we’ve improved discoverability in the G Suite toolbar. By making these features available—with even more in the works—teachers can help students learn in ways that work best for them.

Working with our partners to expand access

We’re not the only ones trying to make learning more accessible, so we’ve partnered with companies who are building apps to make it easier for teachers to communicate with all students.

One of our partners, Crick Software, just launched Clicker Communicator, a child-friendly communication tool for the classroom. It bridges the gap between students' needs and wants and curriculum access, empowers non-verbal students to initiate and lead conversations, and enables proactive participation in the classroom. It's one of the first augmentative and alternative communication (AAC) apps created specifically for Chromebook users.

Learn more about Clicker Communicator, an AAC app for Chromebooks.

Assessing with accessibility in mind

Teachers use locked mode, available only on managed Chromebooks, when giving Quizzes in Google Forms to eliminate distractions. Locked mode is now used millions of times per month, and many students use additional apps for accommodations when taking quizzes. We've been working with developers to make sure their tools work with locked mode. One of those developers is our partner Texthelp®. Soon, when you enable locked mode in Quizzes in Google Forms, your students will still be able to access the Read&Write for Google Chrome and EquatIO® for Google tools they rely on daily.

Another partner, Don Johnston, supports students with apps including Co:Writer for word prediction, translation, and speech recognition, and Snap&Read for read-aloud, highlighting, and note-taking. Students signed in to these extensions can use them on the quiz, even in locked mode. This integration will be rolling out over the next couple of weeks.

Learn more about the accessibility features available in locked mode, including ChromeVox, select-to-speak, and visual aids including high contrast mode and magnifiers.

Tools, training, and more resources

Assistive technology has the power to transform learning for more students, but educators need training, support, and tutorials to help their students get the most from the technology.

The new Accessibility section on our Google for Education website has information on Chromebooks and G Suite for Education, a module on our Teacher Center and printable flashcards, and EDU in 90 YouTube videos on G Suite and Chromebook accessibility features. Check out our accessibility tools and find training on how to use them to create more engaging, accessible learning experiences.

EDU in 90 video of Chromebook accessibility features

Watch the EDU in 90 episode on Chromebook accessibility features.

We love hearing stories of how technology is making learning more accessible for more learners, so please share how you're using accessibility tools to support all types of learners, along with requests for how we can continue to improve to meet their needs.

Make your smart home more accessible with new tutorials

I’m legally blind, so from the moment I pop out of bed each morning, I use technology to help me go about my day. When I wake up, I ask my Google Assistant for my custom-made morning Routine, which turns on my lights, reads my calendar and plays the news. I use other products as well, like screen readers and a refreshable braille display, to help me be as productive as possible.

I bring my understanding of what it's like to have a disability to work with me, where I lead accessibility for Google Search, Google News and the Google Assistant. I work with cross-functional teams to help fulfill Google’s mission of building products for everyone—including those of us in the disabled community.

The Assistant can be particularly useful for helping people with disabilities get things done. So today, Global Accessibility Awareness Day, we’re releasing a series of how-to videos with visual and audible directions, designed to help the accessibility community set up and get the most out of their Assistant-enabled smart devices.

You can find step-by-step tutorials to learn how to interact with your Assistant, from setting up your Assistant-enabled device to using your voice to control your home appliances, at our YouTube playlist which we’ll continue to update throughout the year.

Intro to Assistant Accessibility Videos

This playlist came out of conversations within the team about how we can use our products to make life a little easier. Many of us have some form of disability, or have a friend, co-worker or family member who does. For example, Stephanie Wilson, an engineer on the Google Home team, helped set up her parents’ smart home after her dad was diagnosed with Parkinson’s disease.

In addition to listening to our own teammates, we're always listening to suggestions from the broader community on how we can make our products more accessible. Last week at I/O, we showed how we're making the Google Assistant more accessible, how we're using AI to improve products for people with speech impairments, and how Live Caption in Android Q gives the Deaf community automatic captions for media that's playing audio on your phone. All these changes were made after receiving feedback from people like you.

Head over to our Accessibility website to learn more, and if you have questions or feedback on accessibility within Google products, please share your feedback with us via our dedicated Disability Support team.

New features to make audio more accessible on your phone

Smartphones are key to helping all of us get through our days, from getting directions to translating a word. But for people with disabilities, phones have the potential to do even more to connect people to information and help them perform everyday tasks. We want Android to work for all users, no matter their abilities. And on Global Accessibility Awareness Day, we’re taking another step toward this aim with updates to Live Transcribe, coming next month.


Available on 1.8 billion Android devices, Live Transcribe helps bridge the gap between the deaf and the hearing with real-time, real-world transcriptions of everyday conversations. With this update, we're building on our machine learning and speech recognition technology to add new capabilities.


First, Live Transcribe will now show you sound events in addition to transcribing speech. You can see, for example, when a dog is barking or when someone is knocking on your door. Seeing sound events allows you to be more immersed in the non-conversation realm of audio and helps you understand what is happening in the world. This is important to those who may not be able to hear non-speech audio cues such as clapping, laughter, music, or the sound of a speeding vehicle whizzing by.
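
Live Transcribe does this classification on-device inside the Android app, and its model isn't something you call directly. As a rough illustration of the underlying idea of classifying non-speech audio events from a waveform, here is a minimal Python sketch using the publicly available YAMNet model from TensorFlow Hub; the 16 kHz mono file name is a placeholder.

```python
# Illustrative sketch only: Live Transcribe's sound-event detection runs on-device
# inside the Android app. This shows the general idea with the public YAMNet model,
# which scores a waveform against 521 audio-event classes (e.g. "Knock", "Bark").
import csv

import numpy as np
import soundfile as sf
import tensorflow_hub as hub

# Load the pretrained YAMNet audio-event classifier from TensorFlow Hub.
model = hub.load("https://tfhub.dev/google/yamnet/1")

# Load a mono recording; YAMNet expects 16 kHz float32 samples in [-1, 1].
# "doorbell.wav" is a placeholder file name.
waveform, sample_rate = sf.read("doorbell.wav", dtype="float32")
assert sample_rate == 16000, "resample the audio to 16 kHz first"

# scores has shape [num_frames, 521]; average over time and take the top class.
scores, embeddings, spectrogram = model(waveform)
mean_scores = scores.numpy().mean(axis=0)

# Map the class index back to a human-readable label using the model's class map.
with open(model.class_map_path().numpy().decode()) as f:
    class_names = [row["display_name"] for row in csv.DictReader(f)]

print("Detected sound event:", class_names[int(np.argmax(mean_scores))])
```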


Second, you'll now be able to copy and save transcripts, stored locally on your device for three days. This is useful not only for those with deafness or hearing loss; it also helps anyone using real-time transcriptions in other ways, such as people learning a new language, journalists capturing interviews or students taking lecture notes. We've also made the audio visualization indicator bigger, so that users can more easily see the background audio around them.

New features of Live Transcribe

See sound events, like whistling or a dog barking, in the bottom left corner of the updated Live Transcribe.

With billions of active devices powered by Android, we’re humbled by the opportunity to build helpful tools that make the world’s information more accessible in the palm of everyone’s hand. As long as there are barriers for some people, we still have work to do. We’ll continue to release more features to enrich the lives of our accessibility community and the people around them.

How DIVA makes Google Assistant more accessible

My 21-year-old brother Giovanni loves to listen to music and movies. But because he was born with congenital cataracts, Down syndrome and West syndrome, he is non-verbal. This means he relies on our parents and friends to start or stop music or a movie.

Over the years, Giovanni has used everything from DVDs to tablets to YouTube to Chromecast to fill his entertainment needs. But as new voice-driven technologies started to emerge, they also came with a different set of challenges that required him to be able to use his voice or a touchscreen. That’s when I decided to find a way to let my brother control access to his music and movies on voice-driven devices without any help. It was a way for me to give him some independence and autonomy.

Working alongside my colleagues in the Milan Google office, I set up Project DIVA, which stands for DIVersely Assisted. The goal was to create a way to let people like Giovanni trigger commands to the Google Assistant without using their voice. We looked at many different scenarios and methodologies that people could use to trigger commands, like pressing a big button with their chin or their foot, or with a bite.  For several months we brainstormed different approaches and presented them at different accessibility and tech events to get feedback.

We had a bunch of ideas on paper that looked promising. But in order to turn those ideas into something real, we took part in an Alphabet-wide accessibility innovation challenge and built a prototype which went on to win the competition. We identified that many assistive buttons available on the market come with a 3.5mm jack, which is the kind many people have on their wired headphones. For our prototype, we created a box to connect those buttons and convert the signal coming from the button to a command sent to the Google Assistant.
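
Project DIVA's own firmware isn't shown here, but the core idea of translating a physical assistive button into a single Assistant command can be sketched in a few lines. The example below is a hypothetical Raspberry Pi-style mock-up: the GPIO pin number, the command text and the send_assistant_command helper are all illustrative stand-ins, with the real device handling that last hop through Google Assistant Connect.

```python
# Conceptual mock-up, not the Project DIVA firmware: a single assistive button wired
# to a GPIO pin triggers one pre-configured command. Pin number, command text and
# send_assistant_command are illustrative stand-ins; the real device forwards the
# command through Google Assistant Connect.
import time

import RPi.GPIO as GPIO

BUTTON_PIN = 17  # assumption: button (3.5mm jack breakout) wired to BCM pin 17
COMMAND = "play music on the living room speaker"  # the single-purpose command


def send_assistant_command(text):
    """Hypothetical helper standing in for the Assistant Connect integration."""
    print(f"Sending to the Assistant: {text!r}")


def on_button_press(channel):
    # One physical press triggers exactly one command.
    send_assistant_command(COMMAND)


GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
# Debounce so a single press doesn't fire multiple times.
GPIO.add_event_detect(BUTTON_PIN, GPIO.FALLING, callback=on_button_press, bouncetime=300)

try:
    while True:
        time.sleep(1)  # idle; the GPIO callback does the work
finally:
    GPIO.cleanup()
```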

Project DIVA diagram

To move from a prototype to reality, we started working with the team behind Google Assistant Connect, and today we are announcing DIVA at Google I/O 2019.


The real test, however, was giving this to Giovanni to try out. When he touches the button with his hand, the signal is converted into a command sent to the Assistant. Now he can listen to music on the same devices and services our family and all his friends use, and his smile tells the best story.


Getting this to work for Giovanni was just the start for Project DIVA. We started with single-purpose buttons, but this could be extended to more flexible and configurable scenarios. Now, we are investigating attaching RFID tags to objects and associating a command to each tag. That way, a person might have a cartoon puppet trigger a cartoon on the TV, or a physical CD trigger the music on their speaker.


Learn more about the idea behind the DIVA project at our publication site, and learn how to build your own device at our technical site.


Sharing what’s new in Android Q

 This year, Android is reaching version 10 and operating on over 2.5 billion active devices. A lot has changed since version 1.0, back when smartphones were just an early idea. Now, they’re an integral tool in our lives—helping us stay in touch, organize our days or find a restaurant in a new place.

Looking ahead, we're continuing to focus on working with partners to shape the future of mobile and make smartphones even more helpful. As people carry their phones constantly and trust them with lots of personal information, we want to make sure they're always in control of their data and how it's shared. And as people spend more time on their devices, building tools to help them find balance with technology continues to be our priority. That's why we're focusing on three key areas for our next release, Android Q: innovation, security and privacy, and digital wellbeing.

New mobile experiences

Together with over 180 device makers, Android has been at the forefront of new mobile technologies. Many of them—like the first OLED displays, predictive typing, high density and large screens with edge-to-edge glass—have come to Android first. 

This year, new industry trends like foldable phone displays and 5G are pushing the boundaries of what smartphones can do. Android Q is designed to support the potential of foldable devices—from multi-tasking to adapting to different screen dimensions as you unfold the phone. And as the first operating system to support 5G, Android Q offers app developers tools to build for faster connectivity, enhancing experiences like gaming and augmented reality.

We’re also seeing many firsts in software driven by on-device machine learning. One of these features is Live Caption. For 466 million deaf and hard of hearing people around the world, captions are more than a convenience—they make content more accessible. We worked closely with the Deaf community to develop a feature that would improve access to digital media. With a single tap, Live Caption will automatically caption media that’s playing audio on your phone. Live Caption works with videos, podcasts and audio messages, across any app—even content you record yourself. As soon as speech is detected, captions will appear, without ever needing Wi-Fi or mobile data, and without any audio or captions leaving your phone.

On-device machine learning also powers Smart Reply, which is now built into the notification system in Android, allowing any messaging app to suggest replies in notifications. Smart Reply will now also intelligently predict your next action—for example, if someone sends you an address, you can just tap to open that address in Maps.

A phone screen showing a message coming in with an address, and a chip in the notification that opens the address in Google Maps.

Security and privacy as a central focus

Over the years, Android has built out many industry-first security and privacy protections, like file-based encryption, SSL by default and work profile. Android has the most widely-deployed security and anti-malware service of any operating system today thanks to Google Play Protect, which scans over 50 billion apps every day. 

We’re doing even more in Android Q, with almost 50 new features and changes focused on security and privacy. For example, we created a dedicated Privacy section under Settings, where you’ll find important controls in one place. Under Settings, you’ll also find a new Location section that gives you more transparency and granular control over the location data you share with apps. You can now choose to share location data with apps only while they’re in use. Plus, you’ll receive reminders when an app has your location in the background, so you can decide whether or not to continue sharing. Android Q also provides protections for other sensitive device information, like serial numbers.

Finally, we're introducing a way for you to get the latest security and privacy updates, faster. With Android Q, we’ll update important OS components in the background, similar to the way we update apps. This means that you can get the latest security fixes, privacy enhancements and consistency improvements as soon as they’re available, without having to reboot your phone.

Helping you find balance

Since creating our set of Digital Wellbeing tools last year, we’ve heard that they’ve helped you take better control of your phone usage. In fact, app timers helped people stick to their goals over 90 percent of the time, and people who use Wind Down had a 27 percent drop in nightly phone usage.

This year, we’re going even further with new features like Focus mode, which is designed to help you focus without distraction. You can select the apps you find distracting—such as email or the news—and silence them until you come out of Focus mode. And to help children and families find a better balance with technology, we’re making Family Link part of every device that has Digital Wellbeing (starting with Android Q), plus adding top-requested features like bonus time and the ability to set app-specific time limits.

Phone screens showing new Family Link controls in Android Q.

Available in Beta today

Android Q brings many more new features to your smartphone, from a new gesture-based navigation to Dark Theme (you asked, we listened!) to streaming media to hearing aids using Bluetooth LE. 

A grid of logos that demonstrates which devices and brands Android Q beta is available on, including Pixel, Sony, Nokia, Huawei and LG.

You can find some of these features today in Android Q Beta. Thanks to Project Treble and our partners' commitment to faster platform updates, the Beta is available on 21 devices from 13 brands, including all Pixel phones.



Easier phone calls without voice or hearing

Last year, I read a social media post from a young woman in Israel. She shared a story about the man she was in a relationship with, who is deaf and was struggling to fix the internet connection at their home. The internet service provider's tech support had no way to communicate with him via text, email or chat, even though they knew he was deaf. She wrote about how important it was for him to feel independent and empowered.

This got me thinking: How can we help people make and receive phone calls without having to speak or hear? This led to the creation of our research project, Live Relay.

Live Relay uses on-device speech recognition and text-to-speech conversion to allow the phone to listen and speak on the users’ behalf while they type. By offering instant responses and predictive writing suggestions, Smart Reply and Smart Compose help make typing fast enough to hold a synchronous phone call.
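
Live Relay itself is a research prototype and its code isn't public, but the loop it describes—transcribe the caller's speech, let the user type, speak the reply aloud—can be mocked up with off-the-shelf libraries. The sketch below uses the SpeechRecognition and pyttsx3 Python packages purely as stand-ins; unlike Live Relay, the recognize_google call here goes to a cloud service rather than running on-device, and there is no Smart Reply or Smart Compose.

```python
# Rough illustration of the Live Relay loop, not Google's implementation: transcribe
# what the caller says, let the user type a reply, and speak it back. The
# SpeechRecognition and pyttsx3 packages are stand-ins; unlike Live Relay, the
# recognize_google call below uses a cloud service instead of on-device models.
import pyttsx3
import speech_recognition as sr

recognizer = sr.Recognizer()
tts = pyttsx3.init()


def relay_turn():
    # 1. Listen to the caller (here: the microphone) and transcribe their speech.
    with sr.Microphone() as source:
        audio = recognizer.listen(source, phrase_time_limit=10)
    try:
        heard = recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        heard = "(could not understand audio)"
    print("Caller:", heard)

    # 2. The user types instead of speaking; text-to-speech answers on their behalf.
    reply = input("Your reply> ")
    tts.say(reply)
    tts.runAndWait()


if __name__ == "__main__":
    while True:
        relay_turn()
```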

Live Relay runs entirely on the device, keeping calls private. And because Live Relay interacts with the other side via a regular phone call (no data connection required), the other side can even be a landline.

Of course, Live Relay would be helpful to anyone who can't speak or hear during a call, and it may be particularly helpful to deaf and hard-of-hearing users, complementing existing solutions. In the U.S., for example, there are relay and real-time text (RTT) services available for the deaf and hard of hearing. These offer advantages in some situations, and our goal isn't to replace them. Rather, we mean to complement them, with Live Relay as an additional option for the contexts where it can help most, like handling an incoming call, or when the user prefers a fully automated system for privacy considerations.

We’re even more excited for Live Relay in the long term because we believe it can help all of our users. How many times have you gotten an important call but been unable to step out and chat? With Live Relay, you would be able to take that call anywhere, anytime with the option to type instead of talk. We are also exploring the integration of real-time translation capability, so that you could potentially call anyone in the world and communicate regardless of language barriers. This is the power of designing for accessibility first.

Live Relay is still in the research phase, but we look forward to the day it can give our users more and better ways to communicate—especially those who may be underserved by the options available today.

Follow @googleaccess for continued updates, and contact the Disability Support team (g.co/disabilitysupport) with any feedback.



How AI can improve products for people with impaired speech

Most aspects of life involve communicating with others—and being understood by those people as well. Many of us take this understanding for granted, but you can imagine the extreme difficulty and frustration you’d feel if people couldn’t easily understand the way you talk or express yourself. That’s the reality for millions of people living with speech impairments caused by neurologic conditions such as stroke, ALS, multiple sclerosis, traumatic brain injuries and Parkinson's.

To help solve this problem, the Project Euphonia team—part of our AI for Social Good program—is using AI to improve computers’ abilities to understand diverse speech patterns, such as impaired speech. We’ve partnered with the non-profit organizations ALS Therapy Development Institute (ALS TDI) and ALS Residence Initiative (ALSRI) to record the voices of people who have ALS, a neurodegenerative condition that can result in the inability to speak and move. We collaborated closely with these groups to learn about the communication needs of people with ALS, and worked toward optimizing AI-based algorithms so that mobile phones and computers can more reliably transcribe words spoken by people with these kinds of speech difficulties. To learn more about how our partnership with ALS TDI started, read this article from Senior Director of Clinical Operations Maeve McNally and ALS TDI Chief Scientific Officer Fernando Vieira.


Examples of phrases that we ask participants to read

To do this, Google software turns the recorded voice samples into a spectrogram, or a visual representation of the sound. The computer then uses common transcribed spectrograms to "train" the system to better recognize this less common type of speech. Our AI algorithms currently aim to accommodate individuals who speak English and have impairments typically associated with ALS, but we believe that our research can be applied to larger groups of people and to different speech impairments.
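
As a rough illustration of that first step (this is not the Euphonia pipeline itself), the snippet below turns a recorded phrase into a log-mel spectrogram with the open-source librosa library; the file name and parameters are placeholders.

```python
# Illustration of the first step described above (not the Euphonia pipeline itself):
# turn a recorded phrase into a log-mel spectrogram, the visual representation of
# sound that a recognizer can be trained on. File name and parameters are placeholders.
import librosa
import numpy as np

# Load a recorded phrase as 16 kHz mono audio.
waveform, sample_rate = librosa.load("recorded_phrase.wav", sr=16000, mono=True)

# Mel-scaled spectrogram on a decibel scale: rows are mel-frequency bands,
# columns are short time frames.
mel = librosa.feature.melspectrogram(y=waveform, sr=sample_rate, n_mels=80)
log_mel = librosa.power_to_db(mel, ref=np.max)

# A speech recognizer is then trained on (spectrogram, transcript) pairs.
print("Spectrogram shape (mel bands, frames):", log_mel.shape)
```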

In addition to improving speech recognition, we are also training personalized AI algorithms to detect sounds or gestures, and then take actions such as generating spoken commands to Google Home or sending text messages. This may be particularly helpful to people who are severely disabled and cannot speak.

The video below features Dimitri Kanevsky, a speech researcher at Google who learned English after he became deaf as a young child in Russia. Dimitri is using Live Transcribe with a customized model trained uniquely to recognize his voice. The video also features collaborators who have ALS like Steve Saling—diagnosed with ALS 13 years ago—who use non-speech sounds to trigger smart home devices and facial gestures to cheer during a sports game.

We’re excited to see where this can take us, and we need your help. These improvements to speech recognition are only possible if we have many speech samples to train the system. If you have slurred or hard-to-understand speech, fill out this short form to volunteer and record a set of phrases. Anyone can also donate to or volunteer with our partners, ALS TDI and the ALS Residence Initiative. The more speech samples our system hears, the more potential we have to make progress and apply these tools to better support everyone, no matter how they communicate.

Supporting people with disabilities: Be My Eyes and phone support now available

15 percent of the world’s population has some form of disability—that’s over 1 billion people. Last January, we introduced a dedicated Disability Support team available to help answer questions about assistive features and functionalities within Google products. Access to a Disability Support team—and specifically, video and phone support—was a popular request we heard from the community.

Now, people with questions on assistive technology and/or accessibility features within Google’s products can utilize the Specialized Help section on the Be My Eyes app or connect directly through phone support with a Google Disability Support specialist, Monday through Friday 8:00 a.m. until 5:00 p.m. PT, in English only.

Be My Eyes is a free app available for both iOS and Android that connects people who are blind and low-vision to nearly two million sighted volunteers in the Be My Eyes community. Through a live connection, a volunteer can assist someone with a task that requires visual assistance, such as checking expiry dates, distinguishing colors, reading instructions or navigating new surroundings. This new partnership comes from a common goal between Be My Eyes and Google to help people with disabilities live more independent lives.


Image showing two phones, one in front of the other, displaying the Google profile in the Specialized Help section of the Be My Eyes app.

Google’s Disability Support team is composed of strong advocates for inclusion who are eager to work with Googlers to continuously improve and shape Google’s products with user feedback. The team has been working on bringing Be My Eyes and phone support to the community, and looks forward to rolling out this support starting today.


The Disability Support team at work providing phone support

Visit the Google Accessibility Help Center to learn more about Google Accessibility and head to g.co/disabilitysupport for steps to use Be My Eyes and more ways to connect with a Disability Support specialist.

iOS Accessibility Scanner Framework

At Google, we are committed to accessibility and are constantly looking for ways to improve our development process to discover, debug and fix accessibility issues. Today we are excited to announce a new open source project: Accessibility Scanner for iOS (or GSCXScanner as we lovingly call it). This is a developer tool that can assist in locating and fixing accessibility issues while an app is being developed.

App development can be a time-consuming process, especially when it involves human testers. Sometimes, as in the case of accessibility testing, they are necessary. A developer can write automated tests to perform some accessibility checks, but GSCXScanner takes this one step further. When a new feature is being developed, there are often several iterations of code changes, building, launching and trying out the new feature. It is faster and easier to fix accessibility issues if they can be detected during this phase, while the developer is still working with the new feature.

GSCXScanner lives in your app process and can perform accessibility checks on the UI currently on the screen with the touch of a button. The scanner's UI, which is overlaid on the app, can be moved around, so you can use your app normally and trigger a scan only when you need it. Also, it uses GTXiLib, a library of iOS accessibility checks, to scan your app, and you can author your own GTX checks and have them run along with the scanner's default checks.

Using the scanner does not eliminate the need for manual testing or automated tests; both are must-haves for delivering quality products. But GSCXScanner can speed up the development process by surfacing issues in the app during development.

Help us improve GSCXScanner by suggesting a feature or better yet, writing one.

By Sid Janga, Central Accessibility Team

With Lookout, discover your surroundings with the help of AI

Whether it’s helping to detect cancer cells or drive our cars, artificial intelligence is playing an increasingly large role in our lives. With Lookout, our goal is to use AI to provide more independence to the nearly 253 million people in the world who are blind or visually impaired.

Now available to people with Pixel devices in the U.S. (in English only), Lookout helps those who are blind or have low vision identify information about their surroundings. It draws upon similar underlying technology as Google Lens, which lets you search and take action on the objects around you, simply by pointing your phone. Since we announced Lookout at Google I/O last year, we’ve been working on testing and improving the quality of the app’s results.

We designed Lookout to work in situations where people might typically have to ask for help—like learning about a new space for the first time, reading text and documents, and completing daily routines such as cooking, cleaning and shopping. By holding or wearing your device (we recommend hanging your Pixel phone from a lanyard around your neck or placing it in a shirt front pocket), Lookout tells you about people, text, objects and much more as you move through a space. Once you’ve opened the Lookout app, all you have to do is keep your phone pointed forward. You won’t have to tap through any further buttons within the app, so you can focus on what you're doing in the moment.


Screenshot of Lookout’s modes, including “Explore,” “Shopping,” and “Quick read,” and a second screenshot of Lookout detecting a dog in the camera frame.

As with any new technology, Lookout will not always be 100 percent perfect. Lookout detects items in the scene and takes a best guess at what they are, reporting this to you. We’re very interested in hearing your feedback and learning about times when Lookout works well (and not so well) as we continue to improve the app. Send us feedback by contacting the Disability Support team at g.co/disabilitysupport.

We hope to bring Lookout to more devices, countries and platforms soon. People with a Pixel device in the US can download Lookout on Google Play today. To learn more about how Lookout works, visit the Help Center.