Author Archives: Eve Andersson

New ways we’re making speech recognition work for everyone

Voice-activated technologies, like Google Home or the Google Assistant, can help people do things like make a phone call to someone, adjust the lighting in their house, or play a favorite song — all with the sound of their voice. But these technologies may not work as well for the millions of people around the world who have non-standard speech. In 2019 we launched our research initiative Project Euphonia with the aim of finding ways to leverage AI to make speech recognition technology more accessible.

Today, we’re expanding this commitment to accessibility through our involvement in the Speech Accessibility Project, a collaboration between researchers at the University of Illinois Urbana-Champaign and five technology companies, including Google. The university is working with advocacy groups, like Team Gleason and the Davis Phinney Foundation, to create datasets of impaired speech that can help accelerate improvements to automated speech recognition (ASR) for the communities these organizations support.

Since the launch of Project Euphonia, we’ve had the opportunity to work with community organizations to compile a collection of speech samples from over 2,000 people. This collection of utterances has allowed Project Euphonia researchers to adapt standard speech recognition systems to understand non-standard speech more accurately, and ultimately reduce median word error rates by an average of more than 80%. These promising results created the foundation for Project Relate, an Android app that allows people to submit samples of their voice and receive a personalized speech recognition model that more accurately understands their speech. It also encouraged the expansion of Project Euphonia to include additional languages like French, Japanese, and Spanish.
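
For readers unfamiliar with the metric, word error rate (WER) is the standard measure behind results like these: the number of word substitutions, deletions and insertions needed to turn a system's transcript into the reference transcript, divided by the number of words in the reference. The sketch below is a minimal illustration of that calculation, not Project Euphonia's actual evaluation code.

```python
# Minimal word error rate (WER) sketch -- an illustration of the metric
# behind results like the one above, not Project Euphonia's code.

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a standard word-level edit-distance DP."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# An 80% reduction means, for example, a WER of 0.50 dropping to 0.10.
print(wer("turn on the kitchen lights", "turn on kitchen light"))  # 0.4
```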

There’s still a lot to be done to develop ASR systems that can understand everyone’s voice — regardless of speech pattern. However, it’s clear that larger, more diverse datasets and collaboration with the communities we want to reach will help get us to where we want to go. That is why we’re making it easy for Project Euphonia participants to share copies of their recordings with the Speech Accessibility Project. Our hope is that by making these datasets available to research and development teams, we can help improve communication systems for everyone, including people with disabilities.

How we build with and for people with disabilities

Editor’s note: Today is Global Accessibility Awareness Day. We’re also sharing how we’re making education more accessible and launching a new Android accessibility feature.

Over the past nine years, my job has focused on building accessible products and supporting Googlers with disabilities. Along the way, I’ve been constantly reminded of how vast and diverse the disability community is, and how important it is to continue working alongside this community to build technology and solutions that are truly helpful.

Before delving into some of the accessibility features our teams have been building, I want to share how we’re working to be more inclusive of people with disabilities to create more accessible tools overall.

Nothing about us, without us

In the disability community, people often say “nothing about us without us.” It’s a sentiment that I find sums up what disability inclusion means. The types of barriers that people with disabilities face in society vary depending on who they are, where they live and what resources they have access to. No one’s experience is universal. That’s why it’s essential to include a wide array of people with disabilities at every stage of the development process for any of our accessibility products, initiatives or programs.

We need to work to make sure our teams at Google are reflective of the people we’re building for. To do so, last year we launched our hiring site geared toward people with disabilities — including our Autism Career Program to further grow and strengthen our autistic community. Most recently, we helped launch the Neurodiversity Career Connector along with other companies to create a job portal that connects neurodiverse candidates to companies that are committed to hiring more inclusively.

Beyond our internal communities, we must also partner with communities outside of Google so we can learn what is truly useful to different groups and apply that understanding to improving current products or creating new ones. Those partnerships have resulted in the creation of Project Relate, a communication tool for people with speech impairments; the development of a completely new TalkBack, Android’s built-in screen reader; and the improvement of Select-to-Speak, a Chromebook tool that lets you hear selected text on your screen spoken out loud.

Equitable experiences for everyone

Engaging with and listening to these communities — inside and outside of Google — makes it possible to create tools and features like the ones we’re sharing today.

The ability to add alt-text, which is a short description of an image that is read aloud by screen readers, directly to images sent through Gmail starts rolling out today. With this update, people who use screen readers will know what’s being sent to them, whether it’s a GIF celebrating the end of the week or a screenshot of an important graph.

Communication tools that are inclusive of everyone are especially important as teams have shifted to fully virtual or hybrid meetings. Again, everyone experiences these changes differently. We’ve heard from some people who are deaf or hard of hearing that this shift has made it easier to identify who is speaking — something that is often more difficult in person. But in the case of people who use ASL, we’ve heard that it can be difficult to be in a virtual meeting and simultaneously see their interpreter and the person speaking to them.

Multi-pin, a new feature in Google Meet, helps solve this. Now you can pin multiple video tiles at once, such as the presenter’s screen and the interpreter’s screen. And like many accessibility features, the usefulness extends beyond people with disabilities. The next time someone is watching a panel and wants to pin multiple people to the screen, this feature makes that possible.

We've also been working to make video content more accessible to those who are blind or low-vision through audio descriptions that describe verbally what is on the screen visually. All of our English language YouTube Originals content from the past year — and moving forward — will now have English audio descriptions available globally. To turn on the audio description track, at the bottom right of the video player, click on “Settings”, select “Audio track”, and choose “English descriptive”.

For many people with speech impairments, being understood by the technology that powers tools like voice typing or virtual assistants can be difficult. In 2019, we started work to change that through Project Euphonia, a research initiative that works with community organizations and people with speech impairments to create more inclusive speech recognition models. Today, we’re expanding Project Euphonia’s research to include four more languages: French, Hindi, Japanese and Spanish. With this expansion, we can create even more helpful technology for more people — no matter where they are or what language they speak.

I’ve learned so much in my time working in this space, and chief among those lessons is the absolute importance of building right alongside the very people who will ultimately use these tools. We’ll continue to do that as we work to create a more inclusive and accessible world, both physically and digitally.

Doing more to design for and with people with disabilities

In 2013, I joined Google’s Central Accessibility Team. Since then, I've continuously worked to include people with disabilities across all of the work that we do at Google. October is National Disability Employment Awareness Month, a time to celebrate and recognize the contributions of people with disabilities. Today, we’re sharing a few ways we’re continuing to support hiring people with disabilities and how we design products for and with the one billion people in the world with disabilities.

Building a helpful workplace with new career resources 

In the United States, only 19 percent of people with disabilities are employed—leaving employers with a largely untapped talent pool. We need to do more to encourage the employment of people with disabilities, and we want to support that change at Google. Through the years, we’ve evaluated and iterated on our own processes to help improve disability inclusion and awareness in the workplace. Doing so has helped us build a more diverse team of people with different backgrounds and experiences that is more representative of the people using our products. 


Visit the dedicated Google Careers resource page for people with disabilities to gain access to helpful resources.

We know that one of the first steps to finding a job at a new company is visiting its Careers website, but those resources may not be designed with people with disabilities in mind. This is why we've launched a dedicated Google Careers resource page that is specifically tailored toward what a job seeker with a disability might find helpful. Prospective candidates can find career resources and tips for applying, as well as read stories about Googlers with disabilities and our employee-led Google Disability Alliance community. The page also highlights the work we're doing to create products with and for people with disabilities.

Action Blocks makes communication more accessible

Earlier this year we launched Action Blocks, an Android app designed with people with cognitive disabilities in mind that allows you to create customizable home screen buttons to navigate your device. Today, we are beginning to roll out an update to Action Blocks that will help make communication more accessible for people who are non-verbal.  

People who are non-verbal often use augmentative and alternative communication (AAC) to communicate with those around them. These updates make Action Blocks feel more familiar to anyone who already uses an AAC device to communicate.

You can now create Action Blocks that speak common phrases. 

You can now use a quick setup process to create Action Blocks that speak common phrases. For instance, you can set up your blocks to say, “yes,” “no,” or “Excuse me, I have something to say.” 
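
Conceptually, a speaking block simply pairs a button label with a text-to-speech call. The sketch below illustrates that pattern in Python using the pyttsx3 offline text-to-speech package; it is an illustration of the idea, not the Action Blocks implementation.

```python
# Sketch of the idea behind "speaking blocks": map each button label to a
# phrase and speak it on demand. Not the Action Blocks implementation;
# assumes the pyttsx3 offline text-to-speech package is installed.
import pyttsx3

SPEAKING_BLOCKS = {
    "yes": "Yes.",
    "no": "No.",
    "attention": "Excuse me, I have something to say.",
}

engine = pyttsx3.init()

def press_block(name: str) -> None:
    """Speak the phrase assigned to the tapped block."""
    engine.say(SPEAKING_BLOCKS[name])
    engine.runAndWait()  # block until speech finishes

press_block("attention")
```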


Use Tobii Dynavox’s library of Picture Communication Symbols to customize your Action Blocks.

Action Blocks also comes loaded with thousands of Picture Communication Symbols from Tobii Dynavox’s library, which are commonly used on AAC devices to communicate efficiently through pictures and symbols assigned to blocks. Having a similar set of icons available makes Action Blocks’ AAC features more familiar to people who use Tobii Dynavox technology.


Action Blocks works on Android phones without any additional hardware, making communication more convenient and accessible to people whether they’re on the go without their dedicated AAC device or don’t have access to one at all. And it’s now available to more people, with expanded language options including French, Italian, German, Spanish and Japanese.


In addition to AAC functionality, if you prefer to use physical adaptive switches—which can make it easier to navigate assistive technology—you can now assign a switch to an Action Block. This way, you can tap a physical button to easily trigger a Google Assistant action on your phone—such as making a call, watching a video or controlling a home device like a thermostat. To learn more about using Action Blocks, visit the help center. 


We believe designing for and with people with disabilities means building better products all around. Today’s announcements are a few steps forward in the journey to make the world a more inclusive place for people with disabilities. 



Building accessible products for everyone

Over one billion people—15 percent of the population—live with some kind of disability, and this number will continue to rise as people get older and live longer. At Google I/O this week, we shared a few new ways that we’re helping people with disabilities. Here’s a bit more about these new products, as well as a behind-the-scenes look at how we designed I/O to make it more accessible and enjoyable for everyone:

Accessibility at Google I/O: Working to Make Events More Inclusive

Lookout

Lookout is a new Android app designed to help people who are blind or visually impaired gain more independence by giving auditory cues about objects, text and people around them. People simply wear a Pixel device on a lanyard around their neck, with the camera pointing away from their body, and the app shares relevant information about the things around them as they move through a space. Lookout is a big step in an effort to use technology to make the ever-changing and evolving world around us more tangible to people. It uses AI technology to bridge the virtual world with the physical world, making day-to-day tasks and interactions a little easier.
Announcing the Lookout app

Morse Code on Gboard

Now, people who communicate using Morse code can do so on Gboard. To do this, we collaborated closely with Tania Finlayson, who was born with cerebral palsy and is an expert in Morse code assistive technology. Tania has been using Morse code to communicate since the 1980s, and she’s also the designer and co-developer of the TandemMaster. Her insights into the nuances of Morse code as an alternative assistive technology were invaluable throughout the design process, and by bringing Morse code to Gboard, we hope that more people might also be able to use Morse to communicate more freely. To get Morse for Gboard beta and to learn how to type Morse code, go to g.co/morse. This feature is currently available in the public beta version of Gboard, and will roll out more widely on Gboard for Android in the coming weeks.
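
To give a sense of the mapping the keyboard works with, here is a minimal sketch that encodes text as International Morse code. It is purely illustrative and is not Gboard's implementation.

```python
# Illustrative International Morse encoder -- not Gboard's implementation.
MORSE = {
    "a": ".-",   "b": "-...", "c": "-.-.", "d": "-..",  "e": ".",
    "f": "..-.", "g": "--.",  "h": "....", "i": "..",   "j": ".---",
    "k": "-.-",  "l": ".-..", "m": "--",   "n": "-.",   "o": "---",
    "p": ".--.", "q": "--.-", "r": ".-.",  "s": "...",  "t": "-",
    "u": "..-",  "v": "...-", "w": ".--",  "x": "-..-", "y": "-.--",
    "z": "--..",
}

def to_morse(text: str) -> str:
    # Letters within a word are separated by spaces, words by " / ".
    words = text.lower().split()
    return " / ".join(" ".join(MORSE[ch] for ch in word if ch in MORSE)
                      for word in words)

print(to_morse("hello world"))
# .... . .-.. .-.. --- / .-- --- .-. .-.. -..
```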

Tania’s Story: Morse code meets machine learning

YouTube Live Automatic Captions

In February, we announced that YouTube is bringing English automatic captions to live streams, and we’ve been gradually rolling them out since. With live automatic captions, creators have a quick and inexpensive way to make live streams more accessible to more people. Our live automatic speech recognition (LASR) technology delivers captions with error rates and latency approaching industry standards.

Also at I/O, we introduced more features that developers can use to create more accessible app experiences for users with disabilities, including new accessibility testing tools, best practices and APIs for Android P.


Time and time again, we’ve seen the benefits of not just designing for one person or one community, but with them. By working together, we can truly make technology more available and useful to everyone.


Building more accessible technology

Nearly 20 percent of the U.S. population will have a disability during their lifetime, which can make it hard for them to access and interact with technology, and limit the opportunity that technology can bring. That’s why it’s so important to build tools to make technology accessible to everyone—from people with visual impairments who need screen readers or larger text, to people with motor restrictions that prevent them from interacting with a touch screen, to people with hearing impairments who cannot hear their device’s sounds. Here are some updates we’ve made recently to make our technology more accessible:

A tool to help develop accessible apps

Accessibility Scanner is a new tool for Android that lets developers test their own apps and receive suggestions on ways to enhance accessibility. For example, the tool might recommend enlarging small buttons, increasing the contrast between text and its background, and making other improvements.
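
Contrast suggestions like these are typically grounded in the WCAG contrast-ratio formula, which compares the relative luminance of the foreground and background colors. The sketch below shows that calculation; it illustrates the formula, not Accessibility Scanner's actual code.

```python
# Minimal WCAG contrast-ratio check, the kind of calculation behind
# text-contrast suggestions. Illustrative only, not Accessibility
# Scanner's implementation.

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance per WCAG 2.x, from 8-bit sRGB components."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05); ranges from 1:1 to 21:1."""
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA asks for at least 4.5:1 for normal-size text.
print(contrast_ratio((119, 119, 119), (255, 255, 255)))  # ~4.5
```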


Improvements for the visually impaired in Android N

A few weeks ago we announced a preview of Android N for developers. As part of this update we’re bringing Vision Settings—which lets people control settings like magnification, font size, display size and TalkBack—to the Welcome screen that appears when people activate new Android devices. Putting Vision Settings front and center means someone with a visual impairment can independently set up their own device and activate the features they need, right from the start.


An improved screen reader on Chromebooks

Every Chromebook comes with a built-in screen reader called ChromeVox, which enables people with visual impairments to navigate the screen using text-to-speech software. Our newest version, ChromeVox Next Beta, includes a simplified keyboard shortcut model, a new caption panel to display speech and Braille output, and a new set of navigation sounds. For more information, visit chromevox.com.

Edit documents with your voice

Google Docs now allows typing, editing and formatting using voice commands—for example, “copy” or “insert table”—making it easier for people who can’t use a touchscreen to edit documents. We’ve also continued to work closely with Freedom Scientific, a leading provider of assistive technology products, to improve the Google Docs and Drive experience with the JAWS screen reader.
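
Conceptually, voice editing maps a recognized transcript onto an editing operation, falling back to typing the words when nothing matches. The toy dispatcher below illustrates that pattern with hypothetical handlers and a dictionary-based document; it is not how Docs implements voice commands.

```python
# Toy voice-command dispatcher illustrating the pattern: a recognized
# transcript is matched to an editing operation. Purely illustrative
# (hypothetical handlers), not how Docs implements voice editing.

def copy_selection(doc):
    doc["clipboard"] = doc["selection"]

def insert_table(doc):
    doc["body"].append("[2x2 table]")

COMMANDS = {
    "copy": copy_selection,
    "insert table": insert_table,
}

def dispatch(transcript: str, doc: dict) -> None:
    action = COMMANDS.get(transcript.strip().lower())
    if action:
        action(doc)
    else:
        doc["body"].append(transcript)  # no command matched: type the words

doc = {"body": [], "selection": "hello", "clipboard": None}
dispatch("insert table", doc)
dispatch("copy", doc)
print(doc)
```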


Voice commands on Android devices

We recently launched Voice Access Beta, an app that allows people who have difficulty manipulating a touch screen due to paralysis, tremor, temporary injury or other reasons to control their Android devices by voice. For example, you can say “open Chrome” or “go home” to navigate around the phone, or interact with the screen by saying “click next” or “scroll down.” To download, follow the instructions at http://g.co/voiceaccess.

To learn more about Google accessibility as a whole, visit google.com/accessibility.
