Tag Archives: accessibility

If it has audio, now it can have captions

A decade ago, we added automatic captions to videos on YouTube, making online videos more accessible. However, they’re not always available on other types of content—like audio messages from your friends, trending videos on social media feeds or even the stuff you record yourself. It’s hard to enjoy that content if you’ve forgotten your headphones and can’t play the audio out loud. And if you’re one of the 466 million people in the world who are Deaf or hard of hearing, that content may be entirely inaccessible.


That’s why we created Live Caption, an automatic captioning system that is fast and small enough to fit on a smartphone. Live Caption is helpful whether you’re on a loud commuter train, trying not to wake a baby, or just trying to follow a conversation more closely.


With the launch of Pixel 4, Live Caption is now officially available to make digital media more accessible. With a single tap, Live Caption automatically captions videos and spoken audio on your device (except phone and video calls). It happens in real time and completely on-device, so it works even if you don’t have cell data or Wi-Fi, and the captions always stay private and never leave your phone. The captions won’t get in the way of whatever you’re looking at because you can position them anywhere on the screen. If you want to see more text, simply double tap to expand the caption box.


Live Caption wouldn’t have been possible without the Deaf and hard of hearing communities who helped guide us from the very beginning. Similar to how we designed Live Transcribe earlier this year, we developed Live Caption in collaboration with individuals from the community and partners like Gallaudet University, the world’s premier university for Deaf and hard of hearing people. An early Deaf tester, Naiajah Wilson, explained how Live Caption would impact her daily life: “Now I don't have to wake up my mom or dad and ask what's being said.”

Today, Live Caption supports English, with plans to support more languages in the near future. And while the captions may not always be perfect, since accuracy depends on the quality of the sound, we’ll continue to improve the technology over time.


In addition to Pixel 4, Live Caption will roll out to Pixel 3, Pixel 3 XL, Pixel 3a and Pixel 3a XL later this year, and we’re working closely with other Android phone manufacturers to make it more widely available in the coming year. 


You can learn more about our broader commitment to build for everyone on our new Android Accessibility site.

Source: Android


Using AI to give people who are blind the “full picture”

Everything that makes up the web—text, images, video and audio—can be easily discovered. Many people who are blind or have low vision rely on screen readers to make the content of web pages accessible through spoken feedback or braille.

For images and graphics, screen readers rely on descriptions created by developers and web authors, which are usually referred to as “alt text” or “alt attributes” in the code. However, there are millions of online images without any description, leading screen readers to say “image,” “unlabeled graphic,” or a lengthy, unhelpful reading of the image’s file name. When a page contains images without descriptions, people who are blind may not get all of the information conveyed, or even worse, it may make the site totally unusable for them. To improve that experience, we’ve built an automatic image description feature called Get Image Descriptions from Google. When a screen reader encounters an image or graphic without a description, Chrome will create one. 

Image descriptions automatically generated by a computer aren't as good as those written by a human who can include additional context, but they can be accurate and helpful. An image description might help a blind person read a restaurant menu, or better understand what their friends are posting on social media.

If someone using a screen reader chooses to opt in through Settings, an unlabeled image on Chrome is sent securely to a Google server running machine learning software. The technology aggregates data from multiple machine-learning models. Some models look for text in the image, including signs, labels, and handwritten words. Other models look for objects they've been trained to recognize—like a pencil, a tree, a person wearing a business suit, or a helicopter. The most sophisticated model can describe the main idea of an image using a complete sentence.

The description is evaluated for accuracy and usefulness: Does the annotation describe the image well? Is the description useful? Based on whether the annotation meets those criteria, the machine learning model determines what, if anything, should be shown to the person. We’ll only provide a description if we have reasonable confidence it's correct. If any of our models indicate the results may be inaccurate or misleading, we err on the side of giving a simpler answer, or nothing at all.
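As a rough sketch of that kind of gating (the model outputs, thresholds and fallback order below are assumptions for illustration, not Chrome's actual implementation), the selection step might look something like this:

```java
// Illustrative sketch only: model names, thresholds and fallback order are assumptions.
import java.util.Optional;

class ImageDescriptionSelector {

  /** One model's candidate annotation plus that model's confidence in it. */
  record Candidate(String text, double confidence) {}

  private static final double SENTENCE_THRESHOLD = 0.90; // bar for a full-sentence caption
  private static final double LABEL_THRESHOLD = 0.75;    // bar for a simple object label

  /** Decides what, if anything, the screen reader should announce for an unlabeled image. */
  Optional<String> select(Candidate sentence, Candidate objectLabel, Candidate ocrText) {
    // Richest answer first: a complete-sentence caption, only if the model is very confident.
    if (sentence != null && sentence.confidence() >= SENTENCE_THRESHOLD) {
      return Optional.of("Appears to be: " + sentence.text());
    }
    // If that bar isn't met, err on the side of a simpler answer: a plain object label.
    if (objectLabel != null && objectLabel.confidence() >= LABEL_THRESHOLD) {
      return Optional.of("Appears to be: " + objectLabel.text());
    }
    // Text found in the image (signs, labels, handwriting) can still be useful on its own.
    if (ocrText != null && !ocrText.text().isBlank()) {
      return Optional.of("Contains text: " + ocrText.text());
    }
    // If nothing is trustworthy enough, say nothing rather than risk misleading the reader.
    return Optional.empty();
  }
}
```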

Here are a couple of examples of the actual descriptions generated by Chrome when used with a screen reader.

Pineapples, bananas and coconuts

Machine-generated description for this image: "Appears to be: Fruits and vegetables at the market."

Man playing guitar on gray sofa

Machine-generated description for this image: "Appears to be: Person playing guitar on the sofa." 

Over the past few months of testing, we’ve created more than 10 million descriptions with hundreds of thousands being added every day. The feature is available in English, but we plan to add more languages soon. Image descriptions in Chrome are not meant to replace diligent and responsible web authoring; we always encourage developers and web authors to follow best practices and provide image descriptions on their sites. But we hope that this feature is a step toward making the web more accessible to everyone. 

Voice guidance in Maps, built for people with impaired vision

Think about the last time you walked to a new place. How many streets did you cross to get there? Which intersections were the most complex? How did you prepare before making a turn? And how did you know you weren’t lost?

Now think about making that same trip if you were one of the 36 million people who are blind worldwide, or one of the 217 million more people who have moderate-to-severe vision impairments.

As a legally blind woman living in Tokyo, I know that getting around unfamiliar environments can be a challenge. I can easily commute from my front door to my desk at work; it’s a trip I take regularly and know well. But going some place new and unfamiliar can be an intimidating experience without sight to guide you. In some cases, I’ll have a friend to join me on a trip, but in others I may decide not to take the journey at all.

Detailed voice guidance in Google Maps helps people with visual impairments

Starting today, World Sight Day, Google Maps is rolling out a new feature that gives people the ability to receive more detailed voice guidance and new types of verbal announcements for walking trips. This feature is the first in Google Maps to be built from the ground up by, and for, people with vision impairments. I feel fortunate to have had the opportunity to work closely with the Maps team on this project as an early advisor and tester—outside of my day job as a business analyst in the Tokyo office.

With this feature, I can navigate the streets of Tokyo with more comfort and confidence. As I take my journey, Google Maps proactively lets me know that I’m on the correct route, the distance until my next turn and the direction I’m walking in. As I approach large intersections, I get a heads-up to cross with added caution. And if I accidentally leave my route, I’ll get a spoken notification that I'm being re-routed. 

Frequent updates like these not only help a visually impaired person get from A to B, they can also give us more confidence and reassurance when we travel alone. With detailed voice guidance in Google Maps, my journey fades into the background and I can focus more on what I’ll do at my final destination. This may not sound extraordinary to those with sight, but for people who are blind or have low vision, this can help us explore new and unfamiliar places.

Googler Wakana Sugiyama talks about how detailed voice guidance in Google Maps helps everyone navigate with ease.

(Versions of this video with full audio descriptions for people with vision impairments are also available in English and Japanese.)

Building a more helpful Google Maps for everyone

I hope this new technology will give more people added confidence when navigating unfamiliar routes. After all, building for everyone is core to our work at Google.

While this new feature can be enormously helpful to people with visual impairments, it can also help someone who wants a more screen-free experience on their next walking trip. Similar to the announcements you might hear at crosswalks or on a bus, everyone can benefit from it. Not everyone will need this level of assistance, but it’s great to know it’s available and only a tap away.

Detailed voice guidance for walking navigation starts rolling out today on Android and iOS. Right now, it’s available in English in the United States and Japanese in Japan, with support for additional languages and countries on the way.

To turn the feature on, go to your Google Maps settings and select “Navigation.” At the bottom of the list you'll find the option to enable "Detailed voice guidance," beneath the “Walking options” heading.

Source: Google LatLong


How classroom tech brings accessibility with dignity

For Lisa Berghoff, Director of Instructional Technology at Highland Park High School in Highland Park, Illinois, one of her big assistive technology “aha” moments came while working with a student with autism. The student, often disruptive in class because she wanted immediate answers to questions, needed a teaching aide at her side—an accommodation that set her apart from her peers. “There’s nothing less cool than having an adult next to you in a high school class,” Berghoff says. 

Berghoff decided to open up a Google Doc on the student’s Chromebook, with the teaching aide accessing the same Doc on her own Chromebook from across the room and responding to the student’s questions in real time. “That document, with all the questions and answers captured by the student, actually became a resource for other students—it was a huge win for everyone,” Berghoff says. “That’s something we couldn't have done years ago.” 

In Berghoff’s 25 years in education, she’s seen the many changes that technology has brought to every student—but particularly those with learning challenges. In honor of Disability Awareness Month, we asked Berghoff about the impact of assistive technology and accessibility up close. Just getting started with G Suite and Chromebooks, and want to learn more about accessibility? Head to edu.google.com/accessibility.

How’d you get started in special education?

I did my undergrad degree in psychology with grand plans to be a psychologist, but when I applied to some Ph.D. programs, they told me to get some experience in the real world. My first job was working at a crisis shelter for teenage girls. Because of my work with the girls who struggled so much to learn, I took some courses in special education—and realized that was where I wanted to be.

How’d you make the switch from special education to instructional technology?

I’d spent the last several years working with high school students with an array of significant disabilities. I would try anything if I thought it could help my kids learn, so the technology office started throwing all the tech my way—everything from Chromebooks to iPads to Promethean boards—because they knew I’d give it all an honest try. 

I saw that when used with integrity, technology could really be a game changer in helping kids learn. I distinctly recall a reading lesson where I recorded myself reading and shared a YouTube link, so students could pause and replay the video at their own pace.

Timing was on my side, and when the instructional technology director position opened up at Highland Park, the thought of having a wider influence appealed to me. At the time, I was fascinated by all kinds of kids with learning challenges—not just the students with Individualized Education Programs (IEPs). No matter what challenges kids have, many often need some kind of special support and could benefit from the right technology. 

Lisa Berghoff in the classroom

So you’re seeing the value of the “accessibility for all” movement up close.

I do a lot of training in universal design, which is about making everything more accessible. When you design things for people at the edges, everyone benefits—like how ramps help people in wheelchairs, but if you’re pushing a baby stroller, you’ll benefit too. 

What’s changed in special education and EdTech over your time in the field?

It’s the attitude of the kids, and that’s because of the better tools we have. In the past we had to give struggling students big, bulky laptops with accessibility tools—and they hated them, because the laptops made the students look different than everyone else. Now laptops like Chromebooks are so ubiquitous; everyone has one. I love that students with disabilities can access the tools they need in a way that gives them dignity, and that doesn’t separate them from the rest of the class. Having a device in each student's hand has completely changed teaching and learning.

What’s the next new thing in assistive technology?

I think there’s a lot coming with augmented reality and virtual reality, especially for students with physical disabilities who don’t have access to the wider world. There’s also the possibility to use technology for global connections. We see kids who have a rare disease or disorder, and feel like they’re the only ones out there. If they can connect to other students just like them out in the world, it makes a big difference for them psychologically. 

I have a student who doesn’t speak, and hasn’t physically been to school for a long time. Even simply using Gmail helps her make friends at school—and her friends feel like they are her allies. Her lack of speech is no longer a barrier.

Action Blocks: one tap to make technology more accessible

Think about the last time you did something seemingly simple on your phone, like booking a rideshare. To do this, you had to unlock your phone, find the right app, and type in your pickup location. The process required you to read and write, remember your selections, and focus for several minutes at a time. For the 630 million people in the world with some form of cognitive disability, it’s not that easy. So we’ve been experimenting with how the Assistant and Android can work together to reduce the complexity of these tasks for people with cognitive disabilities. 

Back at I/O, we shared how Googler Lorenzo Caggioni used the Assistant to build a device called DIVA for his brother Giovanni, who is legally blind, deaf and has Down Syndrome. DIVA makes people with disabilities more autonomous, helping them interact with the Assistant in a nonverbal way. With DIVA, Giovanni can watch his favorite shows and listen to his music on his own. 

DIVA was the starting point for Action Blocks, which uses the Google Assistant to make it easier for people who have a cognitive disability to use Android phones and tablets. With Action Blocks, you add Assistant commands to your home screen with a custom image, which acts as a visual cue.


Use Action Blocks to create a home screen shortcut for a bedtime story.

The Action Block icon—for example, a photograph of a cab—triggers the corresponding Assistant command, like ordering a rideshare. Action Blocks can be configured to do anything the Assistant can do, in just one tap: call a loved one, share your location, watch your favorite show, control the lights and more.  

Action Blocks is the first of our many efforts to empower people with cognitive disabilities, help them gain independence, connect with loved ones and engage in the world as they are. 

The product is still in the testing phase, and if you’re the caregiver or family member of someone with a cognitive disability who could benefit, please join our trusted tester program. Follow us @googleaccess to learn more.

Accessibility for the digital world and beyond

When I joined Google’s central Accessibility team in 2013, our mission was to make our products work better for everyone. That mission hasn’t changed, but our ambition has. We’ve always worked to make it easier for people with accessibility needs to navigate the digital world, whether it’s watching a YouTube video or reading a website. Today, we also want to help people navigate the physical world.

The start of National Disability Employment Awareness Month provides a moment to reflect on the journey we’re on and what lies ahead as we deliver on our commitment to create technology that has a positive impact. A clear example of how we’re thinking through our approach can be seen in two apps: Live Transcribe and Lookout.


Tale of two apps

The first version of Live Transcribe was built to take real-world speech and turn it into real-time captions using just the microphone on a phone. This app helps bridge the connection between people who are deaf and those who can hear. 

A few months ago we went a step further to provide a visual indicator of sounds, like a dog barking nearby, someone knocking on the door or a speeding vehicle whizzing past. This is important for people who may not be able to hear non-speech audio cues, giving them more context to help understand what is going on around them.

The shift to the physical world, however, presents challenges that are not easy to control for. After all, we're trying to provide people with context for environments that aren’t easily understood or readily documented. This is the ambition behind Lookout, which aims to help the more than 250 million blind or visually impaired people in the world deal with the ever-changing environment we live in. The app gives auditory cues as people encounter objects, text and others around them. These spoken notifications are designed to be used with minimal interaction and provide useful information in any given environment, like if you’re standing near an elevator or what a nearby sign says.


The power of working together 

As assistive technologies, both of these apps were built with an “accessibility-first” design mentality coupled with advances in technologies like AI. But to ensure these products meaningfully impact the lives of the more than 1 billion people in the world with a disability, we also had to collaborate with people and communities directly affected by disabilities.

Working together, we’re able to get real-time feedback that helps improve a product or feature and make sure we are on the right track. For Live Transcribe we worked closely with Gallaudet University, a world-renowned university for deaf and hard-of-hearing students. They helped us design, test and validate that Live Transcribe met the needs of their community.

Similarly, with Lookout we relied on our Trusted Tester Program. Artist and teacher Maya Scott—along with other testers—used early prototypes to make sure it was truly beneficial for people who are blind or have low vision. 


Future focus

Next on our roadmap is building technology that benefits people with cognitive disabilities—an umbrella term for conditions that affect a person’s ability to process information, use their memory, make decisions or learn. These conditions can range from mild to profound, and the number of people who have cognitive disabilities is on the rise because as we age, our cognitive functions age with us.

We’re working to understand the needs of this community so that we can build the right products. An early signpost of the direction we’re headed can be seen with Project Diva. Lorenzo Caggioni, a Googler in our Milan office, created this project to make the Assistant more accessible for his brother Giovanni, who is legally blind and deaf and has Down and West syndromes. Lorenzo has since joined the Accessibility team so he can scale his work to help others like Giovanni.

Like most accessibility advancements, these technologies will also benefit people without a disability—all the more reason that we should never assume that accessibility is someone else's problem. In the end, we’re all in this together. 

If you give a student a Chromebook

We created Chromebooks to help people, students included, achieve anything. These shareable, versatile devices connect people to the internet, to each other and to quality apps and extensions. Give a student a Chromebook and you give them endless access to information and resources. By learning to find answers to their questions, collaborate with others and work independently and effectively, students build digital skills that will help them succeed throughout school and for the rest of their lives. 

So, give a student a Chromebook and they will… 

Find answers and solve problems

Chromebook apps can help students navigate the online world with confidence while improving digital literacy and comprehension skills. These apps have recently been updated for back to school: 

  • Epic!, the world’s largest digital reading platform for kids, has a massive library of books, audiobooks, videos and quizzes to help children develop a love of reading and learning. Teachers can now log in with Google single sign-on, add students with Google Classroom and download student reports into Google Sheets.

  • CK-12 offers a free, personalized learning platform spanning K-12 math, science and more. Their customizable FlexBook® Courses foster interactivity and continuous feedback, and now include new reports showing class level insights for Google Classroom assignments. 

  • DOGO media teaches literacy, reading fluency and global awareness through current events, books and movies. They’ve also launched Spanish-language resources that integrate with Google Classroom. 

TIP: Head to the Chromebook App Hub, where you can find educator and admin preferred apps, hear from app developers directly for up-to-date information, and get real classroom inspiration from teachers. Educators interested in apps on the App Hub should connect with their IT admins who can evaluate purchasing options. 

Learn alongside peers 

Thanks to built-in accessibility features and an array of assistive apps, students with learning differences can develop new strategies. Check out these apps with recently updated features and new integrations: 

  • Capti Voice is a reading support tool. Its new Classroom integration allows teachers to accommodate different learning needs and make tests accessible to more students. 

  • Texthelp offers assistive technology for reading, writing and language learning. With a new WriQ Classroom integration, educators can view dashboards with writing metrics by class and monitor student progress.

  • Don Johnston’s curriculum, learning and evaluation tools are designed to support all types of learning styles and abilities. For tools that integrate with G Suite/Classroom and support dyslexia and dysgraphia, check out the Snap&Read and Co:Writer extensions.

  • ViewSonic’s myViewBoard is an interactive, cloud-based whiteboard teachers can use to engage students. And it now integrates with Classroom and Drive.

  • BeeLine's reading tool is a Chrome extension that improves reading fluency and reading comprehension by displaying text using a color gradient that draws the reader’s eyes from the end of one line to the beginning of the next.

TIP: Once settings on a Chromebook are customized for a student, they’re applied every time they log in on any managed Chrome OS device. Bookmark this handy guide about Google’s accessibility tools for the classroom. 

Connect and collaborate in new ways

Virtual communication and collaboration are skills that students will use throughout their lives. With Chromebooks, they can cement these skills as they collaborate with peers in apps and sites, or in built-in tools like Docs, Sheets and Slides. Here are a few recently updated apps that teachers can use to engage students while fostering communication and collaboration:

  • Remind, a communication app designed to connect parents, guardians, educators and others who matter to student success, has integrated connected accounts in Classroom and Drive. 

  • Kami, a PDF and document annotation app that fosters collaboration, now integrates with the Classroom grading page. Kami assignments are categorized to support Classroom’s topics.

  • Nearpod, a platform for creating engaging lessons or using existing ones, now lets you embed and edit activities directly within Google Slides.

TIP: Different devices work for different types of students. A rugged laptop, for example, can work well for young students. Touchscreen tablets with stylus compatibility and cameras in the front and back, on the other hand, work for students conducting science experiments or creating artistic masterpieces. With different options, you can customize the outside as much as you customize the inside. 

Schools pick Chromebooks because they are versatile, affordable and easy to manage. When you give an admin a fleet of Chromebooks with the Chrome Education Upgrade, they can easily and securely deploy and manage any number of devices from one cloud-based console. And they no longer need to worry about updating devices. Chromebooks update automatically and have multi-layered security, so—like students—they continue to improve over time. Read more about why admins love Chromebooks, and explore Chromebooks built for education and a range of apps that transform them into learning devices.

Improving real-time collaboration in Google Docs for assistive technology users

Quick launch summary 

In Google Docs, we believe collaboration works best when it works for everyone. It’s now easier for users of assistive technologies, like screen readers and braille displays, to keep track of real-time updates made by collaborators in a document. With live edits, you can view a periodically updated summary of collaborator changes in a convenient sidebar.

New edits made by collaborators appear in the live edits sidebar.


To see live edits, go to Tools > Accessibility settings and check “Turn on screen reader support.” Then select “Show live edits” from the Accessibility menu. To learn more, see this article in our Help Center.


Availability


G Suite editions
  • Available to all G Suite editions.

On/off by default? 
  • This feature will be available by default and can be enabled by users in Google Docs’ accessibility settings.



Bringing Live Transcribe’s Speech Engine to Everyone

Earlier this year, Google launched Live Transcribe, an Android application that provides real-time automated captions for people who are deaf or hard of hearing. Through many months of user testing, we've learned that robustly delivering good captions for long-form conversations isn't so easy, and we want to make it easier for developers to build upon what we've learned. Live Transcribe's speech recognition is provided by Google's state-of-the-art Cloud Speech API, which under most conditions delivers pretty impressive transcript accuracy. However, relying on the cloud introduces several complications—most notably robustness to ever-changing network connections, data costs, and latency. Today, we are sharing our transcription engine with the world so that developers everywhere can build applications with robust transcription.

Those who have worked with our Cloud Speech API know that sending infinitely long streams of audio is currently unsupported. To help solve this challenge, we take measures to close and restart streaming requests prior to hitting the timeout, restarting the session during long periods of silence and closing whenever a pause in the speech is detected; restarting mid-utterance would otherwise truncate a sentence or word. In between sessions, we buffer audio locally and send it upon reconnection. This reduces the amount of text lost mid-conversation, whether due to restarting speech requests or switching between wireless networks.
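A minimal sketch of that restart-and-buffer approach might look like the following; the StreamingRecognizer interface, its method names and the time limits are stand-ins for illustration, not the actual Live Transcribe or Cloud Speech client API.

```java
// Hypothetical sketch of the restart-and-buffer session management described above.
import java.util.ArrayDeque;
import java.util.Deque;

class CaptionSessionManager {
  private static final long SOFT_LIMIT_MS = 4 * 60 * 1000;  // prefer to restart after ~4 minutes
  private static final long HARD_LIMIT_MS = 5 * 60 * 1000;  // never run into the server-side timeout
  private static final long PAUSE_MS = 2 * 1000;            // a detected pause in the speech

  private final StreamingRecognizer recognizer;
  private final Deque<byte[]> pendingAudio = new ArrayDeque<>(); // audio buffered between sessions
  private long sessionStartMs;
  private long lastSpeechMs;
  private boolean sessionOpen;

  CaptionSessionManager(StreamingRecognizer recognizer) {
    this.recognizer = recognizer;
  }

  /** Called for every chunk of microphone audio. */
  void onAudioChunk(byte[] chunk, boolean containsSpeech, long nowMs) {
    if (containsSpeech) {
      lastSpeechMs = nowMs;
    }

    // Restart during a pause or silence so no word or sentence is cut in half,
    // and always restart before the stream reaches the server-side timeout.
    boolean inPause = nowMs - lastSpeechMs > PAUSE_MS;
    boolean pastSoftLimit = nowMs - sessionStartMs > SOFT_LIMIT_MS;
    boolean pastHardLimit = nowMs - sessionStartMs > HARD_LIMIT_MS;
    if (sessionOpen && ((pastSoftLimit && inPause) || pastHardLimit)) {
      recognizer.closeStream();
      sessionOpen = false;
    }

    if (sessionOpen) {
      recognizer.sendAudio(chunk);
      return;
    }

    // No stream is open (we are restarting, or the network dropped): buffer locally,
    // then flush everything once a new session can be established.
    pendingAudio.addLast(chunk);
    if (recognizer.tryOpenStream()) {
      sessionOpen = true;
      sessionStartMs = nowMs;
      while (!pendingAudio.isEmpty()) {
        recognizer.sendAudio(pendingAudio.removeFirst());
      }
    }
  }

  /** Minimal interface for whatever streaming recognition client sits underneath. */
  interface StreamingRecognizer {
    boolean tryOpenStream();   // returns false if the network is currently unavailable
    void sendAudio(byte[] chunk);
    void closeStream();
  }
}
```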



Endlessly streaming audio comes with its own challenges. In many countries, network data is quite expensive and in spots with poor internet, bandwidth may be limited. After much experimentation with audio codecs (in particular, we evaluated the FLAC, AMR-WB, and Opus codecs), we were able to achieve a 10x reduction in data usage without compromising accuracy. FLAC, a lossless codec, preserves accuracy completely, but doesn't save much data. It also has noticeable codec latency. AMR-WB, on the other hand, saves a lot of data, but delivers much worse accuracy in noisy environments. Opus was a clear winner, allowing data rates many times lower than most music streaming services while still preserving the important details of the audio signal—even in noisy environments. Beyond relying on codecs to keep data usage to a minimum, we also support using speech detection to close the network connection during extended periods of silence. That means if you accidentally leave your phone on and running Live Transcribe when nobody is around, it stops using your data.
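For a rough sense of scale, here is a back-of-the-envelope comparison of raw microphone audio against a speech-oriented Opus stream; both bitrates are illustrative assumptions rather than Live Transcribe's published figures.

```java
// Back-of-the-envelope data-usage comparison; both bitrates are illustrative assumptions.
class AudioDataUsage {
  // kilobits per second -> megabytes per hour
  static double megabytesPerHour(double kbps) {
    return kbps * 1000 / 8 * 3600 / 1_000_000;
  }

  public static void main(String[] args) {
    double pcmKbps = 16_000 * 16 / 1000.0;  // 16 kHz, 16-bit mono PCM: 256 kbps
    double opusKbps = 24;                    // a typical speech-oriented Opus bitrate

    System.out.printf("Raw PCM: %.0f MB/hour%n", megabytesPerHour(pcmKbps));   // ~115 MB/hour
    System.out.printf("Opus:    %.0f MB/hour%n", megabytesPerHour(opusKbps));  // ~11 MB/hour
    System.out.printf("Reduction: about %.0fx%n", pcmKbps / opusKbps);         // ~11x
  }
}
```

Numbers in that ballpark are consistent with the roughly 10x reduction described above, and with data rates well below those of typical music streaming services.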

Finally, we know that if you are relying on captions, you want them immediately, so we've worked hard to keep latency to a minimum. Though most of the credit for speed goes to the Cloud Speech API, Live Transcribe's final trick lies in our custom Opus encoder. At the cost of only a minor increase in bitrate, we see latency that is visually indistinguishable from sending uncompressed audio.

Today, we are excited to make all of this available to developers everywhere. We hope you'll join us in trying to build a world that is more accessible for everyone.

By Chet Gnegy, Alex Huang, and Ausmus Chang from the Live Transcribe Team