Tag Archives: accessibility

Supporting people with disabilities: Be My Eyes and phone support now available

15 percent of the world’s population has some form of disability—that’s over 1 billion people. Last January, we introduced a dedicated Disability Support team available to help answer questions about assistive features and functionalities within Google products. Access to a Disability Support team—and specifically, video and phone support—was a popular request we heard from the community.

Now, people with questions about assistive technology or accessibility features within Google’s products can use the Specialized Help section of the Be My Eyes app, or connect directly with a Google Disability Support specialist through phone support, Monday through Friday from 8:00 a.m. to 5:00 p.m. PT, in English only.

Be My Eyes is a free app, available for both iOS and Android, that connects people who are blind or have low vision with nearly two million sighted volunteers in the Be My Eyes community. Through a live connection, a volunteer can assist someone with a task that requires visual assistance, such as checking expiry dates, distinguishing colors, reading instructions or navigating new surroundings. This partnership grew out of a goal shared by Be My Eyes and Google: helping people with disabilities live more independent lives.


Image showing two phones, one in front of the other, displaying the Google profile in the Specialized Help section of the Be My Eyes app.

Google’s Disability Support team is composed of strong advocates for inclusion who are eager to work with Googlers to continuously improve and shape Google’s products with user feedback. The team has been working on bringing Be My Eyes and phone support to the community and looks forward to rolling out this support starting today.


The Disability Support team at work providing phone support

Visit the Google Accessibility Help Center to learn more about Google Accessibility and head to g.co/disabilitysupport for steps to use Be My Eyes and more ways to connect with a Disability Support specialist.

iOS Accessibility Scanner Framework

At Google, we are committed to accessibility and are constantly looking for ways to improve our development process to discover, debug and fix accessibility issues. Today we are excited to announce a new open source project: Accessibility Scanner for iOS (or GSCXScanner as we lovingly call it). This is a developer tool that can assist in locating and fixing accessibility issues while an app is being developed.

App development can be a time-consuming process, especially when it involves human testers. Sometimes, as in the case of accessibility testing, they are necessary. A developer can write automated tests to perform some accessibility checks, but GSCXScanner takes this one step further. When a new feature is being developed, there are often several iterations of code changes, building, launching and trying out the new feature. It is faster and easier to fix accessibility issues in the feature if they can be detected during this phase, while the developer is still working on it.

GSCXScanner lives in your app process and can perform accessibility checks on the UI currently on screen with the touch of a button. The scanner’s UI, which is overlaid on the app, can be moved around, so you can use your app normally and trigger a scan only when you need it. It uses GTXiLib, a library of iOS accessibility checks, to scan your app, and you can author your own GTX checks and have them run alongside the scanner’s default checks.

Using the scanner does not eliminate the need for manual testing or automated tests; these are must-haves for delivering quality products. But GSCXScanner can speed up the development process by surfacing issues in the app during development.

Help us improve GSCXScanner by suggesting a feature or better yet, writing one.

By Sid Janga, Central Accessibility Team

With Lookout, discover your surroundings with the help of AI

Whether it’s helping to detect cancer cells or drive our cars, artificial intelligence is playing an increasingly large role in our lives. With Lookout, our goal is to use AI to provide more independence to the nearly 253 million people in the world who are blind or visually impaired.

Now available to people with Pixel devices in the U.S. (in English only), Lookout helps those who are blind or have low vision identify information about their surroundings. It draws upon similar underlying technology as Google Lens, which lets you search and take action on the objects around you, simply by pointing your phone. Since we announced Lookout at Google I/O last year, we’ve been working on testing and improving the quality of the app’s results.

We designed Lookout to work in situations where people might typically have to ask for help—like learning about a new space for the first time, reading text and documents, and completing daily routines such as cooking, cleaning and shopping. By holding or wearing your device (we recommend hanging your Pixel phone from a lanyard around your neck or placing it in a shirt front pocket), Lookout tells you about people, text, objects and much more as you move through a space. Once you’ve opened the Lookout app, all you have to do is keep your phone pointed forward. You won’t have to tap through any further buttons within the app, so you can focus on what you're doing in the moment.


Screenshot of Lookout’s modes, including “Explore,” “Shopping” and “Quick read,” alongside a second screenshot of Lookout detecting a dog in the camera frame.

As with any new technology, Lookout will not always be 100 percent perfect. Lookout detects items in the scene and takes a best guess at what they are, reporting this to you. We’re very interested in hearing your feedback and learning about times when Lookout works well (and not so well) as we continue to improve the app. Send us feedback by contacting the Disability Support team at g.co/disabilitysupport.

We hope to bring Lookout to more devices, countries and platforms soon. People with a Pixel device in the US can download Lookout on Google Play today. To learn more about how Lookout works, visit the Help Center.

Accessibility settings are now easier to access on Docs, Sheets, and Slides

Quick launch summary

It’s now easier to discover accessibility features like screen reader support, braille support, and screen magnifier support in Docs, Sheets, and Slides.

While these accessibility features were previously available, they required additional steps to access the accessibility menu. This change will make these settings more readily available by placing the Accessibility settings in the Tools menu.

Access the Accessibility menu by selecting Tools > Accessibility settings.

The accessibility settings dialog showing screen reader, braille, and screen magnifier support options.
If the screen reader option is selected from the accessibility settings dialog, an Accessibility menu will be displayed at the top of Docs, Sheets, and Slides for easy access.

Availability

Rollout details

G Suite editions: Available to all G Suite editions.

On/off by default? This feature will be ON by default.


Real-time Continuous Transcription with Live Transcribe



The World Health Organization (WHO) estimates that there are 466 million people globally who are deaf or hard of hearing. A crucial technology for empowering communication and inclusive access to the world's information for this population is automatic speech recognition (ASR), which enables computers to detect spoken language and transcribe it into text for reading. Google's ASR is behind automated captions in YouTube, presentations in Slides and phone calls. However, while ASR has seen multiple improvements in the past couple of years, people who are deaf or hard of hearing still mainly rely on manual transcription services like CART in the US, Palantypists in the UK, or STTRs in other countries. These services can be prohibitively expensive and often need to be scheduled far in advance, diminishing opportunities for the deaf and hard of hearing to participate in impromptu conversations and social occasions. We believe that technology can bridge this gap and empower this community.

Today, we're announcing Live Transcribe, a free Android service that makes real-world conversations more accessible by bringing the power of automatic captioning into everyday, conversational use. Powered by Google Cloud, Live Transcribe captions conversations in real-time, supporting over 70 languages and more than 80% of the world's population. You can launch it with a single tap from within any app, directly from the accessibility icon on the system tray.
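Live Transcribe's own code isn't shown here, but for developers curious what continuous cloud captioning looks like in practice, the sketch below uses the Cloud Speech-to-Text streaming API via its Node.js client. It is an illustration under assumptions, not the app's implementation: the micStream audio source is a placeholder you would wire to the platform's microphone.

```typescript
// Sketch: continuous captioning with Cloud Speech-to-Text streaming recognition.
// Not Live Transcribe's actual code; `micStream` is a placeholder for a
// 16 kHz LINEAR16 microphone stream provided by the host platform.
import { SpeechClient } from '@google-cloud/speech';

const client = new SpeechClient();

function startCaptioning(micStream: NodeJS.ReadableStream): void {
  const recognizeStream = client
    .streamingRecognize({
      config: {
        encoding: 'LINEAR16',
        sampleRateHertz: 16000,
        languageCode: 'en-US', // Live Transcribe supports 70+ languages.
      },
      interimResults: true,    // Emit partial hypotheses for low-latency captions.
    })
    .on('data', (response) => {
      const result = response.results[0];
      if (!result || !result.alternatives[0]) return;
      const caption = result.alternatives[0].transcript;
      // Interim results update the caption in place; final results commit it.
      console.log(result.isFinal ? `FINAL: ${caption}` : `... ${caption}`);
    })
    .on('error', (err) => console.error('Recognition error:', err));

  // Pipe raw audio from the microphone into the recognizer.
  micStream.pipe(recognizeStream);
}
```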

Building Live Transcribe
Previous ASR-based transcription systems have generally required compute-intensive models, exhaustive user research and expensive access to connectivity, all of which hinder the adoption of automated continuous transcription. To address these issues and ensure reasonably accurate real-time transcription, Live Transcribe combines the results of extensive user experience (UX) research with seamless and sustainable connectivity to speech processing servers. Furthermore, we needed to ensure that connectivity to these servers didn't cause our users excessive data usage.

Relying on cloud ASR gives us greater accuracy, but we wanted to reduce the network data consumption that Live Transcribe requires. To do this, we implemented an on-device neural network-based speech detector, built on our previous work with AudioSet. This network, an image-like model similar to our published VGGish model, detects speech and automatically manages the network connection to the cloud ASR engine, minimizing data usage over long periods of use.
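The gating idea itself is simple in outline. Below is a minimal sketch of it, where SpeechDetector and CloudAsrSession are hypothetical stand-ins for the on-device VGGish-style detector and the cloud recognizer (not Live Transcribe internals): audio is only streamed to the server while the local model believes someone is speaking.

```typescript
// Sketch of the on-device gating idea: only stream audio frames to the cloud
// ASR engine while a lightweight local detector reports speech. The detector
// and cloud client interfaces are hypothetical stand-ins.
interface SpeechDetector {
  // Returns the probability that an audio frame contains speech.
  speechProbability(frame: Float32Array): number;
}

interface CloudAsrSession {
  sendAudio(frame: Float32Array): void;
  close(): void;
}

const SPEECH_THRESHOLD = 0.6; // Trade-off: missed speech vs. data usage.
const HANGOVER_FRAMES = 30;   // Keep streaming briefly after speech stops.

function gateAudio(
  frames: Iterable<Float32Array>,
  detector: SpeechDetector,
  openSession: () => CloudAsrSession,
): void {
  let session: CloudAsrSession | null = null;
  let silentFrames = 0;

  for (const frame of frames) {
    const isSpeech = detector.speechProbability(frame) > SPEECH_THRESHOLD;
    if (isSpeech) {
      silentFrames = 0;
      if (!session) session = openSession(); // Connect only when needed.
    } else if (session && ++silentFrames > HANGOVER_FRAMES) {
      session.close();                        // Stop streaming; save data.
      session = null;
      continue;
    }
    if (session) session.sendAudio(frame);
  }
  session?.close();
}
```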

User Experience
To make Live Transcribe as intuitive as possible, we partnered with Gallaudet University to kickstart user experience research collaborations that would ensure core user needs were satisfied while maximizing the potential of our technologies. We considered several different modalities: computers, tablets, smartphones, and even small projectors, iterating on ways to display auditory information and captions. In the end, we decided to focus on the smartphone form factor because of the sheer ubiquity of these devices and their increasing capabilities.

Once this was established, we needed to address another important issue: displaying transcription confidence. Word-level and phrase-level confidence indicators have traditionally been considered helpful to the user, so our research explored whether we actually needed to show them.
Displaying the confidence level of the transcription: yellow is high confidence, green is medium and blue is low confidence; white is fresh text awaiting context before finalizing. On the left, the coloring is at a per-phrase level; on the right, at a per-word level.1

Our research found these confidence signals to be distracting to the user without providing conversational value.
Reinforcing previous UX research in this space, our findings show that a transcript is easiest to read when it is not layered with these signals. Instead, Live Transcribe focuses on better presentation of the text and on supplementing it with other auditory signals besides speech.

Another useful UX signal is the noise level of the user's current environment. Known as the cocktail party problem, understanding a speaker in a noisy room is a major challenge for computers. To address this, we built an indicator that visualizes the volume of the speaker's voice relative to background noise. This also gives users instant feedback on how well the microphone is receiving incoming speech, allowing them to adjust the placement of the phone.
The loudness and noise indicator is made of two concentric circles. The inner, brighter circle indicates the noise floor, telling a deaf user how audibly noisy the current environment is, while the outer circle shows how well the speaker's voice is being received. Together, the circles give an intuitive visual sense of the difference between the two.
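As a rough illustration of how such an indicator can be driven (not the app's actual implementation), the sketch below computes a running RMS level per audio frame and keeps a slowly adapting noise floor; the inner circle would be scaled by the floor and the outer circle by the current level.

```typescript
// Sketch: deriving the two levels behind a loudness/noise indicator.
// The inner circle tracks a slowly adapting noise floor; the outer circle
// tracks the current signal level. Illustration only.
function rms(frame: Float32Array): number {
  let sum = 0;
  for (const s of frame) sum += s * s;
  return Math.sqrt(sum / frame.length);
}

class LevelMeter {
  private noiseFloor = 0;

  // Returns [noiseFloor, currentLevel] for one audio frame.
  update(frame: Float32Array): [number, number] {
    const level = rms(frame);
    // Adapt quickly downward (quiet moments reveal the floor),
    // slowly upward (so speech doesn't drag the floor up).
    const alpha = level < this.noiseFloor ? 0.3 : 0.01;
    this.noiseFloor += alpha * (level - this.noiseFloor);
    return [this.noiseFloor, level];
  }
}
```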
Future Work
Potential future improvements in mobile-based automatic speech transcription include on-device recognition, speaker separation, and speech enhancement. Relying solely on transcription has pitfalls that can lead to miscommunication. Our research with Gallaudet University shows that combining it with other auditory signals, like speech detection and a loudness indicator, makes a tangible, meaningful difference in communication options for our users.

Live Transcribe is now available in a staged rollout on the Play Store, and is pre-installed on all Pixel 3 devices with the latest update. Live Transcribe can then be enabled via the Accessibility Settings. You can also read more about it on The Keyword.

Acknowledgements
Live Transcribe was made by researchers Chet Gnegy, Dimitri Kanevsky, and Justin S. Paul in collaboration with Android Accessibility team members Brian Kemler, Thomas Lin, Alex Huang, Jacqueline Huang, Ben Chung, Richard Chang, I-ting Huang, Jessie Lin, Ausmus Chang, Weiwei Wei, Melissa Barnhart and Bingying Xia. We'd also like to thank our close partners from Gallaudet University, Christian Vogler, Norman Williams and Paula Tucker.


1 Eagle-eyed readers can see the phrase level confidence mode in use by Dr. Obeidat in the video above.


Source: Google AI Blog


Making audio more accessible with two new apps

The World Health Organization estimates that by the year 2055, there will be 900 million people with hearing loss. We believe in the power of technology to help break down barriers and make life a little easier for everyone. Today, we’re introducing two new apps for Android designed to help deaf and hard-of-hearing people: Live Transcribe and Sound Amplifier.

Bringing captions to conversations with Live Transcribe

Dimitri Kanevsky is a research scientist at Google who has worked on speech recognition and communications technology for the last 30 years. Through his work, Dimitri—who has been deaf since early childhood—has helped shape the accessibility technologies he relies on. One of them is CART: a service where a captioner virtually joins a meeting to listen and create a transcription of spoken dialogue, which then displays on a computer screen. Dimitri’s teammate, Chet Gnegy, saw the challenges Dimitri faced using CART: he always carried multiple devices, it was costly and each meeting required a lot of preparation. This meant Dimitri could only use CART for formal business meetings or events, and not everyday conversations.

That inspired Chet to work with the Accessibility team to build a tool that could reduce Dimitri’s effort spent preparing for conversations. We thought: What if we used cloud-based automatic speech recognition to display spoken words on a screen? A prototype was built and Googlers across a bunch of our offices—from Mountain View to Taipei—got involved. The result is Live Transcribe, an app that takes real-world speech and turns it into real-time captions using just the phone’s microphone.

Live Transcribe has the potential to give people who are deaf or hard of hearing greater independence in their everyday interactions. It brought Dimitri closer to his loved ones—he’s now able to easily communicate with his six-year-old twin granddaughters without help from other family members. We’ve heard similar feedback from partners at Gallaudet University, the world’s premier university for deaf and hard of hearing people, who helped us design and validate that Live Transcribe meets the needs of their community.


Live Transcribe is available in over 70 languages and dialects. It also enables two-way conversation via a type-back keyboard for users who can’t or don’t want to speak, and connects with external microphones to improve transcription accuracy. To use Live Transcribe, enable it in Accessibility Settings, then start it from the accessibility button on the navigation bar. Starting today, Live Transcribe will gradually roll out in a limited beta to users worldwide via the Play Store, and it comes pre-installed on Pixel 3 devices. Sign up here to be notified when it’s more widely available.

Clarifying sound with Sound Amplifier

Everyone can use a little audio boost from time to time, especially in situations where there’s a lot of background noise—like at a loud cafe or airport lounge. Today, we’re launching Sound Amplifier, which we announced at Google I/O last year.

With Sound Amplifier, audio is clearer and easier to hear. You can use Sound Amplifier on your Android smartphone with wired headphones to filter, augment and amplify the sounds in your environment. It works by increasing quiet sounds while not over-boosting loud sounds. With simple sliders and toggles, you can customize sound enhancement settings and apply noise reduction to minimize distracting background noise.
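In general terms, "boost quiet sounds without over-boosting loud ones" is what dynamic range compression does. The snippet below illustrates only that general idea (it is not Sound Amplifier's actual algorithm): gain falls off above a threshold, so soft sounds are amplified more than loud ones.

```typescript
// Illustration of dynamic range compression, the general idea behind boosting
// quiet sounds without over-boosting loud ones. Not Sound Amplifier's actual
// algorithm. Works on dB levels; makeupDb lifts the overall output.
function compressedGainDb(
  inputDb: number,
  thresholdDb = -40, // Levels below this get the full makeup gain.
  ratio = 4,         // Above threshold, output rises 1 dB per `ratio` dB of input.
  makeupDb = 15,
): number {
  if (inputDb <= thresholdDb) return makeupDb;
  // Reduce gain progressively as the input gets louder than the threshold.
  const overshoot = inputDb - thresholdDb;
  return makeupDb - overshoot * (1 - 1 / ratio);
}

// Example: a quiet -60 dB sound gets the full +15 dB boost, while a loud
// -10 dB sound gets 15 - 30 * 0.75 = -7.5 dB (it is actually turned down).
console.log(compressedGainDb(-60), compressedGainDb(-10));
```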


Sound Amplifier is available on the Play Store, supports phones running Android 9 Pie or later, and comes pre-installed on Pixel 3. With both Live Transcribe and Sound Amplifier, our goal is to help the hundreds of millions of people who are deaf or hard of hearing communicate more clearly.

Source: Android


Imagining new ways to learn Morse code’s dots and dashes

We first met Emmett at Adaptive Design Association, an organization near Google’s NYC office that builds custom adaptations for children with disabilities. Communicating for him is difficult—he uses a clear plastic word board and looks at specific squares to try and get across what he wants to say. We thought we might be able to help.

At the time, we were working on a special Morse Code layout for Gboard. With its simple dot and dash encoding, Morse is a good fit for assistive tech like switch access and sip-and-puff devices. Emmett was hoping to learn Morse as a more robust form of communication, and we wanted to make a small game to help him learn the new alphabet.
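For a sense of just how compact the dot and dash encoding is, here is a small, generic text-to-Morse sketch; it is an illustration only, not the Gboard implementation.

```typescript
// A generic text-to-Morse encoder, just to show how compact the dot/dash
// encoding is. Not the Gboard implementation. Letters are separated by
// spaces and words by " / ".
const MORSE: Record<string, string> = {
  a: '.-',   b: '-...', c: '-.-.', d: '-..',  e: '.',    f: '..-.',
  g: '--.',  h: '....', i: '..',   j: '.---', k: '-.-',  l: '.-..',
  m: '--',   n: '-.',   o: '---',  p: '.--.', q: '--.-', r: '.-.',
  s: '...',  t: '-',    u: '..-',  v: '...-', w: '.--',  x: '-..-',
  y: '-.--', z: '--..',
};

function toMorse(text: string): string {
  return text
    .toLowerCase()
    .split(' ')
    .map((word) =>
      word
        .split('')
        .map((ch) => MORSE[ch] ?? '')
        .filter(Boolean)
        .join(' '),
    )
    .join(' / ');
}

console.log(toMorse('hello world')); // .... . .-.. .-.. --- / .-- --- .-. .-.. -..
```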

Our first attempt was a small connect-the-dots spelling toy that drew Emmett's favorite cartoon character and took only a few days to build. After watching Emmett get set up with his switches and start excitedly conquering pieces of the little Morse toy, we knew we wanted to do more. We partnered with Adaptive Design on a 48-hour hackathon, where independent designers and game developers worked with Emmett and four other kids to prototype games that made Morse code fun to learn.

The kids played the role of creative directors, using their imagination to set the vision for their own games. Each game reflected their interests and personalities. Hannah’s passion for music led to a game where you play notes by typing them in Morse. Matthew combined his interest in soccer and spy thrillers to make a game where you shoot soccer balls at targets by typing their corresponding Morse letters. Emmett made a maze you solve by writing different letters. Ben, who likes trains, made a game where YouTube videos are shown on a train once the correct letters are typed in Morse code. And Olivia’s love for talent shows led to a game called “Alphabet’s Got Talent.”

We’re posting the code for each independent team's games on the Experiments with Google website, where you can also find open-source examples that will help you get started with your own Morse-based apps. If you’re a developer, we hope these resources will inspire you to get involved with the community and make a difference by building your own accessibility projects.

Source: Search


Blind veterans kayak the Grand Canyon, with Street View along for the ride

Editor's Note: Lonnie Bedwell is a blind U.S. Navy veteran who led a team of four veterans to kayak 226 miles down the Grand Canyon. Today, he shares more about this feat (which was documented on Street View).


5 blind veterans kayak the Grand Canyon, documented in Street View

Let me start out by introducing myself: my name is Lonnie Bedwell and I’m from Pleasantville, Indiana, population 120. I’ve been blessed with a full and amazing life—raising three daughters as a single father, serving in the U.S. Navy, and perfecting my chicken noodle soup recipe.


I lost my vision over two decades ago, and was frustrated by how little people came to expect of a blind person. Fortunately, I benefited from a tight community who supported and challenged me to learn more and do more. Since then, I’ve pushed myself mentally and physically—from climbing and mountain biking to writing a book to dancing with my daughter at her wedding.


I believe we can’t abandon our sense of adventure because we lose our ability to see it, and it has become my goal to help people who live with similar challenges and show them that anything is possible. In 2013, I became the first blind person to kayak the entire 226 miles of the Colorado River through the Grand Canyon. But I always felt it didn’t mean anything unless I found a way to pay it forward. So I joined up with the good folks at Team River Runner, a nonprofit dedicated to providing all veterans and their families an opportunity to find health, healing, community, and purpose. Together, we set the audacious goal of supporting four other blind veterans on a trip down the Grand Canyon.


[Lonnie lines up to surf the entrance wave of Georgie Rapid]

Google was keen to support our wild idea of a journey by capturing our exploration in 360-degree Street View. Each member of this journey felt strongly that we must share our experience to inspire and invigorate others—and Street View is a far-reaching platform that can be accessed by anyone, anytime, from anyplace.


A question I often get is “How do blind people kayak the Grand Canyon?” Well, it starts with grit. And a lot of preparation. Our other visually impaired team members—Steve Baskis, Kathy Champion, Brian Harris, and Travis Fugate—practiced hundreds of rolls (flipping yourself back up if you go underwater) and ramped up on big rivers all around the country to prepare. From there, it was all about teamwork and trust. Team River Runner pioneered a system in which a guide in front makes a homing noise that the blind kayaker then follows, as you can experience for yourself in this 360 video. Just like we relied on our squadron in the military, we relied on each other out there in our kayaks. Our deployments in Afghanistan or Iraq reinforced our ability to work together and survive as a group, which came to life again on this river.


Every single person challenged and pushed themselves on a daily basis. When the earthy warmth of the desert day cooled during the starry nights, delicious cooking smells filled the air, and sounds of music and laughter replaced the roar of the whitewater. I’ll never forget how I felt our last night in the Canyon—so humbled by our team and their devotion to each other. I mean, very few people ever kayak the Grand Canyon, let alone five blind people! We reveled at how far we had come together, but most importantly, how far we could go if we set our minds to it. We may have lost our sight, but we didn’t lose our vision.

While we no longer have the ability to see, we still have the power of our senses. The transformative and healing power of exploring wild and natural landscapes like the Grand Canyon can be experienced, felt, and sensed. We are elated to be able to share it with the world in Street View. You may not be able to feel, smell, and touch it like we did, but experiencing it in 360 seems like a good way to start.

Find out what motivates the Googlers building technology for everyone

There’s a common belief that having a disability means living a life with limits.  At Google, we believe that technology can remove some of those limits and give everyone the same power to achieve their goals.

Building products that are accessible and work for everyone is important to us, and it starts with understanding the challenges that the more than one billion people around the world with a disability face every day.

As we come to the close of National Disability Awareness Month in the U.S., we’d like to introduce you to a few Googlers dedicated to removing those challenges and making technology more accessible.

Making creative tools more accessible for everyone

Before I got into the accessibility field, I worked as an art therapist, where I met people from all walks of life. No matter why they came to therapy, almost everyone I met seemed to benefit from engaging in the creative process. Art gives us the ability to point beyond spoken or written language, to unite us, delight, and satisfy. Done right, this process can be enhanced by technology—extending our ability and potential for play.

One of my first sessions as a therapist was with a middle school student on the autism spectrum. He had trouble communicating and socializing with his peers, but in our sessions together he drew, made elaborate scenes with clay, and made music.

Another key moment for me was when I met Chancey Fleet, a blind technology educator and accessibility advocate. I was learning how to program at the time, and together we built a tool to help her plan a dinner event. It was a visual and audio diagramming tool that paired with her screen reader technology. This collaboration got me excited about the potential of technology to make art and creativity more accessible, and it emphasized the importance of collaborative approaches to design.

This sentiment has carried over into the accessibility research and design work that I do at the NYU Ability Project, a research space where we explore the intersection of disability and technology. Our projects bring together engineers, designers, educators, artists and therapists within and beyond the accessibility community. Like so many technological innovations that have begun as assistive and rehabilitative tech, we hope our work will eventually benefit everyone. That’s why when Google reached out to me with an opportunity to explore ideas around creativity and accessibility, I jumped at the chance.

Together, we made Creatability, a set of experiments that explore how creative tools–drawing, music and more–can be made more accessible using web and AI technology. The project is a collaboration with creators and allies in the accessibility community, such as Jay Alan Zimmerman, a composer who is deaf; Josh Miele, a blind scientist, designer, and educator; Chancey Fleet, a blind accessibility advocate and technology educator; and Barry Farrimond and Doug Bott of Open Up Music, a group focused on empowering young disabled musicians to build inclusive youth orchestras.

Creatability keyboard

The experiments explore a diverse set of inputs, from a computer mouse and keystrokes to your body, wrist, nose, or voice. For example, you can make music by moving your face, draw using sight or sound, and experience music visually.

The key technology we used was a machine learning model called PoseNet, which can detect key body joints in images and videos. This technology lets you control the experiments with your webcam, simply by moving your body. And it’s powered by TensorFlow.js—a library that runs machine learning models on-device and in your browser, which means your images are never stored or sent to a server.
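For developers curious to try this approach, here is a minimal sketch of browser-side pose tracking with TensorFlow.js and PoseNet. It assumes a page with a video element with id "webcam" already streaming the camera; it illustrates the general technique rather than Creatability's actual code.

```typescript
// Minimal sketch of browser-side pose tracking with TensorFlow.js and PoseNet,
// the kind of approach the Creatability experiments build on. Assumes a
// <video id="webcam"> element streaming the user's camera; all inference runs locally.
import '@tensorflow/tfjs';
import * as posenet from '@tensorflow-models/posenet';

async function trackNose(): Promise<void> {
  const video = document.getElementById('webcam') as HTMLVideoElement;
  const net = await posenet.load(); // Downloads the model once, then runs on-device.

  async function onFrame(): Promise<void> {
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
    const nose = pose.keypoints.find((k) => k.part === 'nose');
    if (nose && nose.score > 0.5) {
      // Use the nose position as a cursor, e.g. to drive a drawing or an instrument.
      console.log(`nose at (${nose.position.x.toFixed(0)}, ${nose.position.y.toFixed(0)})`);
    }
    requestAnimationFrame(onFrame);
  }
  onFrame();
}

trackNose();
```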

Creating sound

We hope these experiments inspire others to unleash their inner artist regardless of ability. That’s why we’re open sourcing the code and have created helpful guides as starting points for people to create their own projects. If you create a new experiment or want to share your story of how you used the experiments, you can submit to be featured on the Creatability site at g.co/creatability.