Earlier this year, we partnered with developer Tania Finlayson, an expert in Morse code assistive technology, to make Morse code more accessible. Today, we’re rolling out Morse code on Gboard for iOS and improvements to Morse code on Gboard for Android. To help you learn to type in Morse code, we’ve created a game (available on Android, iOS, and desktop) that can teach you in less than an hour! We’ve worked closely with Tania on these updates to the keyboard and more—here, she explains how Morse code changed her life:
My name is Tania Finlayson, and I was born with cerebral palsy. A few doctors told my parents that I probably would not amount to anything, and suggested they put me in an institution. Luckily, my parents did not take the advice; they raised me like a normal child and did not expect any less of me throughout my childhood. I had to eat my dinner before I could have dessert, I had to go to bed at bedtime, and I got in trouble when I picked on my older brother.
The only difference was that I was not able to communicate very effectively; basically, I could only answer “yes” and “no” questions. When I was old enough to read, I used a communication word board with about 200 words on it, pointing to the words with a head stick. A couple of years later, my dad decided that I should try a typewriter and press the keys with the head stick. Amazingly, my vocabulary grew. My mom did not dress me in plaid anymore, I could tell on my brother, and I finally had the chance to annoy my dad with question after question about the world. I am quite sure that my dad did not, in any way, regret letting me try a typewriter. Ha!
Several years later, I was one of four kids chosen to participate in a study for non-verbal children at the University of Washington. The study was led by Al Ross, who wrote a grant funding the creation of a Morse code communicator for disabled children. Morse code, which is a communication system that dates back to the 1800s, allowed us to spell out words and communicate just by using two buttons: a dot “.” and a dash “—”.
The device was revolutionary. It converted my Morse code into letters, spoke them aloud in English, and even had a small printer built in. I could activate a light to “raise my hand” in class. At first I thought learning Morse code would be a waste of time, but I soon learned that it gave me total freedom with my words; for the first time, I could talk with ease, without breaking my neck. School became fun instead of exhausting. I could focus on my studies, and have real conversations with my friends for the first time. Also, I did not need an adult figure with me every moment at school, and that was awesome.
My experience with the Morse code communicator led me to a partnership with Google on bringing Morse code to Gboard. Working closely with the team, I helped design the keyboard layout, added Morse sequences to the auto-suggestion strip above the keyboard, and developed settings that allow people to customize the keyboard to their unique needs. The Morse code keyboard on Gboard allows people to use Morse code (dots and dashes) to enter text, instead of the regular (QWERTY) keyboard. Gboard for Android lets you hook external switches to the device (check out the source code my husband Ken and I developed), so a person with limited mobility could operate the device.
I’m excited to see what people will build that integrates with Morse code—whether it’s a keyboard like Gboard, a game, or an educational app, the possibilities are endless. Most technology today is designed for the mass market. Unfortunately, this can mean that people with disabilities are left behind. Developing communication tools like this is important, because for many people, it simply makes life livable. Now, if anyone wants to try Morse code, they can use the phone in their pocket. Just by downloading an app, anyone anywhere can give communicating with Morse code a try.
When I was first able to communicate as a child, the first feeling that I had was “Wow! This is pretty far out!” The first thing I typed was “You’re an old fart, Dad!” That was the first time I saw him laugh with tears in his eyes; I still don’t know if I made him really laugh or if I made him really sad! Probably a little of both.
That's why we make accessibility a core consideration when we develop new products—from concept to launch and beyond. It's good for users and good for business: Building products that don't consider a diverse range of needs could mean missing a substantial group of potential users and customers.
But impairments and disabilities are as varied as people themselves. For designers, developers, marketers or small business owners, making your products and designs more accessible might seem like a daunting task. How can you make sure you're being more inclusive? Where do you start?
Today, Global Accessibility Awareness Day, we're launching a new suite of resources to help creators, marketers, and designers answer those questions and build more inclusive products and designs.
The first step is learning about accessibility. Start by downloading the Google Primer app and searching for "accessibility." You'll find five-minute lessons that help you better understand accessibility and learn practical tips for making your own business, products and designs more accessible, like key design principles for building a more accessible website. You may even discover that addressing accessibility issues can improve the user experience for everyone. For instance, closed captions can make your videos accessible to more people, whether they have a hearing impairment or are sitting in a crowded room.
Next, visit the Google Accessibility page and discover free tools that can help you make your site or app more accessible for more people. The Android Developers site also contains a wide range of suggestions to help you improve the accessibility of your app.
We hope these resources will help you join us in designing and building for a more inclusive future. After all, an accessible web and world is a better one—both for people and for business.
"Excited to see the new lessons on accessibility that Primer launched today. They help us learn how to start making websites and products more accessible. With over 1 billion people in the world with some form of disability, building a more inclusive web is the right thing to do both for people and for business." - Ari Balogh, VP Engineering
Over one billion people—15 percent of the population—live with some kind of disability, and this number will continue to rise as people get older and live longer. At Google I/O this week, we shared a few new ways that we’re helping people with disabilities. Here’s a bit more about these new products, as well as a behind-the-scenes look at how we designed I/O to make it more accessible and enjoyable for everyone:
Lookout is a new Android app designed to help people who are blind or visually impaired gain more independence by giving auditory cues about objects, text and people around them. People simply wear a Pixel device on a lanyard around their neck, with the camera pointing away from their body, and the app shares relevant information about the things around them as they move through a space. Lookout is a big step in an effort to use technology to make the ever-changing world around us more tangible to people. It uses AI to bridge the virtual and physical worlds, making day-to-day tasks and interactions a little easier.
Morse Code on Gboard
Now, people who communicate using Morse code can do so on Gboard. To do this, we collaborated closely with Tania Finlayson, who was born with cerebral palsy and is an expert in Morse code assistive technology. Tania has been using Morse code to communicate since the 1980s, and she’s also the designer and co-developer of the TandemMaster. Her insights into the nuances of Morse code as an alternative assistive technology were invaluable throughout the design process, and by bringing Morse code to Gboard, we hope that more people might also be able to use Morse to communicate more freely. To get Morse for Gboard beta and to learn how to type Morse code, go to g.co/morse. This feature is currently available in the public beta version of Gboard, and will roll out more widely on Gboard for Android in the coming weeks.
YouTube Live Automatic Captions
In February, we announced that YouTube is bringing English automatic captions to live streams, and we have been gradually rolling them out. With our new live automatic captions, creators have a quick and inexpensive way to make live streams more accessible to more people. Using our live automatic speech recognition (LASR) technology, you’ll get captions with error rates and latency approaching industry standards.
Also at I/O, we introduced more features that developers can use to create more accessible app experiences for users with disabilities, including new accessibility testing, best practices and APIs for Android P.
Time and time again, we’ve seen the benefits of not just designing for one person or one community, but with them. By working together, we can truly make technology more available and useful to everyone.
There are over 253 million blind or visually impaired people in the world. To make the world more accessible to them, we need to build tools that can work with the ever-changing environment around us. Our new Android app Lookout, coming to the Play Store in the U.S. this year, helps people who are blind or visually impaired become more independent by giving auditory cues as they encounter objects, text and people around them.
We recommend wearing your Pixel device on a lanyard around your neck, or in your shirt pocket, with the camera pointing away from your body. Once you open the app and select a mode, Lookout processes items of importance in your environment and shares information it believes to be relevant—text from a recipe book, or the location of a bathroom, an exit sign, a chair or a person nearby. Lookout delivers spoken notifications designed to be used with minimal interaction, allowing people to stay engaged with their activity.
There are four modes to choose from within the app: Home, Work & Play, Scan or Experimental (which lets you test out features we’re working on). When you select a specific mode, Lookout delivers information relevant to that activity. If you’re getting ready to do your daily chores, you’d select “Home,” and you’ll hear notifications that tell you where the couch, table or dishwasher is. Lookout gives you an idea of where those objects are in relation to you; for example, “couch 3 o’clock” means the couch is on your right. If you select “Work & Play” when heading into the office, it may tell you when you’re next to an elevator or stairwell. As more people use the app, Lookout will use machine learning to learn what people are interested in hearing about, and will deliver those results more often.
The core experience is processed on the device, which means the app can be used without an internet connection. Accessibility will be an ongoing priority for us, and Lookout is one step in helping blind or visually impaired people gain more independence by understanding their physical surroundings.
We want our products to be accessible, and automation, with frameworks like GTXiLib, is one of the ways we scale our accessibility testing. GTXiLib can automate the process of checking for some kinds of issues, such as missing labels, hints, or low-contrast text.
GTXiLib is written in Objective-C and integrates with your existing XCTests, performing all the registered accessibility checks before each test's tearDown. When a check fails, the test fails as well, so fixing those failures leads directly to better accessibility, and your tests can catch new accessibility issues as they are introduced.
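Here is a minimal sketch of what that hookup can look like, loosely following the project's README; the exact selector names (for example, the exclude-list parameter of installOnTestSuite:) have varied across GTXiLib versions, so treat the signatures below as assumptions:

```objc
#import <XCTest/XCTest.h>
#import "GTXiLib.h"

@interface MyAppAccessibilityTests : XCTestCase
@end

@implementation MyAppAccessibilityTests

// Note: this is the class-level +setUp, not the per-test -setUp.
+ (void)setUp {
  [super setUp];
  // Install all of GTXiLib's built-in accessibility checks on every test
  // in this class; they run automatically before each test's tearDown,
  // and any failed check fails the test itself.
  [GTXiLib installOnTestSuite:[GTXTestSuite suiteWithAllTestsInClass:self]
                       checks:[GTXChecksCollection allGTXChecks]
          elementExcludeLists:@[]];
}

- (void)testMainScreenIsAccessible {
  // An ordinary functional test: launch the app and exercise its UI.
  // The accessibility checks piggyback on it with no extra test code.
}

@end
```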
Reuse your tests: GTXiLib integrates into your existing functional tests, enhancing the value of any tests that you have or any that you write.
Incremental accessibility testing: GTXiLib can be installed on a single test case, a test class or a specific subset of tests, giving you the freedom to add accessibility testing incrementally. This helped drive GTXiLib adoption in large projects at Google.
Author your own checks: GTXiLib has a simple API to create custom checks based on the specific needs of your app. For example, you can ensure every button in your app has an accessibilityHint using a custom check, as in the sketch below.
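A sketch of such a check, assuming the checkWithName:block: factory described in the GTXiLib README; the check name and the UIButton-specific logic are illustrative:

```objc
#import <UIKit/UIKit.h>
#import "GTXiLib.h"

// A custom check that fails any button whose accessibilityHint is missing
// or empty; all other elements pass.
id<GTXChecking> hintCheck =
    [GTXiLib checkWithName:@"Buttons have accessibilityHint"
                     block:^BOOL(id element, GTXErrorRefType errorOrNil) {
                       if ([element isKindOfClass:[UIButton class]]) {
                         return [[element accessibilityHint] length] > 0;
                       }
                       return YES;  // Non-buttons pass this check.
                     }];
```

The custom check can then be registered alongside the built-in ones, for example by appending it to the checks array passed when installing GTXiLib on your test suite.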
Do you also care about accessibility? Help us sharpen GTXiLib by suggesting a check or, better yet, writing one. You can add GTXiLib to your project using CocoaPods or by using its Xcode project file.
We hope you find this useful and look forward to feedback and contributions from the community! Please check out the README for more information.
By Siddartha Janga, Google Central Accessibility Team
Posted by Sourish Chaudhuri, Software Engineer, Sound Understanding
The effect of audio on our perception of the world can hardly be overstated. Its importance as a communication medium via speech is obviously the most familiar, but there is also significant information conveyed by ambient sounds. These ambient sounds create context that we instinctively respond to, like getting startled by sudden commotion, the use of music as a narrative element, or how laughter is used as an audience cue in sitcoms.
Since 2009, YouTube has provided automatic caption tracks for videos, focusing heavily on speech transcription in order to make the content it hosts more accessible. However, without similar descriptions of the ambient sounds in videos, much of the information and impact of a video is not captured by speech transcription alone. To address this, we announced the addition of sound effect information to the automatic caption track in YouTube videos, enabling greater access to the richness of all the audio content.
In this post, we discuss the backend system developed for this effort, a collaboration among the Accessibility, Sound Understanding and YouTube teams that used machine learning (ML) to enable the first ever automatic sound effect captioning system for YouTube.
Click the CC button to see the sound effect captioning system in action.
The application of ML – in this case, a Deep Neural Network (DNN) model – to the captioning task presented unique challenges. While the process of analyzing the time-domain audio signal of a video to detect various ambient sounds is similar to other well known classification problems (such as object detection in images), in a product setting the solution faces additional difficulties. In particular, given an arbitrary segment of audio, we need our models to be able to 1) detect the desired sounds, 2) temporally localize the sound in the segment and 3) effectively integrate it in the caption track, which may have parallel and independent speech recognition results.
A DNN Model for Ambient Sound
The first challenge we faced in developing the model was the task of obtaining enough labeled data suitable for training our neural network. While labeled ambient sound information is difficult to come by, we were able to generate a large enough dataset for training using weakly labeled data. But of all the ambient sounds in a given video, which ones should we train our DNN to detect?
For the initial launch of this feature, we chose [APPLAUSE], [MUSIC] and [LAUGHTER], prioritized based on our analysis of human-created caption tracks, which indicates that these are among the sounds most frequently captioned by hand. While the sound space is obviously far richer and provides even more contextually relevant information than these three classes, the semantic information conveyed by these sound effects in the caption track is relatively unambiguous, as opposed to sounds like [RING], which raises the question of “what was it that rang – a bell, an alarm, a phone?”
Much of our initial work on detecting these ambient sounds also included developing the infrastructure and analysis frameworks to enable scaling for future work, including both the detection of sound events and their integration into the automatic caption track. Investing in the development of this infrastructure has the added benefit of allowing us to easily incorporate more sound types in the future, as we expand our algorithms to understand a wider vocabulary of sounds (e.g. [RING], [KNOCK], [BARK]). In doing so, we will be able to incorporate the detected sounds into the narrative to provide more relevant information (e.g. [PIANO MUSIC], [RAUCOUS APPLAUSE]) to viewers.
Dense Detections to Captions
When a video is uploaded to YouTube, the sound effect recognition pipeline runs on the audio stream in the video. The DNN looks at short segments of audio and predicts whether that segment contains any of the sound events of interest; since multiple sound effects can co-occur, our model makes a prediction at each time step for each of the sound effects. The segment window is then slid to the right (i.e. a slightly later point in time) and the model is used to make a prediction again, and so on until it reaches the end of the audio. This results in a dense stream of predictions, giving the likelihood of each sound event in our vocabulary at 100 frames per second.
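As a concrete illustration, here is a minimal sketch of that sliding-window pass in Python; the model callable, window length and hop size are assumptions for illustration, not the production pipeline:

```python
import numpy as np

CLASSES = ["APPLAUSE", "MUSIC", "LAUGHTER"]
WINDOW_SEC = 1.0   # audio context the DNN sees per prediction (assumed)
HOP_SEC = 0.01     # 10 ms hop, i.e. 100 predictions per second

def dense_predictions(audio, sample_rate, model):
    """Slide a short window over the audio, calling the trained model once
    per step. `model` maps a 1-D audio segment to one independent
    probability per class, since sound effects can co-occur."""
    win = int(WINDOW_SEC * sample_rate)
    hop = int(HOP_SEC * sample_rate)
    probs = []
    for start in range(0, len(audio) - win + 1, hop):
        probs.append(model(audio[start:start + win]))
    return np.array(probs)  # shape: (num_frames, len(CLASSES))
```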
The dense prediction stream is not directly exposed to the user, of course, since that would result in captions flickering on and off, and because we know that a number of sound effects have some degree of temporal continuity when they occur; e.g. “music” and “applause” will usually be present for a few seconds at least. To incorporate this intuition, we smooth over the dense prediction stream using a modified Viterbi algorithm containing two states: ON and OFF, with the predicted segments for each sound effect corresponding to the ON state. The figure below provides an illustration of the process in going from the dense detections to the final segments determined to contain sound effects of interest.
(Left) The dense sequence of probabilities from our DNN for the occurrence over time of single sound category in a video. (Center) Binarized segments based on the modified Viterbi algorithm. (Right) The duration-based filter removes segments that are shorter in duration than desired for the class.
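To make the smoothing and duration-filtering steps concrete, here is a minimal sketch that uses a standard two-state Viterbi pass in place of the modified version described above; the self-transition probability and the minimum duration are illustrative assumptions:

```python
import numpy as np

def viterbi_on_off(frame_probs, stay=0.99):
    """Smooth per-frame probabilities for one sound class into a binary
    OFF(0)/ON(1) state sequence. `stay` is the assumed self-transition
    probability; values near 1 discourage rapid ON/OFF flicker."""
    log_trans = np.log(np.array([[stay, 1.0 - stay],
                                 [1.0 - stay, stay]]))
    # Emission log-likelihoods: OFF explains (1 - p), ON explains p.
    emit = np.log(np.stack([1.0 - frame_probs, frame_probs], axis=1) + 1e-12)
    n = len(frame_probs)
    back = np.zeros((n, 2), dtype=int)
    score = emit[0].copy()
    for t in range(1, n):
        cand = score[:, None] + log_trans   # cand[prev_state, cur_state]
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], (0, 1)] + emit[t]
    states = np.zeros(n, dtype=int)
    states[-1] = int(np.argmax(score))
    for t in range(n - 1, 0, -1):           # backtrace the best path
        states[t - 1] = back[t, states[t]]
    return states

def drop_short_segments(states, min_frames=100):
    """Remove ON runs shorter than min_frames (1 second at 100 fps)."""
    out = states.copy()
    start = None
    for t, s in enumerate(np.append(states, 0)):  # sentinel OFF at the end
        if s == 1 and start is None:
            start = t
        elif s != 1 and start is not None:
            if t - start < min_frames:
                out[start:t] = 0
            start = None
    return out

# Example: smooth the MUSIC column of the dense predictions from the
# earlier sketch, then drop blips shorter than one second.
# music_on = drop_short_segments(viterbi_on_off(probs[:, 1]))
```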
A classification-based system such as this one will certainly have some errors, and needs to be able to trade off false positives against missed detections as per the product goals. For example, due to the weak labels in the training dataset, the model was often confused between events that tended to co-occur: a segment labeled “laugh” would usually contain both speech and laughter, and the model for “laugh” would have a hard time distinguishing them in test data. In our system, we allow further restrictions based on time spent in the ON state (i.e. do not report sound X as detected unless it was determined to be present for at least Y seconds) to push performance toward a desired point on the precision-recall curve.
Once we were satisfied with the performance of our system in temporally localizing sound effect captions based on our offline evaluation metrics, we were faced with the following: how do we combine the sound effect and speech captions to create a single automatic caption track, and how (or when) do we present sound effect information to the user to make it most useful to them?
Adding Sound Effect Information into the Automatic Captions Track
Once we had a system capable of accurately detecting and classifying the ambient sounds in video, we investigated how to convey that information to the viewer in an effective way. In collaboration with our User Experience (UX) research teams, we explored various design options and tested them in a qualitative pilot usability study. The participants of the study had different hearing levels and varying needs for captions. To understand the effect of several variables, we asked participants whether the captions improved their overall experience and their ability to follow events in the video and extract relevant information from the caption track. The variables included:
Using separate parts of the screen for speech and sound effect captions.
Interleaving the speech and sound effect captions as they occur.
Only showing sound effect captions at the end of sentences or when there is a pause in speech (even if they occurred in the middle of speech).
How hearing users perceive captions when watching with the sound off.
While it wasn’t surprising that almost all users appreciated the added sound effect information when it was accurate, we also paid specific attention to feedback from cases where the sound detection system made an error (a false positive when determining the presence of a sound, or a failure to detect an occurrence). This yielded a surprising result: when sound effect information was incorrect, it did not detract from the participant’s experience in roughly 50% of the cases. Based upon participant feedback, the reasons for this appear to be:
Participants who could hear the audio were able to ignore the inaccuracies.
Participants who could not hear the audio interpreted the error as the presence of a sound event, and knew that they had not missed out on critical speech information.
Overall, users reported that they would be fine with the system making the occasional mistake as long as it was able to provide good information far more often than not.
Looking Forward
Our work toward enabling automatic sound effect captions for YouTube videos, and the initial rollout, is a step toward making the richness of video content more accessible to our users who experience videos in different ways and in different environments that require captions. We’ve developed a framework to enrich the automatic caption track with sound effects, but there is still much to be done. We hope this will spur further work and discussion in the community, both around improving captions using automatic techniques and around ways to make creator-generated and community-contributed caption tracks richer (perhaps starting from the auto-captions) and better, to further improve the viewing experience for our users.
I envision a future where everything will be captioned, so the more than 300 million people who are deaf or hard of hearing like me will be able to enjoy videos like everyone else. When I was growing up in Costa Rica, there were no closed captions in my first language, and only English movies had Spanish subtitles. I felt I was missing out because I often had to guess at what was happening on the screen or make up my own version of the story in my head. That was where the dream of a system that could just automatically generate high quality captions for any video was born.
Today I am lucky to be making my dream a reality as part of a team at YouTube exploring innovative ways to make captions more available for everyone. Over the years we have made great strides both in terms of the numbers of videos with captions and also in the accuracy of those captions.
Google first launched video captions back in 2006. Three years later these efforts were taken to a whole new level with automated captions on YouTube. This was a big leap forward to help us keep up with YouTube’s growing scale. Fast forward to today, and the number of videos with automatic captions now exceeds a staggering 1 billion. Moreover, people watch video with automatic captions more than 15 million times per day.
One of the ways that we were able to scale the availability of captions was by combining Google's automatic speech recognition (ASR) technology with the YouTube caption system to offer automatic captions for videos. There were limitations with the technology that underscored the need to improve the captions themselves. Results were sometimes less than perfect, prompting some creators to have a little fun at our expense!
A major goal for the team has been improving the accuracy of automatic captions, which is not easy to do for a platform of YouTube’s size and diversity of content. Key to the success of this endeavor was improving our speech recognition technology and machine learning algorithms, and expanding our training data. Together, those technological efforts have resulted in a 50 percent leap in accuracy for automatic captions in English, bringing us closer and closer to human transcription error rates.
Automatic captions example from our previous model
Continuing to improve the accuracy of captions remains an important goal going forward, as does the need to keep growing beyond 1 billion automatic captions. We also want to extend that work to all of our ten supported languages. But we can’t do it alone. We count on the amazing YouTube community of creators and viewers everywhere. Ideally, every video would have an automatic caption track generated by our system and then reviewed and edited by the creator. With the improvements we’ve made to the automated speech recognition, this is now easier than ever.
I know from firsthand experience that if you build with accessibility as a guiding force, you make technology work for everyone.
More than a billion people have a disability. And regardless of the country or community they live in, the gaps in opportunity for people with disabilities are striking: One in three people with a disability lives in poverty. In places like the United States, 50 to 70 percent of people with disabilities are unemployed; in developing countries that number increases to 80 to 90 percent. And only 10 percent of people with disabilities in developing countries have access to the assistive devices they need.
Last spring, Google.org kicked off the Google Impact Challenge: Disabilities, an open call to global nonprofits who are building transformative technologies for the billion people around the world with disabilities. We’ve been amazed by the ideas we’ve received, coming from 1,000+ organizations spanning 88 countries. We’ve shared a handful of the organizations we’re supporting already—and today we’re excited to share the full list of 30 winners.
The organizations we’re supporting all have big ideas for how technology can help create new solutions, and each of their ideas has the potential to scale. Each organization has also committed to open sourcing their technology—which helps encourage and speed up innovation in a sector that has historically been siloed. Meet some of our incredible grantees below, and learn more about all 30 organizations working to improve mobility, communication, and independence for people living with disabilities at g.co/disabilities.
The Center for Discovery, $1.125 million Google.org grant
Power wheelchairs help provide greater independence to people with mobility limitations—allowing them to get around without a caregiver, or travel longer distances. But power chairs are expensive and often not covered by insurance, leaving many people limited to manual wheelchairs.
With their Google.org grant, the Center for Discovery will continue developing an open source power add-on device, the indieGo, which quickly converts any manual wheelchair into a power chair. The power add-on will provide the mobility and freedom of a power chair for around one-seventh the average cost, and will allow people who mainly use a manual wheelchair to have the option of using power when they need it. The device design will be open sourced to increase its reach—potentially improving mobility for hundreds of thousands of people.
A young man using the indieGo to greet friends.
Perkins School for the Blind, $750,000 Google.org grant
Turn-by-turn GPS navigation allows people with visual impairments to get around, but once they get in the vicinity of their destination, they often struggle to find specific locations like bus stops or building entrances that GPS isn’t precise enough to identify. (This is often called the “last 50 feet” problem.) Lacking the detailed information they need to find specific new places, people tend to limit themselves to familiar routes, leading to a less independent lifestyle.
With the support of Google.org, Perkins School for the Blind is building tools to crowdsource data from people with sight to help people with visual impairments navigate the last 50 feet. Using an app, people will log navigation clues in a standard format, which will be used to create directions that lead vision-impaired people precisely to their intended destination. Perkins School for the Blind is collaborating with transit authorities, which will provide access to transportation data and support the Perkins mission of making public transportation accessible to everyone.
Perkins School for the Blind employee Joann Becker travels by bus. It can be hard for people with visual impairments to pinpoint the exact location of bus stops and other landmarks.
Miraclefeet, $1 million Google.org grant
An estimated 1 million children currently live with untreated clubfoot, a lifelong disability that often leads to isolation, limited access to education, and poverty. Clubfoot can be treated without surgery, but treatment practices are not widely used in many countries around the world.
Miraclefeet partners with local healthcare providers to increase access to proper treatment for children born with clubfoot. They will use the Google.org grant to support families via SMS, monitor patient progress through updated software, and provide extensive online training to local clinicians. To date, Miraclefeet has helped facilitate treatment for more than 13,000 children in 13 different countries; this effort will help them significantly scale up their work to reach thousands more.
Miraclefeet helps partners use a simple, affordable brace as part of the clubfoot treatment. Here, a doctor in India shows a mother how to use the miraclefeet brace.
Ezer Mizion and Click2Speak, $400,000 Google.org grant
People with high cognitive function but impaired motor skills often have a hard time communicating—both when speaking and when typing on standard keyboards. Augmentative and alternative communication (AAC) devices help people communicate more easily, but they are often unaffordable and restricted to specific platforms or inputs. Without an AAC device, people may have difficulty maintaining personal relationships and professional productivity.
Ezer Mizion is working with Click2Speak to build an affordable, flexible, and customizable on-screen keyboard that allows people to type without use of their hands. With the grant from Google.org, Ezer Mizion and Click2Speak will gather more user feedback to improve the technology, including support for additional languages, operating systems, and different devices like switches, joysticks, or eye-tracking devices.
A young girl learns to use the Click2Speak on-screen keyboard with a joystick controller.
From employment to education, communication to mobility, each of our grantees is pushing innovation for people with disabilities forward. In addition to these grants, we’re always working to make our own technology more accessible, and yesterday we shared some of the latest on this front, including voice typing in Google Docs and a new tool that helps Android developers build more accessible apps. With all these efforts, our aim is to create a world that works for everyone.
Posted by Brigitte Hoyer Gosselink, Google Impact Challenge: Disabilities Project Lead, Google.org
Nearly 20 percent of the U.S. population will have a disability during their lifetime, which can make it hard to access and interact with technology, and can limit the opportunity that technology brings. That’s why it’s so important to build tools that make technology accessible to everyone—from people with visual impairments who need screen readers or larger text, to people with motor restrictions that prevent them from interacting with a touch screen, to people with hearing impairments who cannot hear their device’s sounds. Here are some updates we’ve made recently to make our technology more accessible:
Tools to help develop accessible apps
Accessibility Scanner is a new tool for Android that lets developers test their own apps and receive suggestions on ways to enhance accessibility. For example, the tool might recommend enlarging small buttons, increasing the contrast between text and its background, and more.
Improvements for the visually impaired in Android N
A few weeks ago we announced a preview of Android N for developers. As part of this update we’re bringing Vision Settings—which lets people control settings like magnification, font size, display size and TalkBack—to the Welcome screen that appears when people activate new Android devices. Putting Vision Settings front and center means someone with a visual impairment can independently set up their own device and activate the features they need, right from the start.
An improved screen reader on Chromebooks
Every Chromebook comes with a built-in screen reader called ChromeVox, which enables people with visual impairments to navigate the screen using text to speech software. Our newest version, ChromeVox Next Beta, includes a simplified keyboard shortcut model, a new caption panel to display speech and Braille output, and a new set of navigation sounds. For more information, visit chromevox.com.
Edit documents with your voice
Google Docs now allows typing, editing and formatting using voice commands—for example, “copy” or “insert table”—making it easier for people who can’t use a touchscreen to edit documents. We’ve also continued to work closely with Freedom Scientific, a leading provider of assistive technology products, to improve the Google Docs and Drive experience with the JAWS screen reader.
Voice commands on Android devices
We recently launched Voice Access Beta, an app that allows people who have difficulty manipulating a touch screen due to paralysis, tremor, temporary injury or other reasons to control their Android devices by voice. For example, you can say “open Chrome” or “go home” to navigate around the phone, or interact with the screen by saying “click next” or “scroll down.” To download, follow the instructions at http://g.co/voiceaccess.
We believe in a world built for everyone, which is why we launched the global Google Impact Challenge: Disabilities earlier this year. The Impact Challenge is a Google.org initiative to invest $20 million in nonprofits who are using technology to make the world more accessible for the 1 billion people living with disabilities.
Today, as part of the program, we’re proud to celebrate the U.N. International Day of Persons with Disabilities with three new grants, totaling $2.95 million. Through our grants, the Royal London Society for Blind People will develop the Wayfindr project, helping visually impaired people navigate the London Underground; Israeli NGO Issie Shapiro will distribute Sesame, an app that allows people with mobility impairments to control a smartphone using only head movements; and, finally, German grantee Wheelmap will expand its accessibility mapping efforts worldwide. This week, many Googlers around the world will also join Wheelmap’s Map My Day campaign to help out.
We’ve also collected 11 tips that help people with disabilities get more out of their favorite Google products. (Why 11? It’s a play on “a11y”, tech-speak for “accessibility.”)
Much of the accessibility work we do is driven by passionate Googlers from around the world. To give you a look at what motivates us to make Google, and the world, more inclusive, we asked four Googlers from our Disability Alliance to share more about what they’re working on:
Kiran Kaja, Technical Program Manager, London: Being blind from birth, I’ve always been excited by devices that talk to you or allow you to talk back to them. Today, I work on Google’s Text to Speech team developing technologies that talk to people with disabilities. I’m also helping improve eyes-free voice actions on Android so that people with low vision can accomplish standard tasks just by talking to their phone. This not only helps people with disabilities, but anyone whose hands are busy with another task—like cooking, driving or caring for an infant. The advances we’re making in speech recognition and text to speech output promise a bright future for voice user interfaces.
Paul Herzlich, Legal Analytics Specialist, Mountain View: As a wheelchair user from a spinal cord injury, I'm passionate about the potential impact of technology to solve disability-related issues. Outside of my job, I'm working alongside a team of mechanical and electrical engineers, UX designers, and medical professionals to develop a new technology called SmartSeat, which I hope to bring to life in tandem with Google.org through its Google Impact Challenge: Disabilities. SmartSeat is a device that notifies wheelchair users when they have been sitting in the same position for too long by using force sensors connected to a mobile app, thereby helping these users prevent pressure sores. You can watch a video of the early prototype on YouTube.
Aubrie Lee, Associate Product Marketing Manager, Mountain View: Like many other disabled people, I’ve spent most of my life as the minority in the room. In high school, I attended a state forum on disability and felt what it was like to be in the majority. Now, I work to create that feeling for other disabled people. I started the Googler Disability Community, a group that works on changing Google’s physical environment and workplace systems to help make our company truly inclusive. Outside of my job, I enjoy exploring the beauty in disability through photography and poetry. My own disabilities and the way they influence my interactions with others provide endless inspiration for my art.
Pablo Pacca, Language Market Manager, São Paulo: I’m in charge of making sure Google’s products are translated well into Brazilian Portuguese for the 180+ million Brazilians who don’t speak English. I’m also an activist and advocate for accessibility and inclusion, both as a blogger on disability issues and the lead for the Google Brazil People with Disabilities (PwD) group. At PwD Brazil, we educate Googlers about disability issues, and work to foster a more accessible office space and inclusive work environment across the company.
Posted by Jacquelline Fuller, Director of Google.org