
An Open Source Vibrotactile Haptics Platform for On-Body Applications

Most wearable smart devices and mobile phones have the means to communicate with the user through tactile feedback, enabling applications from simple notifications to sensory substitution for accessibility. Typically, they accomplish this using vibrotactile actuators, which are small electric vibration motors. However, designing a haptic system that is well-targeted and effective for a given task requires experimentation with the number of actuators and their locations in the device, yet most practical applications require standalone on-body devices and integration into small form factors. This combination of factors can be difficult to address outside of a laboratory as integrating these systems can be quite time-consuming and often requires a high level of expertise.

A typical lab setup on the left and the VHP board on the right.

In “VHP: Vibrotactile Haptics Platform for On-body Applications”, presented at ACM UIST 2021, we develop a low-power miniature electronics board that can drive up to 12 independent channels of haptic signals with arbitrary waveforms. The VHP electronics board can be battery-powered and integrated into wearable devices and small gadgets. It allows all-day wear, has low latency and a battery life of 3 to 25 hours, and can run 12 actuators simultaneously. We show that VHP can be used in bracelet, sleeve, and phone-case form factors. The bracelet was programmed with an audio-to-tactile interface to aid lipreading and remained functional when worn for multiple months by developers. To provide the field of wearable multi-channel haptics with the tools needed for design, implementation, and experimentation, we are releasing the hardware design and software for the VHP system via GitHub.

Front and back sides of the VHP circuit board.
Block diagram of the system.

Platform Specifications
VHP consists of a custom-designed circuit board whose main components are the microcontroller and the haptic amplifier, which converts the microcontroller’s digital output into signals that drive the actuators. The haptic actuators can be controlled by signals arriving via serial, USB, and Bluetooth Low Energy (BLE), as well as by onboard microphones. We use an nRF52840 microcontroller, chosen because it offers many input and output options plus BLE, all in a small package. We added several sensors to the board to provide more experimental flexibility: an on-board digital microphone, an analog microphone amplifier, and an accelerometer. The firmware is a portable C/C++ library that works in the Arduino ecosystem.

To allow for rapid iteration during development, the interface between the board and the actuators is critical. The wiring for the 12 tactile signals has to be quick to set up, while remaining flexible and robust enough to stand up to prolonged use. For the interface, we use a 24-pin FPC (flexible printed circuit) connector on the VHP. We support interfacing to the actuators in two ways: with a custom flexible circuit board and with a rigid breakout board.

VHP board (small board on the right) connected to three different types of tactile actuators via rigid breakout board (large board on the left).

Using Haptic Actuators as Sensors
In our previous blog post, we explored how back-EMF in a haptic actuator could be used for sensing and demonstrated a variety of useful applications. Instead of using back-EMF sensing in the VHP system, we measure the electrical current that drives each vibrotactile actuator and use the current load as the sensing mechanism. Unlike back-EMF sensing, this current-sensing approach allows simultaneous sensing and actuation, while minimizing the additional space needed on the board.

One challenge with the current-sensing approach is that there is a wide variety of vibrotactile actuators, each of which may behave differently and need different presets. In addition, because different actuators can be added and removed during prototyping with the adapter board, it would be useful if the VHP were able to identify the actuator automatically. This would improve the speed of prototyping and make the system more novice-friendly.

To explore this possibility, we collected current-load data from three off-the-shelf haptic actuators and trained a simple support vector machine classifier to recognize the difference in the signal pattern between actuators. The test accuracy was 100% for classifying the three actuators, indicating that each actuator has a very distinct response.
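To make this concrete, the sketch below shows how such a classifier might be trained with scikit-learn, assuming the current-load traces from a frequency sweep have already been saved as fixed-length arrays. The file names, array shapes, and train/test split are illustrative; they are not the actual VHP data pipeline.

```python
# A minimal sketch of the actuator-identification idea, assuming current-load
# traces were captured during a frequency sweep and stored as fixed-length
# arrays (one row per sweep); file names and shapes are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_sweeps, n_samples) current-load traces; y: actuator labels (0, 1, 2).
X = np.load("current_sweeps.npy")    # hypothetical recordings
y = np.load("actuator_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# A simple linear SVM on the raw sweep is enough when signatures are distinct.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```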

Different actuators have distinct current signatures during a frequency sweep, allowing for automatic identification.

Additionally, vibrotactile actuators require proper contact with the skin for consistent control over stimulation. Thus, the device should measure skin contact and either provide an alert or self-adjust if it is not loaded correctly. To test whether a skin contact measuring technique works in practice, we measured the current load on actuators in a bracelet as it was tightened and loosened around the wrist. As the bracelet strap is tightened, the contact pressure between the skin and the actuator increases and the current required to drive the actuator signal increases commensurately.
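As a rough illustration of this idea, the snippet below compares a channel’s mean drive current against an assumed unloaded baseline to decide whether the actuator is pressed against the skin. The baseline and threshold values are made up for illustration and would need to be calibrated for a real actuator.

```python
# A simplified sketch of turning per-channel drive-current measurements into a
# skin-contact estimate; the numbers below are illustrative, not calibrated.

BASELINE_MA = 45.0       # assumed current with the actuator unloaded (free air)
CONTACT_DELTA_MA = 8.0   # assumed extra current indicating good skin contact

def contact_ok(current_samples_ma, baseline_ma=BASELINE_MA,
               delta_ma=CONTACT_DELTA_MA):
    """Return True if the mean drive current rose enough above the unloaded
    baseline to suggest the actuator is pressed against the skin."""
    mean_ma = sum(current_samples_ma) / len(current_samples_ma)
    return (mean_ma - baseline_ma) >= delta_ma

# Example: the tightened bracelet draws more current than the loose one.
loose = [45.2, 44.8, 45.5, 45.1]
tight = [55.3, 56.1, 54.8, 55.7]
print(contact_ok(loose))  # False -> alert the user or adjust the output
print(contact_ok(tight))  # True
```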

Current-load sensing responds to touch while the actuator is driven at 250 Hz.

The quality of the bracelet’s fit is measured via the actuator current load.

Audio-to-Tactile Feedback
To demonstrate the utility of the VHP platform, we used it to develop an audio-to-tactile feedback device to help with lipreading. Lipreading can be difficult for many speech sounds that look similar (visemes), such as “pin” and “min”. To help the user differentiate visemes like these, we attach a microphone to the VHP system, which picks up the speech sounds and translates the audio to vibrations on the wrist. For audio-to-tactile translation, we used our previously developed algorithms for real-time audio-to-tactile conversion, available via GitHub. Briefly, audio filters are paired with neural networks to recognize certain visemes (e.g., picking up the hard consonant “p” in “pin”), which are then translated to vibrations in different parts of the bracelet. Our approach is inspired by the tactile phonemic sleeve (TAPS); however, the major difference is that in our approach the tactile signal is presented continuously and in real time.
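As a greatly simplified illustration of the audio-to-tactile mapping, the sketch below splits audio into a few frequency bands and turns each band’s energy envelope into a per-channel vibration amplitude. The band edges and channel count are illustrative; the full system, available in the audio-to-tactile GitHub repository, additionally uses neural networks to recognize visemes.

```python
# A greatly simplified sketch of audio-to-tactile mapping: each frequency
# band's RMS envelope drives one tactile channel. Band edges are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 16000
BAND_EDGES_HZ = [(80, 500), (500, 1500), (1500, 3500), (3500, 7000)]

def band_envelopes(audio, frame=160):
    """Return per-band RMS envelopes, one row per tactile channel."""
    envelopes = []
    for lo, hi in BAND_EDGES_HZ:
        sos = butter(4, [lo, hi], btype="bandpass", fs=SAMPLE_RATE, output="sos")
        band = sosfilt(sos, audio)
        frames = band[: len(band) // frame * frame].reshape(-1, frame)
        envelopes.append(np.sqrt((frames ** 2).mean(axis=1)))
    return np.stack(envelopes)  # shape: (n_channels, n_frames)

# Example: a 1 s, 1 kHz tone mostly drives the second tactile channel.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
audio = 0.5 * np.sin(2 * np.pi * 1000 * t)
amplitudes = band_envelopes(audio)
print(amplitudes.shape, amplitudes.mean(axis=1))
```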

One of the developers, who uses lipreading in daily life, wore the bracelet daily for several months and found that it gave better information to facilitate lipreading than previous devices, improving understanding of visemes compared to lipreading alone. In the future, we plan to conduct full-scale experiments with multiple users wearing the device for an extended time.

Left: Audio-to-tactile sleeve. Middle: Audio-to-tactile bracelet. Right: One of our developers tests out the bracelets, which are worn on both arms.

Potential Applications
The VHP platform enables rapid experimentation and prototyping that can be used to develop techniques for a variety of applications. For example:

  • Rich haptics on small devices: Expanding the number of actuators on mobile phones, which typically only have one or two, could be useful to provide additional tactile information. This is especially useful as fingers are sensitive to vibrations. We demonstrated a prototype mobile phone case with eight vibrotactile actuators. This could be used to provide rich notifications and enhance effects in a mobile game or when watching a video.
  • Lab psychophysical experiments: Because VHP can be easily set up to send and receive haptic signals in real time, e.g., from a Jupyter notebook, it could be used to perform real-time haptic experiments (see the streaming sketch below).
  • Notifications and alerts: The wearable VHP could be used to provide haptic notifications from other devices, e.g., alerting if someone is at the door, and could even communicate distinguishable alerts through use of multiple actuators.
  • Sensory substitution: Besides the lipreading assistance example above, there are many other potential applications for accessibility using sensory substitution, such as visual-to-tactile sensing or even sensing magnetic fields.
  • Load sensing: The ability to sense the current load on each haptic actuator is unique to our platform, and it enables a variety of features, such as pressure sensing or automatically adjusting actuator output.
Integrating eight voice coils into a phone case. We used load sensing to understand which voice coils are being touched.
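As an example of the kind of notebook-driven experimentation mentioned above, the sketch below streams a 250 Hz vibration burst to the board over a serial port using pyserial. The byte framing and channel index are made up for illustration; the actual VHP serial protocol is documented in the project’s GitHub repository.

```python
# A minimal sketch of driving the board from Python (e.g., a Jupyter notebook)
# over a serial port. The framing (a channel byte followed by unsigned 8-bit
# samples) is invented for illustration and is not the real VHP protocol.
import numpy as np
import serial  # pip install pyserial

PORT = "/dev/ttyACM0"   # adjust for your system
FRAME_SAMPLES = 64

def sine_burst(freq_hz=250, duration_s=0.5, rate_hz=2000):
    """Generate a 0..255 sine burst suitable for a vibrotactile channel."""
    t = np.arange(int(duration_s * rate_hz)) / rate_hz
    wave = 0.5 * (1 + np.sin(2 * np.pi * freq_hz * t))   # normalized 0..1
    return (wave * 255).astype(np.uint8)

with serial.Serial(PORT, baudrate=115200, timeout=1) as link:
    samples = sine_burst()
    for start in range(0, len(samples), FRAME_SAMPLES):
        frame = samples[start:start + FRAME_SAMPLES]
        link.write(bytes([0x01]) + frame.tobytes())  # 0x01 = channel index (illustrative)
```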

What's next?
We hope that others can utilize the platform to build a diverse set of applications. If you are interested and have ideas about using our platform or want to receive updates, please fill out this form. We hope that with this platform, we can help democratize the use of haptics and inspire a more widespread use of tactile devices.

Acknowledgments
This work was done by Artem Dementyev, Pascal Getreuer, Dimitri Kanevsky, Malcolm Slaney and Richard Lyon. We thank Alex Olwal, Thad Starner, Hong Tan, Charlotte Reed and Sarah Sterman for valuable feedback and discussion on the paper, and Yuhui Zhao, Dmitrii Votintcev, Chet Gnegy, Whitney Bai and Sagar Savla for feedback on the design and engineering.

Source: Google AI Blog


A communication tool for people with speech impairments

For millions of people, being able to speak and be understood can be difficult as a result of conditions that can impact speech, including stroke, ALS, Cerebral Palsy, traumatic brain injury or Parkinson's disease. Today, we’re inviting an initial group of people to test Project Relate, a new Android app that aims to help people with speech impairments communicate more easily with others and interact with the Google Assistant.

Project Relate is a continuation of years of research from both Google’s Speech and Research teams, made possible by over a million speech samples recorded by participants of our research effort. We are now looking for English-speaking testers in Australia, Canada, New Zealand and the United States to try out the app and provide feedback to help us improve it.

As an early tester of Project Relate, you will be asked to record a set of phrases. The app will use these phrases to automatically learn how to better understand your unique speech patterns, and give you access to the app's three main features: Listen, Repeat and Assistant.

Listen: Through the Listen feature, the Relate app transcribes your speech to text in real time, so you can copy-paste text into other apps, or let people read what you want to tell them.

Repeat: You can use the Repeat feature to restate what you’ve said using a clear, synthesized voice. We hope this can be especially helpful in face-to-face conversation or even when you want to speak a command to your home assistant device.

Assistant: Speak directly to your Google Assistant from within the Relate app, so you can take care of different tasks, such as turning on the lights or playing a song, with ease.

In creating the app, we worked closely with many people with speech impairments, including Aubrie Lee, a brand manager at Google, whose speech is affected by muscular dystrophy. “I’m used to the look on people’s faces when they can’t understand what I’ve said,” Aubrie shared with us. “Project Relate can make the difference between a look of confusion and a friendly laugh of recognition.” Since Aubrie works on the marketing team that names new products, she also helped us name the app!

If you have a condition that makes your speech difficult to understand, you may be able to help provide feedback on the Project Relate Android app as a trusted tester. To express interest, please fill out our interest form at g.co/ProjectRelate, and the team will get back to you in the coming months.

With your help, we hope to build a future in which people with disabilities can more easily communicate and be understood.

16 founders with disabilities using technology for good

One billion people globally — including one in four people in the U.S. — are living with a disability, making it the largest minority group in the world. However, this diverse, vibrant and powerful community is often associated with pity and limitations. I have Cerebral Palsy, which, in my case, mainly affects my legs and motor skills. I still remember my elementary school classmate telling me his dad didn’t let him play with “weird” kids. Just last week, someone stopped me on the street asking if they could pray for me. These negative stereotypes can make entering the workforce challenging for many disabled people, who are unemployed at more than double the rate of nondisabled people.

How can we start to change these misconceptions? One word: entrepreneurship.

People with disabilities are innate problem solvers. From the moment we wake up, we have to figure out how to get dressed, how to drive, how to communicate, how to live in a world that is not built to fit our needs. In fact, people with disabilities are almost twice as likely as non-disabled individuals to start a business.

I founded 2Gether-International (2GI) to harness this entrepreneurial mindset. As the only startup accelerator run by and for entrepreneurs with disabilities, 2GI provides resources, training, opportunities and a community to help disabled founders create a pathway to funding and success. We envision a world in which disability is recognized as a source of innovation, strength and creativity.

This National Disability Employment Awareness Month, we teamed up with Google for Startups to launch our first-ever tech edition of the 2Gether-International Accelerator. This 10-week program is tailored to support early-stage tech startups around key areas of business growth, including market fit, management, sales, marketing and negotiations. The 16 selected founders work one-on-one with industry experts, accredited business coaches, and facilitators such as Bill Bellows, professor and co-director of the Entrepreneurship Incubator at American University, to leave the program with investor-ready pitches and a network of founders and Google experts.

Congratulations to the founders and startups selected for the inaugural 2Gether International tech class:

  • Adam David Jones (Philadelphia, Pennsylvania) of Zeer, a 911 enhancement that uses machine learning and connected devices to create an autonomous safety response system.
  • Arianna Mallozzi (Boston, Massachusetts) of Puffin Innovations, an assistive technology startup focused on developing solutions for people with disabilities to lead more inclusive and independent lives.
  • Beth Kume-Holland (London, U.K.) of Patchwork Hub, an accessible employment platform connecting employers to highly skilled professionals who are looking for work opportunities outside the conventional 9-to-5 office job.
  • Denis Goncharov (St. Petersburg, Russia) of NOLI Music, a smart guitar synthesizer and musical education app that facilitates distance learning and tracks progress over time.
  • Elizabeth Tikoyan (Fairfax, Virginia) of Healp, a health social network that connects patients to community and to crowdsourced health solutions.
  • Gareth Walkom (Ghent, Belgium) of WithVR, an app that uses virtual reality to prepare people with speech disorders for real-life situations.
  • Hua Wang (Alexandria, Virginia) of SmartBridge Health, which aims to democratize access to optimal cancer care to improve health outcomes for patients.
  • Kristy McCann (Philadelphia, Pennsylvania) of Go Coach, a business software platform designed to help candidates grow in their careers, unlock their potential and achieve greater happiness at work.
  • Kun Ho Kim (Seoul, South Korea) of Door Labs, a startup aiming to accelerate positive social changes in the real world by creating an inclusive virtual “metaverse” in which all identities are represented and celebrated.
  • Michael Zalle (Phoenix, Arizona) of YellowBird, an on-demand marketplace connecting environmental, health, and safety professionals with corporate needs and projects.
  • Nikolas Kelly (Rochester, New York) of Sign-Speak, an AI sign language interpreter for non-signers to easily communicate with individuals who are Deaf and hard of hearing.
  • Saida Florexil (West Palm Beach, Florida) of Imanyco, a live transcription app for people who are Deaf and hard of hearing.
  • Samantha Scott (Rockville, Maryland) of JuneBrain, a company building wearables and software monitoring solutions to detect and monitor eye and brain disease outside traditional clinical settings.
  • Sheryl Mattys (Westfield, Indiana) of Fetchadates, a social networking app for single pet lovers to connect with fellow animal lovers.
  • Toshe Ayo-Ariyo (Los Angeles, California) of UInclude, a bias mitigation tool that uses machine learning algorithms to identify and eliminate implicitly biased language in recruitment material.
  • Vanessa Gill (Los Angeles, California) of Social Cipher, a social-emotional learning platform offering games and curriculums designed to help neurodiverse youth develop learning skills and construct positive boundaries.

As 2GI looks to involve corporate partners to help us expand our offerings, it is critical we work with leaders who actually understand the impact people with disabilities have on the world. Whether it is by developing accessible products, partnering with community organizations, or hiring more people with disabilities, Google has continuously supported the disability community. I trust that Google's commitment to founders with disabilities will set a precedent for greater inclusion in the startup world.

Learn more about 2GI and Google for Startups on disability rights activist Judy Heumann’s podcast The Heumann Perspective, and stay tuned for updates from our group of founders over the next three months as they build and grow not only their companies, but also the perception of disabled founders around the world.

Check out Chromebook’s new accessibility features

With accessibility features on Chromebooks, we want everyone to have a good experience on their computer – so people can get things done, families can play together, students and teachers can learn together, and employees can work productively and efficiently, wherever they are. October is National Disability Employment Awareness Month, so we wanted to share a few recent and new Chromebook features that help people access information in a way that works for them.

New enhanced voices for Select-to-speak

People spend a lot of time reading on their laptop, doing things like reading news articles or reviewing school textbooks. Reading on a screen can be less than ideal for many, including people with dyslexia (an estimated 10-20% of the population), low vision, those learning a new language or people who have a hard time focusing on busy text.

With a few clicks, Select-to-speak on Chromebooks allows you to hear selected text on your screen spoken out loud. Earlier this year we added new features like controls to speed up, slow down or pause the reading voice, and to easily jump to different parts of text. Plus, you can choose to highlight the words being spoken while shading background text to help focus your attention.

Lines of a shopping list are outlined in a magenta square, while individual words are highlighted, indicating they are being read aloud by the Select-to-speak tool.

Today, we’re announcing new, more human-sounding voices for Select-to-speak, to help spoken text be more fluid and easier to understand. Natural voices are currently available in various accents in 25 languages, with more to come.

To develop this feature, we worked with educators who specialize in dyslexia, as well as individuals with dyslexia. They shared that hearing text read out loud enhances comprehension – especially in an educational setting. By bringing natural-sounding voices to the feature, for example a local accent you’re used to, it’s also easier to follow along with the content being read and highlighted on screen.

Try it out by enabling Select-to-speak in Chromebook settings, and picking your preferred voice. Then select the text you want read out loud and press the Everything Button or Launcher Key + S.

A screen with Select-to-speak being used on the Google Accessibility website.

I'm dyslexic and have ADHD and have trouble with reading/learning. You have no idea the amount of knowledge I've had to “let go of” because I simply can't navigate through the words and my attention just would not stick. I'm a great audio learner and have just discovered text-to-speech features. I’m so excited to use this tool!

- Chromebook user with dyslexia

Making Chromebooks more accessible

Over the past year, we’ve also made it easier to use, discover and customize Chromebook’s built-in accessibility features. This includes updates to the screen magnifier, like keyboard panning and shortcuts. We have also developed new in-product tutorials for ChromeVox, and we’ve introduced point scanning to make the selection process for switch users more efficient.

A young boy wearing glasses is lying on a bed looking at a Chromebook, with his mother next to him.

As a public middle school Reading & Dyslexia Specialist, accessibility tools are crucial to student success in education… stop, fast forward, and rewind help build metacognition and reading comprehension skills. Thank you for adapting to the accessibility needs of children.

- Sharon McMichael, Structured Literacy Dyslexia Interventionist (C.E.R.I.)

Become a certified Chromebook accessibility expert

For assistive tech trainers, educators and users with a disability who want to learn more about Chromebook’s accessibility features, this summer we launched an online training program in conjunction with The Academy for Certification of Vision Rehabilitation & Education Professionals (ACVREP). This eight-module course covers Chromebook and Google Workspace accessibility features. After completing the free course and final exam, you’ll receive a digital badge as a Chromebook Accessibility expert.

We’ll be back later this year to share more new Chromebook features.

Why we should rethink accessibility as customization

I’m a Technical Writer for Google Cloud who’s worked in this industry for more than 20 years, and technology has had a big impact on my life. It led me to a job that I love, and it keeps me connected to co-workers, friends and family scattered around the world.

But it also helps me to accomplish everyday tasks in ways many people might not realize. I have aniridia, a rare eye condition where the eyes are underdeveloped. Among other things, I’m light sensitive, have about 20/200 vision that isn’t correctable with lenses or surgery, and my eyes move around involuntarily.

Most people don’t realize the extent of my disability because I’m largely independent. The challenges I face on a regular basis are little things that most people take for granted — for example, I don’t experience eye contact, which means I often miss non-verbal cues. And for me, crossing the street is like a real world game of Frogger. Reading menus and shopping can be difficult. Navigating airports or locating my rideshare car can be stressful.

But I’ve used tech to create my own set of “life hacks.” I adjust the magnification of my view of a Google Doc during a meeting, which doesn’t change anyone else’s view of it. I zoom in on instructors during virtual dance classes. I regularly use keyboard shortcuts and predefined text snippets to work more productively. I do lots of planning before trips and save key navigational info in Google Maps. I take photos of menus and labels so I can read them more closely on my phone.

The technologies that help to mitigate the kinds of challenges I face don’t just benefit me, though. Features like Dark mode, Assistant and Live Caption make everyone’s individual experience with a product better, and they can also support people with permanent, situational, or temporary disabilities.

The positive effect of disability-friendly design on a wider population is known as the curb-cut effect. A curb cut is a ramp built into a sidewalk that slopes down to a street. Curb cuts were made primarily to provide access for wheelchairs, but they help many others, including people riding bikes, skateboards or scooters, people pushing strollers or pulling wheeled luggage, and people walking with canes or crutches.

There’s an important lesson to learn from the curb-cut effect, one that I think about when we are creating new technologies here at Google: If you are involved in designing, creating, selling, or supporting products and services, I challenge you to reframe accessibility as customization. Many people typically view accessibility as an extra feature of a product that is specifically for someone with a disability. But features like Dark mode or captions are really a way to customize your user experience, and these customizations are beneficial to everyone. We all find ourselves in different contexts where we need to adjust how we interact with our devices and the people around us. Design that provides a range of ways to interact with people and our world results in products and services that are more usable — by everyone.

Comment size increasing in Google Docs

Quick launch summary 

You can now make more efficient use of your screen space in Google Docs. Previously, comments in Docs were 35 characters wide in the sidebar, regardless of how much space was available. We’ve now increased the comment width to a maximum of 50 characters, an increase of roughly 43%. Comment width will intelligently scale based on your browser window to maximize the use of available screen space. As screen time increases in remote and hybrid work environments, this update makes more efficient use of space by fitting more content on a single line and enhancing readability.