Meet the new Nest Hub

Introducing the second-generation Nest Hub! Since we launched Google’s first smart display two years ago, it’s brought help to thousands of homes and we’ve been dedicated to exploring ways to make our devices even more helpful. 

The Nest Hub you love, but better 
The new Nest Hub’s speaker is based on the same audio technology as Nest Audio and has 50 percent more bass than the original Hub, delivering a bigger, richer sound that fills any room with music, podcasts or audiobooks from services like YouTube Music and Spotify. You can also enjoy your favourite TV shows and movies with a subscription from providers like Netflix, Disney+ and Stan. With Quick Gestures, you can pause or play content at any time by tapping the air in front of your display. 
The new Nest Hub shows all your compatible connected devices in one place so you can control them with one tap. And with a built-in Thread radio, Nest Hub will work with the new connectivity standard being created by the Project Connected Home over IP working group, making it even simpler to control your connected home. 

Nest Hub is also full of help for your busy family. See your calendar, set timers and create reminders, and share chores and to-dos with Family Notes, digital sticky notes that help everyone stay on track. 


New sleep features for better rest 
The Nest Hub has always helped you tackle the day; now, it can help you rest well at night. Many of us don’t get enough sleep, and sleep is becoming the number one health and wellness concern for adults. 
As people have started to recognise the need for better sleep, sleep trackers have become a popular solution. But we wanted to offer an alternative for people who may not want to wear something to bed to understand their sleep. 
We dug into the data, and because we knew people felt comfortable with Nest Hub at their bedsides thanks to its camera-free design, we went to work. The result is Sleep Sensing, an opt-in feature that helps you understand and improve your sleep, available as a free preview until next year. 
Sleep Sensing is completely optional with privacy safeguards in place so you’re in control: You choose if you want to enable it and there's a visual indicator on the display to let you know when it’s on. Motion Sense only detects motion, not specific bodies or faces, and your coughing and snoring audio data is only processed on the device — it isn’t sent to Google servers. You have multiple controls to disable Sleep Sensing features, including a hardware switch that physically disables the microphone. You can review or delete your sleep data at any time, and consistent with our privacy commitments, it isn't used for personalised ads. 
Even if you choose not to enable Sleep Sensing, you can still fall asleep and wake up easier with Nest Hub. The display dims to make your bedroom more sleep-friendly, and the “Your evening” page helps you wind down at night with relaxing sounds. When it’s time to wake up, Nest Hub’s Sunrise Alarm gradually brightens the display and increases the alarm volume. If you need a few more ZZZs, use Motion Sense to wave your hand and snooze the alarm. 


Sustainable design that matches any room 
The new Nest Hub will be available to Australians in two colours, to complement most rooms in the house: Chalk and Charcoal. It features an edgeless glass display that’s easy to clean and makes your Nest Hub an even more beautiful digital photo frame. And continuing our commitment to sustainability, Nest Hub is designed with recycled materials; its plastic mechanical parts contain 54 percent recycled post-consumer plastic. 

The second-generation Nest Hub is $149. It can be preordered online in Australia at the Google Store and other retailers from today.

Contactless Sleep Sensing in Nest Hub

People often turn to technology to manage their health and wellbeing, whether it is to record their daily exercise, measure their heart rate, or increasingly, to understand their sleep patterns. Sleep is foundational to a person’s everyday wellbeing and can be impacted by (and in turn, have an impact on) other aspects of one’s life — mood, energy, diet, productivity, and more.

As part of our ongoing efforts to support people’s health and happiness, today we announced Sleep Sensing in the new Nest Hub, which uses radar-based sleep tracking in addition to an algorithm for cough and snore detection. While not intended for medical purposes1, Sleep Sensing is an opt-in feature that can help users better understand their nighttime wellness using a contactless bedside setup. Here we describe the technologies behind Sleep Sensing and discuss how we leverage on-device signal processing to enable sleep monitoring (comparable to other clinical- and consumer-grade devices) in a way that protects user privacy.

Soli for Sleep Tracking
Sleep Sensing in Nest Hub demonstrates the first wellness application of Soli, a miniature radar sensor that can be used for gesture sensing at various scales, from a finger tap to movements of a person’s body. In Pixel 4, Soli powers Motion Sense, enabling touchless interactions with the phone to skip songs, snooze alarms, and silence phone calls. We extended this technology and developed an embedded Soli-based algorithm that could be implemented in Nest Hub for sleep tracking.

Soli consists of a millimeter-wave frequency-modulated continuous wave (FMCW) radar transceiver that emits an ultra-low power radio wave and measures the reflected signal from the scene of interest. The frequency spectrum of the reflected signal contains an aggregate representation of the distance and velocity of objects within the scene. This signal can be processed to isolate a specified range of interest, such as a user’s sleeping area, and to detect and characterize a wide range of motions within this region, ranging from large body movements to sub-centimeter respiration.
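To make this concrete, here is a minimal numerical sketch of the standard FMCW processing chain. Soli's actual parameters and pipeline are not public, so the bandwidth, chirp duration and sample rate below are illustrative assumptions; the point is only that a reflector's distance maps to the frequency of the beat signal, which an FFT recovers.

```python
# Minimal FMCW sketch (illustrative, not Soli's pipeline): a target at a
# fixed distance produces a beat tone whose frequency is proportional to
# range; an FFT over one chirp recovers it.
import numpy as np

c = 3e8                      # speed of light (m/s)
bandwidth = 4e9              # chirp sweep bandwidth (Hz), assumed
chirp_time = 1e-4            # chirp duration (s), assumed
fs = 2e6                     # ADC sample rate (Hz), assumed
n_samples = int(fs * chirp_time)
slope = bandwidth / chirp_time

# A reflector at 0.8 m (e.g., a sleeper's chest) yields this beat tone.
target_range = 0.8
beat_freq = 2 * slope * target_range / c
t = np.arange(n_samples) / fs
beat_signal = np.cos(2 * np.pi * beat_freq * t)

# Range FFT: each frequency bin corresponds to a fixed distance slice.
spectrum = np.abs(np.fft.rfft(beat_signal * np.hanning(n_samples)))
range_axis = np.fft.rfftfreq(n_samples, 1 / fs) * c / (2 * slope)
print(f"estimated range: {range_axis[spectrum.argmax()]:.2f} m")
```

Repeating this over successive chirps and taking a second FFT across chirps in each range bin yields velocity (a range-Doppler map); sub-centimeter respiration shows up as small phase changes in the occupied range bin.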

Soli spectrogram illustrating its ability to detect a wide range of motions, characterized as (a) an empty room (no variation in the reflected signal demonstrated by the black space), (b) large pose changes, (c) brief limb movements, and (d) sub-centimeter chest and torso displacements from respiration while at rest.

In order to make use of this signal for Sleep Sensing, it was necessary to design an algorithm that could determine whether a person is present in the specified sleeping area and, if so, whether the person is asleep or awake. We designed a custom machine-learning (ML) model to efficiently process a continuous stream of 3D radar tensors (summarizing activity over a range of distances, frequencies, and time) and automatically classify each feature into one of three possible states: absent, awake, and asleep.
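The post does not describe the model architecture, so the following is only a shape-level sketch, with hypothetical tensor dimensions, of what a small classifier over such 3D radar tensors could look like: the time axis is treated as channels and a softmax covers the three states.

```python
# Illustrative sketch only: a tiny convolutional classifier over 3D radar
# tensors (range x frequency x time). All shapes are assumptions.
import tensorflow as tf

NUM_RANGE_BINS, NUM_FREQ_BINS, NUM_FRAMES = 16, 32, 64  # assumed

model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_RANGE_BINS, NUM_FREQ_BINS, NUM_FRAMES)),
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    # Three mutually exclusive states: absent, awake, asleep.
    tf.keras.layers.Dense(3, activation="softmax"),
])
probs = model(tf.random.normal([1, NUM_RANGE_BINS, NUM_FREQ_BINS, NUM_FRAMES]))
print(dict(zip(["absent", "awake", "asleep"], probs.numpy()[0])))
```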

To train and evaluate the model, we recorded more than a million hours of radar data from thousands of individuals, along with thousands of sleep diaries, reference sensor recordings, and external annotations. We then leveraged the TensorFlow Extended framework to construct a training pipeline to process this data and produce an efficient TensorFlow Lite embedded model. In addition, we created an automatic calibration algorithm that runs during setup to configure the part of the scene on which the classifier will focus. This ensures that the algorithm ignores motion from a person on the other side of the bed or from other areas of the room, such as ceiling fans and swaying curtains.
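The last step of such a pipeline, exporting a trained Keras model as a compact TensorFlow Lite flatbuffer for an embedded target, can be sketched as follows; the quantization choice here is an assumption, not a statement about the production model.

```python
# Hedged sketch of a TFLite export step. `model` is any trained
# tf.keras.Model (e.g., the toy classifier sketched above).
import tensorflow as tf

def export_tflite(model: tf.keras.Model, path: str = "sleep_model.tflite"):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    # Post-training quantization shrinks the model and speeds up
    # on-device inference at a small accuracy cost.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    with open(path, "wb") as f:
        f.write(converter.convert())
```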

The custom ML model efficiently processes a continuous stream of 3D radar tensors (summarizing activity over a range of distances, frequencies, and time) to automatically compute probabilities for the likelihood of user presence and wakefulness (awake or asleep).

To validate the accuracy of the algorithm, we compared it to the gold-standard of sleep-wake determination, the polysomnogram sleep study, in a cohort of 33 “healthy sleepers” (those without significant sleep issues, like sleep apnea or insomnia) across a broad age range (19-78 years of age). Sleep studies are typically conducted in clinical and research laboratories in order to collect various body signals (brain waves, muscle activity, respiratory and heart rate measurements, body movement and position, and snoring), which can then be interpreted by trained sleep experts to determine stages of sleep and identify relevant events. To account for variability in how different scorers apply the American Academy of Sleep Medicine’s staging and scoring rules, our study used two board-certified sleep technologists to independently annotate each night of sleep and establish a definitive groundtruth.

We compared our Sleep Sensing algorithm’s outputs to the corresponding groundtruth sleep and wake labels for every 30-second epoch of time to compute standard performance metrics (e.g., sensitivity and specificity). While not a true head-to-head comparison, this study’s results can be compared against previously published studies in similar cohorts with comparable methodologies in order to get a rough estimate of performance. In “Sleep-wake detection with a contactless, bedside radar sleep sensing system”, we share the full details of these validation results, demonstrating sleep-wake estimation equivalent to or, in some cases, better than current clinical and consumer sleep tracking devices.
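The epoch-level comparison reduces to a simple confusion-matrix computation. As a sketch, treating "asleep" as the positive class over per-epoch boolean labels:

```python
# Epoch-level sensitivity/specificity, with "asleep" as the positive
# class, computed over one boolean label per 30-second epoch.
import numpy as np

def sleep_wake_metrics(pred_asleep: np.ndarray, true_asleep: np.ndarray):
    tp = np.sum(pred_asleep & true_asleep)    # sleep correctly detected
    tn = np.sum(~pred_asleep & ~true_asleep)  # wake correctly detected
    fn = np.sum(~pred_asleep & true_asleep)   # sleep missed
    fp = np.sum(pred_asleep & ~true_asleep)   # wake mislabeled as sleep
    return tp / (tp + fn), tn / (tn + fp)     # sensitivity, specificity

# Example with synthetic labels for one night (960 epochs = 8 hours):
rng = np.random.default_rng(0)
truth = rng.random(960) < 0.85                # mostly asleep
pred = truth ^ (rng.random(960) < 0.1)        # 10% disagreement
sens, spec = sleep_wake_metrics(pred, truth)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```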

Aggregate performance from previously published accuracies for detection of sleep (sensitivity) and wake (specificity) of a variety of sleep trackers against polysomnography in a variety of different studies, accounting for 3,990 nights in total. While this is not a head-to-head comparison, the performance of Sleep Sensing on Nest Hub in a population of healthy sleepers who simultaneously underwent polysomnography is added to the figure for rough comparison. The size of each circle is a reflection of the number of nights and the inset illustrates the mean±standard deviation for the performance metrics.

Understanding Sleep Quality with Audio Sensing
The Soli-based sleep tracking algorithm described above gives users a convenient and reliable way to see how much sleep they are getting and when sleep disruptions occur. However, to understand and improve their sleep, users also need to understand why their sleep is disrupted. To assist with this, Nest Hub uses its array of sensors to track common sleep disturbances, such as light level changes or uncomfortable room temperature. In addition to these, respiratory events like coughing and snoring are also frequent sources of disturbance, but people are often unaware of these events.

As with other audio-processing applications like speech or music recognition, coughing and snoring exhibit distinctive temporal patterns in the audio frequency spectrum, and with sufficient data an ML model can be trained to reliably recognize these patterns while simultaneously ignoring a wide variety of background noises, from a humming fan to passing cars. The model uses entirely on-device audio processing with privacy-preserving analysis, with no raw audio data sent to Google’s servers. A user can then opt to save the outputs of the processing (sound occurrences, such as the number of coughs and snore minutes) in Google Fit, in order to view personal insights and summaries of their nighttime wellness over time. 

The Nest Hub displays when snoring and coughing may have disturbed a user’s sleep (top) and can track weekly trends (bottom).

To train the model, we assembled a large, hand-labeled dataset, drawing examples from the publicly available AudioSet research dataset as well as hundreds of thousands of additional real-world audio clips contributed by thousands of individuals.

Log-Mel spectrogram inputs comparing cough (left) and snore (right) audio snippets.

When a user opts in to cough and snore tracking on their bedside Nest Hub, the device first uses its Soli-based sleep algorithms to detect when a user goes to bed. Once it detects that a user has fallen asleep, it then activates its on-device sound sensing model and begins processing audio. The model works by continuously extracting spectrogram-like features from the audio input and feeding them through a convolutional neural network classifier in order to estimate the probability that coughing or snoring is happening at a given instant in time. These estimates are analyzed over the course of the night to produce a report of the overall cough count and snoring duration and highlight exactly when these events occurred.
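As a rough sketch of that flow, with assumed frame sizes, mel parameters, and a stand-in architecture (the production model is not described in the post), the feature extraction and per-instant scoring could look like:

```python
# Illustrative cough/snore scoring sketch: log-mel features fed to a
# small CNN with independent sigmoid outputs (cough and snore can each
# be present or absent). All parameters below are assumptions.
import tensorflow as tf

SAMPLE_RATE = 16000
FRAME_LEN, HOP = 400, 160            # 25 ms windows, 10 ms hop (assumed)
NUM_MEL_BINS, CONTEXT_FRAMES = 64, 96

def log_mel(audio: tf.Tensor) -> tf.Tensor:
    stft = tf.signal.stft(audio, FRAME_LEN, HOP, fft_length=512)
    mel_matrix = tf.signal.linear_to_mel_weight_matrix(
        NUM_MEL_BINS, stft.shape[-1], SAMPLE_RATE)
    mel = tf.matmul(tf.abs(stft) ** 2, mel_matrix)
    return tf.math.log(mel + 1e-6)

classifier = tf.keras.Sequential([
    tf.keras.Input(shape=(CONTEXT_FRAMES, NUM_MEL_BINS, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="sigmoid"),  # P(cough), P(snore)
])

audio = tf.random.normal([SAMPLE_RATE])              # 1 s of stand-in audio
feats = log_mel(audio)[:CONTEXT_FRAMES]              # (frames, mel bins)
p = classifier(feats[tf.newaxis, ..., tf.newaxis])[0].numpy()
print(f"P(cough)={p[0]:.2f}  P(snore)={p[1]:.2f}")
```

Aggregating these per-instant probabilities over the night, for example by thresholding them and counting contiguous events, yields the overall cough count and snore duration shown in the report.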

Conclusion
The new Nest Hub, with its underlying Sleep Sensing features, is a first step in empowering users to understand their nighttime wellness using privacy-preserving radar and audio signals. We continue to research additional ways that ambient sensing and the predictive ability of consumer devices could help people better understand their daily health and wellness in a privacy-preserving way.

Acknowledgements
This work involved collaborative efforts from a multidisciplinary team of software engineers, researchers, clinicians, and cross-functional contributors. Special thanks to D. Shin for his significant contributions to this technology and blogpost, and Dr. Logan Schneider, visiting sleep neurologist affiliated with the Stanford/VA Alzheimer’s Center and Stanford Sleep Center, whose clinical expertise and contributions were invaluable to continuously guide this research. In addition to the authors, key contributors to this research from Google Health include Jeffrey Yu, Allen Jiang, Arno Charton, Jake Garrison, Navreet Gill, Sinan Hersek, Yijie Hong, Jonathan Hsu, Andi Janti, Ajay Kannan, Mukil Kesavan, Linda Lei, Kunal Okhandiar, Xiaojun Ping, Jo Schaeffer, Neil Smith, Siddhant Swaroop, Bhavana Koka, Anupam Pathak, Dr. Jim Taylor, and the extended team. Another special thanks to Ken Mixter for his support and contributions to the development and integration of this technology into Nest Hub. Thanks to Mark Malhotra and Shwetak Patel for their ongoing leadership, as well as the Nest, Fit, Soli, and Assistant teams we collaborated with to build and validate Sleep Sensing on Nest Hub.


1 Not intended to diagnose, cure, mitigate, prevent or treat any disease or condition. 

Source: Google AI Blog


Haptics with Input: Using Linear Resonant Actuators for Sensing

As wearables and handheld devices decrease in size, haptics become an increasingly vital channel for feedback, be it through silent alerts or a subtle "click" sensation when pressing buttons on a touch screen. Haptic feedback, ubiquitous in nearly all wearable devices and mobile phones, is commonly enabled by a linear resonant actuator (LRA), a small linear motor that leverages resonance to provide a strong haptic signal in a small package. However, the touch and pressure sensing needed to activate the haptic feedback tend to depend on additional, separate hardware which increases the price, size and complexity of the system.

In “Haptics with Input: Back-EMF in Linear Resonant Actuators to Enable Touch, Pressure and Environmental Awareness”, presented at ACM UIST 2020, we demonstrate that widely available LRAs can sense a wide range of external information, such as touch, tap and pressure, in addition to being able to relay information about contact with the skin, objects and surfaces. We achieve this with off-the-shelf LRAs by multiplexing the actuation with short pulses of custom waveforms that are designed to enable sensing using the back-EMF voltage. We demonstrate the potential of this approach to enable expressive discrete buttons and vibrotactile interfaces and show how the approach could bring rich sensing opportunities to integrated haptics modules in mobile devices, increasing sensing capabilities with fewer components. Our technique is potentially compatible with many existing LRA drivers, as they already employ back-EMF sensing for autotuning of the vibration frequency.

Different off-the-shelf LRAs that work using this technique.

Back-EMF Principle in an LRA
Inside the LRA enclosure is a magnet attached to a small mass, both moving freely on a spring. The magnet moves in response to the excitation voltage introduced by the voice coil. The motion of the oscillating mass produces a counter-electromotive force, or back-EMF, which is a voltage proportional to the rate of change of magnetic flux. A greater oscillation speed creates a larger back-EMF voltage, while a stationary mass generates zero back-EMF voltage.
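A toy simulation makes the relationship concrete: model the moving mass as a damped spring-mass oscillator and read the back-EMF as a voltage proportional to the mass velocity. All constants here are illustrative assumptions, not measurements of a real LRA.

```python
# Toy back-EMF model: damped spring-mass oscillator; the back-EMF
# voltage tracks the mass velocity (zero when the mass is still).
import numpy as np

f0 = 170.0               # resonant frequency (Hz), typical LRA ballpark
zeta = 0.05              # damping ratio (assumed; touch increases it)
k_emf = 0.5              # back-EMF constant (V per m/s), assumed
dt, steps = 1e-5, 20000

omega0 = 2 * np.pi * f0
x, v = 1e-4, 0.0         # displacement/velocity just after a drive pulse
emf = np.empty(steps)
for i in range(steps):
    a = -2 * zeta * omega0 * v - omega0**2 * x   # free damped response
    v += a * dt
    x += v * dt
    emf[i] = k_emf * v

# A touching finger dissipates energy (larger zeta), so the envelope of
# `emf` decays faster; that decay is what the sensing circuit measures.
print(f"peak back-EMF: {np.max(np.abs(emf)) * 1e3:.1f} mV")
```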

Anatomy of the LRA.

Active Back-EMF for Sensing
Touching or making contact with the LRA during vibration changes the velocity of the interior mass, as energy is dissipated into the contact object. This works well with soft materials that deform under pressure, such as the human body. A finger, for example, absorbs different amounts of energy depending on the contact force as it flattens against the LRA. By driving the LRA with small amounts of energy, we can measure this phenomenon using the back-EMF voltage. Because leveraging the back-EMF behavior for sensing is an active process, the key insight that enabled this work was the design of a custom, off-resonance driver waveform that allows continuous sensing while minimizing vibrations, sound and power consumption.

Touch and pressure sensing on the LRA.

We measure back-EMF from the floating voltage between the two LRA leads, which requires disconnecting the motor driver briefly to avoid disturbances. While the driver is disconnected, the mass is still oscillating inside the LRA, producing an oscillating back-EMF voltage. Because commercial back-EMF sensing LRA drivers do not provide the raw data, we designed a custom circuit that can pick up and amplify the small back-EMF voltage. We also generated custom drive pulses that minimize vibrations and energy consumption. 

Simplified schematic of the LRA driver and the back-EMF measurement circuit for active sensing.
After exciting the LRA with a short drive pulse, the back-EMF voltage fluctuates due to the continued oscillations of the mass on the spring (top, red line). The change in the back-EMF signal when subject to a finger press depends on the pressure applied (middle/bottom, green/blue lines).
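Putting the pieces together, the actuate-then-listen cycle can be sketched as below. The hardware hooks (`drive_pulse`, `disconnect_driver`, `read_adc`) are placeholders for this sketch, not a real driver API.

```python
# Hedged sketch of one multiplexed sensing cycle: excite the LRA with a
# short off-resonance pulse, float the leads, then sample the decaying
# back-EMF oscillation whose envelope encodes touch pressure.
import numpy as np

def sense_cycle(drive_pulse, disconnect_driver, read_adc, n_samples=256):
    drive_pulse()          # short, low-energy, off-resonance excitation
    disconnect_driver()    # float the leads so back-EMF is observable
    samples = np.array([read_adc() for _ in range(n_samples)])
    # Firmer finger press -> more energy absorbed -> faster decay, so
    # the envelope statistics serve as the pressure signal.
    envelope = np.abs(samples - samples.mean())
    return envelope.max(), envelope.mean()
```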

Applications
The behavior of the LRAs used in mobile phones is the same, whether they are on a table, on a soft surface, or hand held. This may cause problems, as a vibrating phone could slide off a glass table or emit loud and unnecessary vibrating sounds. Ideally, the LRA on a phone would automatically adjust based on its environment. We demonstrate our approach for sensing using the LRA back-EMF technique by wiring directly to a Pixel 4’s LRA, and then classifying whether the phone is held in hand, placed on a soft surface (foam), or placed on a table.

Sensing phone surroundings.

We also present a prototype that demonstrates how LRAs could be used as combined input/output devices in portable electronics. We attached two LRAs, one on the left and one on the right side of a phone. The buttons provide tap, touch, and pressure sensing. They are also programmed to provide haptic feedback, once the touch is detected.

Pressure-sensitive side buttons.

There are a number of wearable tactile aid devices, such as sleeves, vests, and bracelets. To transmit tactile feedback to the skin with consistent force, the tactor has to apply the right pressure; it cannot be too loose or too tight. Currently, the typical way to achieve this is manual adjustment, which can be inconsistent and lacks measurable feedback. We show how the LRA back-EMF technique can be used to continuously monitor the fit of a bracelet device and prompt the user if it's too tight, too loose, or just right. 

Fit sensing bracelet.

Evaluating an LRA as a Sensor
The LRA works well as a pressure sensor because it has a quadratic response to the force magnitude during touch. Our method works for all five off-the-shelf LRA types that we evaluated. Because the typical current draw is only 4.27 mA, all-day sensing would only reduce the battery life of a Pixel 4 phone from 25 to 24 hours. The power consumption can be greatly reduced by using low-power amplifiers and employing active sensing only when needed, such as when the phone is active and interacting with the user. 
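The battery claim is easy to sanity-check with back-of-the-envelope arithmetic, assuming the Pixel 4's nominal 2,800 mAh battery (an assumption of this sketch, not a figure from the paper):

```python
# If 25 h of battery life implies ~112 mA average draw from a 2,800 mAh
# battery (assumed capacity), adding a constant 4.27 mA sensing load
# brings the total to ~116 mA, or roughly 24 h of battery life.
battery_mah = 2800.0
baseline_hours = 25.0
sensing_ma = 4.27

baseline_draw_ma = battery_mah / baseline_hours             # ~112 mA
hours = battery_mah / (baseline_draw_ma + sensing_ma)
print(f"battery life with all-day sensing: {hours:.1f} h")  # ~24.1 h
```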

Back-EMF voltage changes when pressure is applied with a finger.

The challenge with active sensing is to minimize vibrations so that they are not perceptible to the touch and do not produce audible sound. We optimize the active sensing to produce only 2 dB of sound and 0.45 m/s² of peak-to-peak acceleration, which is just barely perceptible by finger and is quiet, in contrast to the 8.49 m/s² of a regular haptic vibration. 

Future Work and Conclusion
To see the work presented here in action, please see the video below.

In the future, we plan to explore other sensing techniques; measuring the current, for example, could be an alternative approach. Machine learning could also potentially improve the sensing and provide more accurate classification of the complex back-EMF patterns. Our method could be developed further to enable closed-loop feedback with the actuator and sensor, which would allow the actuator to provide the same force regardless of external conditions.

We believe that this work opens up new opportunities for leveraging existing, ubiquitous hardware to provide rich interactions and closed-loop haptic feedback.

Acknowledgments
This work was done by Artem Dementyev, Alex Olwal, and Richard Lyon. Thanks to Mathieu Le Goc and Thad Starner for feedback on the paper.

Source: Google AI Blog


Our best Chromecast yet, now with Google TV


Chromecast changed the way we enjoy our favourite movies, TV shows and YouTube videos by making it easy and inexpensive to bring your online entertainment to your TV—a revolutionary idea in 2013. Today, we have more content choices than ever, sprinkled across an ever-expanding variety of apps, which can make it difficult to find what to watch. This inspired us to rethink what simple and easy content discovery on your TV should look like. So today, we're making our biggest leap yet to help you navigate your entertainment choices, bringing together the best of local and global content into one convenient location, with the all-new Chromecast with Google TV. 
Best Chromecast yet 
Chromecast with Google TV has your favourite Chromecast features and now comes with the all-new Google TV entertainment experience. The Google TV experience brings together movies, shows and more from across your apps and subscriptions and organises them just for you. We're also bringing our most requested feature—a remote—to Chromecast. 

A new look, inside and out 
The new Chromecast with Google TV comes in a compact and thin design and is packed with the latest technology to give you the best viewing experience. It neatly plugs into your TV's HDMI port and tucks behind your screen. Power it on and you'll be streaming crystal clear video in up to 4K HDR at up to 60 frames per second in no time. With Dolby Vision, you’ll get extraordinary colour, contrast and brightness on your TV. We also support HDMI pass-through of Dolby audio content. 

More power in your hand 
The new Chromecast voice remote is comfortable to hold, easy to use and full of new features. It has a dedicated Google Assistant button that can help you find something to watch, answer everyday questions like “how's the weather?” or play your favourite artist on YouTube Music all with just your voice. And when it's time to cozy up on the couch for movie night, you can control your smart home lights to set the mood or check your front door with Nest Camera to keep tabs on your pizza delivery. We also have dedicated buttons for popular streaming services, YouTube and Netflix, to give you instant access to the content you love. Best of all, you won't have to juggle multiple remotes thanks to our programmable TV controls for power, volume and input. 

TV just for you 
In need of some good movie or TV recommendations? Google TV's For You tab gives you personalised watch suggestions from across your subscriptions organised based on what you like to watch—even your guilty pleasure reality dramas. Google TV’s Watchlist lets you bookmark movies and shows you want to save for later. You can add to your Watchlist from your phone or laptop, and it will be waiting on your TV when you get home. 
Best of all, you'll also have access to thousands of apps and the ability to browse 400,000+ movies and TV shows sorted and optimised for what you like—ask Google Assistant to see results from across your favourite apps, like YouTube, Netflix, Disney+, Stan, 9Now and ABC iview, among others. 

Starting today, Chromecast with Google TV is available for pre-order in Australia for $99 in three fun colours to match your decor or personality: Snow, Sunrise and Sky. It will be available from the Google Store as well as other retailers like JB Hi-Fi, Harvey Norman, Officeworks, and The Good Guys starting from October 15. Sunrise and Sky will be exclusively available on the Google Store. 


Made for music, the new Nest Audio is here

This year, we’ve all spent a lot of time exploring things to do at home. Some of us gardened, and others baked. We tried at-home workouts, redecorated the house or took up art projects. But one thing that many—maybe all of us—did? Enjoyed a lot of music at home. Personally, I have spent so much more time listening to music during quarantine—bossa nova is my go-to soundtrack for doing the dishes and Lil Baby has become one of my favourite artists. 
So, in a time when we’re all listening to more music than ever, we’re especially excited to introduce Nest Audio, our latest smart speaker that is made for music lovers. 

A music machine 
Nest Audio is 75 percent louder and has 50 percent stronger bass than the original Google Home—measurements of both devices were taken in an anechoic chamber at maximum volume, on-axis. With a 19mm tweeter for consistent high frequency coverage and clear vocals and a 75mm mid-woofer that really brings the bass, this smart speaker is a music lover’s dream. 
Nest Audio’s sound is full, clear and natural. We completed more than 500 hours of tuning to ensure balanced lows, mids and highs so that nothing is lacking or overbearing. The bass is significant and the vocals have depth, which makes Nest Audio sound great across genres: classical, R&B, pop and more. The custom-designed tweeter allows each musical detail to come through, and we optimised the grill, fabric and materials so that you can enjoy the audio without distortion. 
Our goal was to ensure that Nest Audio stayed faithful to what the artist intended when they were in the recording studio. We minimised the use of compressors to preserve dynamic range, so that the auditory contrast in the original production is preserved—the quiet parts are delicate and subtle, and the loud parts are more dramatic and powerful. 
Nest Audio also adapts to your home. Our Media EQ feature enables Nest Audio to automatically tune itself to whatever you’re listening to: music, podcasts, audiobooks or hearing a response from Google Assistant. And Ambient IQ lets Nest Audio also adjust the volume of Assistant, news, podcasts, and audiobooks based on the background noise in the home, so you can hear the weather forecast over a noisy dishwasher. 

Whole home audio 
If you have a Google Home, Nest Mini or even a Nest Hub, you can easily make Nest Audio the centre of your whole home sound system. In my living room, I’ve connected two Nest Audio speakers as a stereo pair for left and right channel separation. I also have a Nest Hub Max in my kitchen, a Nest Mini in my bedroom and a Nest Hub in the entryway. These devices are grouped so that I can blast the same song on all of them when I have my daily dance party. 
With our stream transfer feature, I can move music from one device to the other with just my voice. Just last month, we launched multi-room control, which allows you to dynamically group multiple cast-enabled Nest devices in real-time. 

An even faster Assistant 
When we launched Nest Mini last year, we embedded a dedicated machine learning chip with up to one TeraOPS of processing power, which let us move some Google Assistant experiences from our data centres directly onto the device. We’ve leveraged the same ML chip in Nest Audio too.
Google Assistant helps you tackle your day, enjoy your entertainment and control compatible smart home brands like Philips Hue, TP-Link and more. In fact, our users have already set up more than 100 million devices to work with Google Assistant. Plus, if you’re a YouTube Music or Spotify Premium subscriber, you can say, “Hey Google, recommend some music” and Google Assistant will offer a variety of choices from artists and genres that you like, and others like them to choose from.

Differentiated by design 
Typically, a bigger speaker equals bigger sound, but Nest Audio has a really slim profile—so it fits anywhere in the home. In order to maximise audio output, we custom-designed quality drivers and housed them in an enclosure that helps it squeeze out every bit of sound possible. 
Nest Audio is available in two colours in Australia: Chalk and Charcoal. Its soft, rounded edges blend in with your home’s decor, and its minimal footprint doesn't take up too much space on your shelf or countertop. 
We’re continuing our commitment to sustainability with Nest Audio. It’s covered in the same sustainable fabric that we first introduced with Nest Mini last year, and the enclosure (meaning the fabric, housing, foot, and a few smaller parts) is made from 70 percent recycled plastic. 

Starting today, Nest Audio is available for pre-order in Australia for $149 at the Google Store and other retailers, including JB Hi-Fi, Harvey Norman, and The Good Guys. It will be on sale from October 15 through these same retailers, as well as Officeworks and Vodafone. 

Pixel 4a (5G) and Pixel 5 pack 5G speeds and so much more

Today, we hosted Launch Night In, a virtual event introducing new products from across Google that will offer a little joy, entertainment and connection for people. These products bring together the best of Google’s hardware, software and AI to deliver helpful experiences built around you. Not only are these products more helpful; they’re more affordable too. 
Our new smartphones, Pixel 4a with 5G and Pixel 5 offer more helpful Google features backed by the power and speeds of 5G.1 From Google’s latest AI and Assistant features, to the biggest ever batteries we’ve put in a Pixel, to industry-leading camera features, Pixel 4a with 5G and Pixel 5 join our much loved Pixel 4a in providing more help at a more helpful price. 

5G speeds at affordable prices 
5G is the latest in mobile technology, bringing fast download and streaming speeds to users around the world. Whether you’re downloading the latest movie2, listening to your favourite music on YouTube Music, catching up on podcasts with Google Podcasts or downloading a game, Pixel 4a with 5G and Pixel 5 can provide you with fast speeds at a helpful price, starting at just $799 for Pixel 4a with 5G.1

New camera, new lenses—same great photos 
Ask any Pixel owner and they’ll tell you: Pixels take great photos. Pixel 4a with 5G and Pixel 5 are no exception. These phones bring Pixel’s industry-leading photography features to the next level. 
  • Better videos with Cinematic Pan: Pixel 4a with 5G and Pixel 5 come with Cinematic Pan, which gives your videos a professional look with ultrasmooth panning that’s inspired by the equipment Hollywood directors use. 
  • Night Sight in Portrait Mode: Night Sight currently gives you the ability to capture amazing low-light photos—and even the Milky Way with astrophotography. Now, these phones bring the power of Night Sight into Portrait Mode to capture beautifully blurred backgrounds in Portraits even in extremely low light. 
Night Sight in Portrait Mode, captured on Pixel 
  • Portrait Light: Portrait Mode on the Pixel 4a with 5G and Pixel 5 lets you capture beautiful portraits that focus on your subject as the background fades into an artful blur. If the lighting isn’t right, your Pixel can drop in extra light to illuminate your subjects. 
  • Ultrawide lens for ultra awesome shots: With an ultrawide lens alongside the standard rear camera, you’ll be able to capture the whole scene. And thanks to Google’s software magic, the latest Pixels still get our Super Res Zoom. So whether you’re zooming in or zooming out, you get sharp details and breathtaking images. 
Ultrawide, captured on Pixel 
  • New editor in Google Photos: Even after you’ve captured your portrait, Google Photos can help you add studio-quality light to your portraits of people with Portrait Light, in the new, more helpful Google Photos editor. 
Stay connected and entertained with Duo 
To make it easier and more enjoyable to stay connected to the most important people in your life, the new HD screen sharing in Duo video calls lets you and a friend watch the same video, cheer on sports together and even plan activities – no matter how far apart you are.3 And with features like Duo Family mode, you will be able to keep kids entertained and engaged with new interactive tools, like colouring over backgrounds, while you video chat. 

A smarter way to record and share audio 
Last year, Recorder made audio recording smarter, with real-time transcriptions and the power of search.4 Now, Recorder makes it even easier to share your favourite audio moments. Since Recorder automatically transcribes every recording, now you can use those transcripts to edit the audio too. Just highlight a sentence to crop or remove its corresponding audio. Once you have something you want others to hear—say a quote from an interview or a new song idea—you can generate a video clip to make sharing your audio easier and more visual than ever. 
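A minimal sketch of how transcript-driven audio editing can work: if every transcribed word carries start and end timestamps, deleting words amounts to cropping the matching sample spans. The data layout here is an assumption for illustration, not Recorder's internal format.

```python
# Transcript-driven cropping: drop the audio spans belonging to the
# selected words and keep the rest, returning the edited audio and text.
import numpy as np

SAMPLE_RATE = 16000  # assumed

def remove_words(audio: np.ndarray, words: list, drop: set):
    """words: [{'text': str, 'start': seconds, 'end': seconds}, ...]"""
    keep = np.ones(len(audio), dtype=bool)
    for i in drop:
        start = int(words[i]["start"] * SAMPLE_RATE)
        end = int(words[i]["end"] * SAMPLE_RATE)
        keep[start:end] = False                 # crop this word's audio
    kept_words = [w for i, w in enumerate(words) if i not in drop]
    return audio[keep], kept_words
```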
Editing in Recorder is easy

To improve searching through your transcripts, smart scrolling will automatically mark important words in longer transcripts so you can quickly jump to the sections you’re looking for as you scroll. But most helpful of all? Recorder still works without an internet connection, so you can transcribe, search and edit from anywhere, anytime. 

The biggest Pixel batteries ever 
Pixel 4a with 5G and Pixel 5 also have all-day batteries that can last up to 48 hours with Extreme Battery Saver.5 This mode automatically limits active apps to just the essentials and lets you choose additional apps you want to keep on. 

And now, the specs 
As with all Pixel devices, security and safety are paramount in Pixel 4a with 5G and Pixel 5. Both devices come with our Titan™ M security chip to help keep your on-device data safe and secure, and both phones will get three years of software and security updates. Your Pixel also has built-in safety features like car crash detection6 and Safety Check.7
Plus, Pixel 5 is designed with the environment in mind; we used 100% recycled aluminium in the back housing enclosure to reduce its carbon footprint. You can charge your Pixel 5 wirelessly8 and even use it to wirelessly charge other Qi-certified devices using Battery Share.9 Pixel 5 also doesn’t mind a little water or dust. The metal unibody can handle being submerged in 1.5 metres of fresh water for 30 minutes.10
When you buy a Google phone, you get more from Google. Pixel 5 and Pixel 4a with 5G come with trial subscriptions to Google’s entertainment, security and storage services for new users.11 If you’re a new user, you’ll get a YouTube Premium trial for 3 months, 100 GB of storage with Google One for 3 months and 3 months of Google Play Pass and Gold/Silver Status on Play Points. See g.co/pixel/4a5Goffers or g.co/pixel/5offers, as applicable, for more details.11 
In Australia, Pixel 5 will come in two colours, Just Black and Sorta Sage (selected retailers). It will retail for $999 and can be pre-ordered today from Google Store, Telstra, Optus, Vodafone, JB Hi-Fi, Officeworks and Harvey Norman, and will be available starting October 15. Pixel 4a with 5G will retail for $799 and can be pre-ordered today from JB Hi-Fi, Officeworks and Harvey Norman, and will be available from these retailers in addition to Google Store and Telstra in November, in Just Black. 


Looking for the Pixel that’s right for you? Head to the Google Store now. 

1 Requires a 5G data plan (sold separately). 5G service and roaming not available on all carrier networks or in all areas. Contact carrier for details about current 5G network performance, compatibility, and availability. Phone connects to 5G networks but, 5G service, speed and performance depend on many factors including, but not limited to, carrier network capabilities, device configuration and capabilities, network traffic, location, signal strength and signal obstruction. Actual results may vary. Some features not available in all areas. Data rates may apply. See g.co/pixel/networkinfo for info. 
2 Download speed claims based on testing videos from three streaming platforms. Average download time was less than sixty seconds. File sizes varied between 449MB and 1.3GB. Download speed depends upon many factors, such as file size, content provider and network connection. Testing conducted in an internal 5G network lab and on pre-production hardware in California in July/August 2020. Actual download speeds may be slower. Australian results may vary. 
3 Screen sharing not available on group calls. Requires Wi-Fi or 5G internet connection. Not available on all apps and content. Data rates may apply. 5G service, speed and performance depend on many factors including, but not limited to, carrier network capabilities, device configuration and capabilities, network traffic, location, signal strength, and signal obstruction. 
4 Transcription and search are available in English only. 
5 For “all day”: Maximum battery life based on testing using a mix of talk, data, standby, and use of other features. Testing conducted on two major US carrier networks using Sub-6 GHz non-standalone 5G (ENDC) connectivity. For “Up to 48 hours”: Maximum battery life based on testing using a mix of talk, data, standby, and use of limited other features that are default in Extreme Battery Saver mode (which disables various features including 5G connectivity). Testing conducted on two major US carrier networks. For both claims: Pixel 4a (5G) and Pixel 5 battery testing conducted by a third party in California in mid 2020 on pre-production hardware and software using default settings, except that, for the “up to 48 hour claim” only, Extreme Battery Saver mode was enabled. Battery life depends upon many factors and usage of certain features will decrease battery life. Actual battery life may be lower.
6 Not available in all languages or countries. Car crash detection may not detect all accidents. High-impact activities may trigger calls to emergency services. This feature is dependent upon network connectivity and other factors and may not be reliable for emergency communications or available in all areas. For country and language availability and more information see g.co/pixel/carcrashdetection. 
7 Personal Safety app features are dependent upon network connectivity and other factors and may not be reliable for emergency communications or available in all areas. For more information, see g.co/pixel/personalsafety. 
8 Qi-compatible. Wireless charger sold separately. 
9 Designed to charge Qi-certified devices. Use of Battery Share significantly reduces Pixel battery life. Cases may interfere with charging and will reduce charging speed. Charge speeds may vary. See g.co/pixel/wirelesscharging for more information. 
10 Pixel 5 has a dust and water protection rating of IP68 under IEC standard 60529. Charger and accessories are not water-resistant or dust-resistant. Water and dust resistance are not permanent conditions and may be compromised due to normal wear and tear, repair, disassembly or damage. 
11 The Google One, Google Play Pass, Google Play Points, and YouTube Premium offers are available to eligible new users with the purchase of Pixel 4a (5G) or Pixel 5. Offer expires April 30, 2021 at 11:59pm PT. See g.co/pixel/4a5Goffers or g.co/pixel/5offers, as applicable, for more details.

Made for music, the new Nest Audio is here


This year, we’ve all spent a lot of time exploring things to do at home. Some of us gardened, and others baked. We tried at-home workouts, or took up art projects. But one thing that many—maybe all of us—did? Enjoyed a lot of music at home. I’ve spent so much more time listening to music during quarantine—bossa nova is my go-to soundtrack for doing the dishes and Lil Baby has become one of my favorite artists. But you might even prefer Mohammed Rafi or Ilayaraja.


To help provide a richer soundtrack to your time at home, we’re especially excited to introduce Nest Audio, our latest smart speaker made for music lovers.


A music machine

Nest Audio is 75 percent louder and has 50 percent stronger bass than the original Google Home—measurements of both devices were taken in controlled conditions. With a 19mm tweeter for consistent high frequency coverage and clear vocals and a 75mm mid-woofer that really brings the bass, this smart speaker is built to deliver a rich musical experience. 


Nest Audio’s sound is full, clear, and natural. We completed more than 500 hours of tuning to ensure balanced lows, mids and highs so that nothing is lacking or overbearing. The bass is significant and the vocals have depth, which makes Nest Audio sound great across genres: classical, R&B, pop and more. The custom-designed tweeter allows each musical detail to come through, and we optimized the grill, fabric and materials so that you can enjoy the audio without distortion. 


Our goal was to ensure that Nest Audio stayed faithful to what the artist intended when they were in the recording studio. We minimized the use of compressors to preserve dynamic range, so the auditory contrast in the original production is preserved—the quiet parts are delicate and subtle, and the loud ones are more dramatic and powerful. 


Nest Audio also adapts to your home. Our Media EQ feature enables Nest Audio to automatically tune itself to whatever you’re listening to: music, podcasts, audiobooks or even a response from Google Assistant. And Ambient IQ lets Nest Audio also adjust the volume of Assistant, news, podcasts and audiobooks based on the background noise in your home, so you can hear the weather forecast over a noisy vacuum cleaner.


Whole home audio

If you have a Google Home, Nest Mini or even a Nest Hub, you can easily make Nest Audio the center of your whole home sound system. In my living room, I’ve connected two Nest Audio speakers as a stereo pair for left and right channel separation. I also have a Nest Mini in my bedroom and a Nest Hub in the entryway. These devices are grouped so that I can blast the same song on all of them when I have my daily dance party. 


With our stream transfer feature, I can move music from one device to the other with just my voice*. I can even transfer music or podcasts from my phone when I walk in the door. Just last month, we launched multi-room control, which allows you to dynamically group multiple cast-enabled Nest devices in real time. 


The Google Assistant you love

Google Assistant, available in Hindi and English, helps you tackle your day, enjoy your entertainment and control compatible smart home brands like Philips Hue, TP-Link and more. In fact, people have already set up more than 100 million devices to work with Google Assistant. Plus, if you’re a YouTube Music or Spotify Premium subscriber, you can say, “Ok Google, recommend some music” and Google Assistant will offer a variety of choices from artists and genres that you like as well as others that are similar.


Differentiated by design

Typically, a bigger speaker equals bigger sound, but Nest Audio has a really slim profile—so it fits anywhere in the home. In order to maximize audio output, we custom-designed quality drivers and housed them in an enclosure that helps it squeeze out every bit of sound possible. 


Nest Audio will be available in India in two colors: Chalk and Charcoal. Its soft, rounded edges blend in with your home’s decor, and its minimal footprint doesn't take up too much space on your shelf or countertop. 


We’re continuing our commitment to sustainability with Nest Audio. It’s covered in the same sustainable fabric that we first introduced with Nest Mini last year, and the enclosure (meaning the fabric, housing, foot, and a few smaller parts) is made from 70 percent recycled plastic. 


Nest Audio will be available in India on Flipkart and at other retail outlets later this month. Stay tuned for more information on pricing and offers, which will be announced closer to the sale date.


Posted by Mark Spates, Product Manager, Google Nest 


*currently only available in English in India


An update on Fitbit

Last year, we announced that Google entered into an agreement to acquire Fitbit to help spur innovation in wearable devices and build products that help people lead healthier lives. As we continue to work with regulators to answer their questions, we wanted to share more about how we believe this deal will increase choice, and create engaging products and helpful experiences for consumers.

There's vibrant competition when it comes to smartwatches and fitness trackers, with Apple, Samsung, Garmin, Fossil, Huawei, Xiaomi and many others offering numerous products at a range of prices. We don’t make or sell wearable devices like these today. We believe the combination of Google and Fitbit's hardware efforts will increase competition in the sector, making the next generation of devices better and more affordable. 

This deal is about devices, not data. We’ve been clear from the beginning that we will not use Fitbit health and wellness data for Google ads. We recently offered to make a legally binding commitment to the European Commission regarding our use of Fitbit data. As we do with all our products, we will give Fitbit users the choice to review, move or delete their data. And we’ll continue to support wide connectivity and interoperability across our and other companies’ products. 

We appreciate the opportunity to work with the European Commission on an approach that addresses consumers' expectations of their wearable devices. We’re confident that by working closely with Fitbit’s team of experts, and bringing together our experience in AI, software and hardware, we can build compelling devices for people around the world.

Enabling E-Textile Microinteractions: Gestures and Light through Helical Structures



Textiles have the potential to help technology blend into our everyday environments and objects by improving aesthetics, comfort, and ergonomics. Consumer devices have started to leverage these opportunities through fabric-covered smart speakers and braided headphone cords, while advances in materials and flexible electronics have enabled the incorporation of sensing and display into soft form factors, such as jackets, dresses, and blankets.
A scalable interactive E-textile architecture with embedded touch sensing, gesture recognition and visual feedback.
In “E-textile Microinteractions” (Proceedings of ACM CHI 2020), we bring interactivity to soft devices and demonstrate how machine learning (ML) combined with an interactive textile topology enables parallel use of discrete and continuous gestures. This work extends our previously introduced E-textile architecture (Proceedings of ACM UIST 2018). This research focuses on cords, due to their modular use as drawstrings in garments, and as wired connections for data and power across consumer devices. By exploiting techniques from textile braiding, we integrate both gesture sensing and visual feedback along the surface through a repeating matrix topology.

For insight into how this works, please see this video about E-textile microinteractions and this video about the E-textile architecture.
E-textile microinteractions combining continuous sensing with discrete motion and grasps.
The Helical Sensing Matrix (HSM)
Braiding generally refers to the diagonal interweaving of three or more material strands. While braids are traditionally used for aesthetics and structural integrity, they can also be used to enable new sensing and display capabilities.

Whereas cords can be made to detect basic touch gestures through capacitive sensing, we developed a helical sensing matrix (HSM) that enables a larger gesture space. The HSM is a braid consisting of electrically insulated conductive textile yarns and passive support yarns, where conductive yarns braided in opposite directions take the roles of transmit and receive electrodes to enable mutual capacitive sensing. The capacitive coupling at their intersections is modulated by the user’s fingers, and these interactions can be sensed anywhere on the cord since the braided pattern repeats along the length.
Left: A Helical Sensing Matrix based on a 4×4 braid (eight conductive threads spiraled around the core). Magenta/cyan are conductive yarns, used as receive/transmit lines. Grey are passive yarns (cotton). Center: The flattened matrix, illustrating how the 4×4 matrices (colored circles 0–F) repeat along the length of the cord. Right: Yellow are fiber optic lines, which provide visual feedback.
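As a rough illustration of the sensing loop, the Python sketch below scans a 4×4 transmit/receive matrix one transmit line at a time. The post does not detail the sensing front end, so drive_tx and read_rx are hypothetical stand-ins for whatever hardware interface would provide these measurements.

import numpy as np

N_TX, N_RX = 4, 4  # conductive yarns braided in opposite directions

def drive_tx(i: int) -> None:
    """Hypothetical: excite transmit yarn i with the sensing waveform."""

def read_rx(j: int) -> float:
    """Hypothetical: measure the coupled signal on receive yarn j."""
    return 0.0

def scan_matrix() -> np.ndarray:
    """Return one N_TX x N_RX mutual-capacitance frame.

    A finger near a TX/RX intersection changes the coupling at that
    cell; because the braided pattern repeats, the same 4x4 frame is
    produced wherever along the cord the user touches.
    """
    frame = np.zeros((N_TX, N_RX))
    for i in range(N_TX):
        drive_tx(i)
        for j in range(N_RX):
            frame[i, j] = read_rx(j)
    return frame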
Rotation Detection
A key insight is that the two axial columns in an HSM that share a common set of electrodes (and color in the diagram of the flattened matrix) are 180º opposite each other. Thus, pinching and rolling the cord activates a set of electrodes and allows us to track relative motion across these columns. Rotation detection identifies the current phase with respect to the set of time-varying sinusoidal signals that are offset by 90º. The braid allows the user to initiate rotation anywhere, and is scalable with a small set of electrodes.
Rotation is deduced from horizontal finger motion across the columns. The plots below show the relative capacitive signal strengths, which change with finger proximity.
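To make the phase idea concrete, here is a minimal sketch assuming two column signals that are offset by 90º and already baseline-corrected (signal names are illustrative): arctan2 recovers the instantaneous phase, and unwrapping accumulates it into a continuous roll angle.

import numpy as np

def roll_angle(sig_0deg: np.ndarray, sig_90deg: np.ndarray) -> np.ndarray:
    """Cumulative roll angle (radians) from two 90-degree-offset signals.

    arctan2 recovers the instantaneous phase; np.unwrap removes the
    2*pi jumps so the angle grows continuously as the finger rolls.
    """
    return np.unwrap(np.arctan2(sig_90deg, sig_0deg))

# Example: a synthetic half-turn of pinch-and-roll.
t = np.linspace(0, np.pi, 50)
print(roll_angle(np.cos(t), np.sin(t))[-1])  # ~3.14 rad, i.e., 180 degrees

Because the phase is relative, this decoding works wherever on the cord the user initiates the roll, which is what makes the approach scalable with only a small set of electrodes.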
Interaction Techniques and Design Guidelines
This e-textile architecture makes the cord touch-sensitive, but its softness and malleability limit suitable interactions compared to rigid touch surfaces. With the unique material in mind, our design guidelines emphasize:
  • Simple gestures. We design for short interactions where the user either makes a single discrete gesture or performs a continuous manipulation.

  • Closed-loop feedback. We want to help the user discover functionality and get continuous feedback on their actions. Where possible, we provide visual, tactile, and audio feedback integrated in the device.
Based on these principles, we leverage our e-textile architecture to enable interaction techniques based on our ability to sense proximity, area, contact time, roll and pressure.
Our e-textile enables interaction based on capacitive sensing of proximity, contact area, contact time, roll, and pressure.
The inclusion of fiber optic strands that can display color of varying intensity enables dynamic, real-time feedback to the user.
Braided fiber optics strands create the illusion of directional motion.
Motion Gestures (Flicks and Slides) and Grasping Styles (Pinch, Grab, Pat)
We conducted a gesture elicitation study, which showed opportunities for an expanded gesture set. Inspired by these results, we decided to investigate five motion gestures based on flicks and slides, along with single-touch gestures (pinch, grab and pat).
Gesture elicitation study with imagined touch sensing.
We collected data from 12 new participants, which resulted in 864 gesture samples (12 participants each performed eight gestures, repeated nine times), each having 16 features linearly interpolated to 80 observations over time. Participants performed the eight gestures in their own style and without feedback, because we wanted to accommodate individual differences; classification is highly dependent on user style (“contact”), preference (“how to pinch/grab”) and anatomy (e.g., hand size). Our pipeline was thus designed for user-dependent training to enable individual styles, with differences across participants such as inconsistent use of clockwise/counterclockwise, overlap between temporal gestures (e.g., flick vs. flick-and-hold), and similar pinch and grab gestures. For a user-independent system, we would need to address such differences, for example with stricter instructions for consistency, data from a larger population, and data collected in more diverse settings. Real-time feedback during training would also help mitigate differences as the user learns to adjust their behavior.
Twelve participants (horizontal axis) performed 9 repetitions (animation) for the eight gestures (vertical axis). Each sub-image shows 16 overlaid feature vectors, interpolated to 80 observations over time.
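The resampling step can be sketched as follows; the variable names are illustrative, but the shapes match the description above (16 features linearly interpolated to 80 observations per sample).

import numpy as np

N_FEATURES, N_OBS = 16, 80

def resample_gesture(sample: np.ndarray) -> np.ndarray:
    """Linearly interpolate a (T, 16) recording to a fixed (80, 16) array."""
    t_in = np.linspace(0.0, 1.0, len(sample))
    t_out = np.linspace(0.0, 1.0, N_OBS)
    return np.stack(
        [np.interp(t_out, t_in, sample[:, f]) for f in range(N_FEATURES)],
        axis=1,
    )

raw = np.random.rand(63, N_FEATURES)  # e.g., a 63-frame recording
print(resample_gesture(raw).shape)    # (80, 16)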
We performed cross-validation for each user across the gestures by training on eight repetitions and testing on the held-out one, across all nine permutations, and achieved a gesture recognition accuracy of ~94%. This result is encouraging, especially given the expressivity enabled by such a low-resolution sensor matrix (eight electrodes).

Notable here is that inherent relationships in the repeated sensing matrices are well-suited for machine learning classification. The ML classifier used in our research enables quick training with limited data, which makes a user-dependent interaction system reasonable. In our experience, training for a typical gesture takes less than 30s, which is comparable to the amount of time required to train a fingerprint sensor.
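A minimal sketch of this per-user evaluation is shown below, using the repetition index as the held-out group. The linear SVM is an assumption for illustration; the post does not name the classifier used.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

def per_user_accuracy(X: np.ndarray, y: np.ndarray, rep: np.ndarray) -> float:
    """Mean accuracy over the nine leave-one-repetition-out splits.

    X: (72, 80 * 16) flattened samples for one user (8 gestures x 9 reps).
    y: gesture labels (0-7). rep: repetition index (0-8), the CV group.
    """
    scores = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=rep):
        clf = SVC(kernel="linear").fit(X[train], y[train])
        scores.append(clf.score(X[test], y[test]))
    return float(np.mean(scores))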

User-Independent, Continuous Twist: Quantifying Precision and Speed
The per-user trained gesture recognition enabled eight new discrete gestures. For continuous interactions, we also wanted to quantify how well user-independent, continuous twist performs for precision tasks. We compared our e-textile with two baselines, a capacitive multi-touch trackpad (“Scroll”) and the familiar headphone cord remote control (“Buttons”). We designed a lab study where the three devices controlled 1D movement in a targeting task.

We analyzed three dependent variables for the 1,800 trials, covering 12 participants and three techniques: time on task (milliseconds), total motion, and motion at the end of the trial. Participants also provided qualitative feedback through rankings and comments.

Our quantitative analysis suggests that our e-textile’s twisting is faster than existing headphone button controls and comparable in speed to a touch surface. Qualitative feedback also indicated a preference for e-textile interaction over headphone controls.
Left: Weighted average subjective feedback. We mapped each 7-point Likert rating to a score in the range [-3, 3], multiplied each score by the number of times the technique received that rating, and averaged across all the scores. Right: Mean completion times for target distances show that Buttons were consistently slower.
These results are particularly interesting given that our e-textile was more sensitive than the rigid input devices. One explanation might be its expressiveness: users can twist quickly or slowly anywhere on the cord, and the actions are symmetric and reversible. Conventional headphone buttons require users to find their location and change grips between actions, which makes pressing the wrong button costly. We use a high-pass filter to limit the effect of accidental skin contact, but further work is needed to characterize robustness and evaluate long-term performance in actual contexts of use.
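As a sketch of such a filter, assuming a 100 Hz sampling rate and a 0.5 Hz cutoff (neither is specified in the post): slow baseline drift and sustained casual contact are attenuated, while the faster signal changes produced by deliberate twists pass through.

import numpy as np
from scipy.signal import butter, sosfilt

FS = 100.0    # assumed capacitive sampling rate (Hz)
CUTOFF = 0.5  # assumed cutoff, below typical deliberate-gesture dynamics

# Second-order-sections form is numerically robust for low cutoffs.
sos = butter(2, CUTOFF, btype="highpass", fs=FS, output="sos")

def filter_channel(raw: np.ndarray) -> np.ndarray:
    """Remove baseline drift and slow, sustained contact from one channel."""
    return sosfilt(sos, raw)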

Gesture Prototypes: Headphones, Hoodie Drawstrings, and Speaker Cord
We developed different prototypes to demonstrate the capabilities of our e-textile architecture: e-textile USB-C headphones to control media playback on the phone, a hoodie drawstring to invisibly add music control to clothing, and an interactive cord for gesture controls of smart speakers.
Left: Tap = Play/Pause; Center: Double-tap = Next track; Right: Roll = Volume +/-
Interactive speaker cord for simultaneous use of continuous (twisting/rolling) and discrete gestures (pinch/pat) to control music playback.
Conclusions and Future Directions
We introduce an interactive e-textile architecture for embedded sensing and visual feedback, which can enable both precise small-scale and large-scale motion in a compact cord form factor. With this work, we hope to advance textile user interfaces and inspire the use of microinteractions for future wearable interfaces and smart fabrics, where eyes-free access and casual, compact and efficient input is beneficial. We hope that our e-textile will inspire others to augment physical objects with scalable techniques, while preserving industrial design and aesthetics.

Acknowledgements
This work is a collaboration across multiple teams at Google. Key contributors to the project include Alex Olwal, Thad Starner, Jon Moeller, Greg Priest-Dorman, Ben Carroll, and Gowa Mainini. We thank the Google ATAP Jacquard team for our collaboration, especially Shiho Fukuhara, Munehiko Sato, and Ivan Poupyrev. We thank Google Wearables, and Kenneth Albanowski and Karissa Sawyer, in particular. Finally, we would like to thank Mark Zarich for illustrations, Bryan Allen for videography, Frank Li for data processing, Mathieu Le Goc for valuable discussions, and Carolyn Priest-Dorman for textile advice.

Source: Google AI Blog