Audioplethysmography for cardiac monitoring with hearable devices

The market for true wireless stereo (TWS) active noise canceling (ANC) hearables (headphones and earbuds) has been soaring in recent years, with global shipment volume expected to nearly double that of smart wristbands and watches in 2023. On-head time for hearables has extended significantly thanks to recent advances in ANC, transparency mode, and artificial intelligence. Users frequently wear hearables not just for listening to music, but also for exercising, focusing, or simply adjusting their mood. However, hearable health is still mostly uncharted territory for the consumer market.

In “APG: Audioplethysmography for Cardiac Monitoring in Hearables,” presented at MobiCom 2023, we introduce a novel active in-ear health sensing modality. Audioplethysmography (APG) enables ANC hearables to monitor a user's physiological signals, such as heart rate and heart rate variability, without adding extra sensors or compromising battery life. APG exhibits high resilience to motion artifacts, adheres to safety regulations with an 80 dB margin below the limit, remains unaffected by seal conditions, and is inclusive of all skin tones.

APG sends a low-intensity ultrasound transmit wave (TX wave) using an ANC headphone's speakers and collects the receive wave (RX wave) via the on-board feedback microphones. The APG signal is a pulse-like waveform that synchronizes with the heartbeat and reveals rich cardiac information, such as dicrotic notches.


Health sensing in the ear canal

The auditory canal receives its blood supply from the arteria auricularis profunda, also known as the deep ear artery. This artery forms an intricate network of smaller vessels that extensively permeate the auditory canal. Slight variations in blood vessel shape caused by the heartbeat (and blood pressure) can lead to subtle changes in the volume and pressure of the ear canals, making the ear canal an ideal location for health sensing.

Recent research has explored using hearables for health sensing by packaging together a plethora of sensors — e.g., photoplethysmograms (PPG) and electrocardiograms (ECG) — with a microcontroller to enable health applications such as sleep monitoring and heart rate and blood pressure tracking. However, this sensor-mounting paradigm inevitably adds cost, weight, power consumption, acoustic design complexity, and form factor challenges to hearables, constituting a strong barrier to wide adoption.

Existing ANC hearables deploy feedback and feedforward microphones to perform the ANC function. These microphones create new opportunities for various sensing applications, as they can detect or record many bio-signals inside and outside the ear canal. For example, feedback microphones can be used to listen to heartbeats, and feedforward microphones can hear respiration. Academic research on this passive sensing paradigm has prompted many mobile applications, including heart rate monitoring, ear disease diagnosis, respiration monitoring, and body activity recognition. However, microphones in consumer-grade ANC headphones come with built-in high-pass filters to prevent saturation from body motions or strong wind noise. The signal quality of passive listening in the ear canal also heavily relies on the earbud seal conditions. As such, it is challenging to embed health features that rely on passive listening to low-frequency signals (≤ 50 Hz) on commercial ANC headphones.
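To make this constraint concrete, here is a toy Python sketch (our own illustration, not any specific headphone's firmware; the 50 Hz cutoff and filter order are assumptions) showing how a built-in high-pass filter removes exactly the low-frequency band where heartbeat sounds live:

```python
# Toy illustration: a built-in high-pass filter (assumed 50 Hz cutoff)
# erases the heartbeat band before the signal ever reaches software.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 8_000                                    # microphone sample rate (Hz)
t = np.arange(0, 5, 1 / fs)
heartbeat = np.sin(2 * np.pi * 1.2 * t)       # ~72 bpm body sound, < 50 Hz
speech = 0.3 * np.sin(2 * np.pi * 300 * t)    # audible-band content

sos = butter(2, 50, btype="highpass", fs=fs, output="sos")
mic_out = sosfiltfilt(sos, heartbeat + speech)  # heartbeat is filtered away
```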


Measuring tiny physiological signals

APG bypasses the aforementioned ANC headphone hardware constraints by sending a low-intensity ultrasound probing signal through an ANC headphone's speakers. This signal triggers echoes, which are received via on-board feedback microphones. We observe that the tiny ear canal skin displacement and heartbeat vibrations modulate these ultrasound echoes.

We build a cylindrical resonance model to understand APG's underlying physics. This phenomenon happens at an extremely small scale, which makes the pulse signal invisible in the raw received ultrasound. We adopt coherent detection to retrieve this micro physiological modulation under the noise floor (we term this retrieved signal the mixed-down signal; see the paper for more details). The final APG waveform looks strikingly similar to a PPG waveform, but provides an improved view of cardiac activities with more pronounced dicrotic notches (i.e., pressure waveforms that provide rich insights about the central artery system, such as blood pressure).

A cylindrical model with cardiac activities h(t) that modulate both the phase and amplitude of the mixed-down signal. Based on the simulation from our analytical model, the amplitude R(t) and phase Φ(t) of the mixed-down APG signals both reflect the cardiac activities h(t).
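As a rough illustration of the coherent detection step, here is a minimal Python sketch. The 30 kHz carrier, 96 kHz sample rate, and the size of the simulated cardiac modulation h(t) are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of coherent (I/Q) detection for an APG-style echo.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, fc = 96_000, 30_000                       # sample rate, carrier (Hz)
t = np.arange(0, 10, 1 / fs)
h = 1e-3 * np.sin(2 * np.pi * 1.2 * t)        # tiny cardiac modulation h(t)

# Received echo: carrier weakly modulated in amplitude and phase by h(t)
rx = (1 + h) * np.cos(2 * np.pi * fc * t + 0.5 * h)

# Mix down with in-phase and quadrature references, then low-pass filter
sos = butter(4, 20, fs=fs, output="sos")      # keep only < 20 Hz physiology
i_sig = sosfiltfilt(sos, rx * np.cos(2 * np.pi * fc * t))
q_sig = sosfiltfilt(sos, rx * -np.sin(2 * np.pi * fc * t))

R = 2 * np.hypot(i_sig, q_sig)                # amplitude R(t) tracks h(t)
Phi = np.unwrap(np.arctan2(q_sig, i_sig))     # phase Phi(t) also tracks h(t)
```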


APG sensing in practice

During our initial experiments, we observed that APG works robustly with bad earbud seals and with music playing. However, we noticed the APG signal can sometimes be very noisy and can be heavily disturbed by body motion. At that point, we determined that in order to make APG useful, we had to make it robust enough to compete with more than 80 years of PPG development.

While PPGs are widely used and highly advanced, they do have some limitations. For example, PPG sensors typically use two to four diodes to send and receive light frequencies for sensing. However, due to the ultra-high-frequency nature of light (hundreds of terahertz), it's difficult for a single diode to send multiple colors with different frequencies. On the other hand, we can easily design a low-cost and low-power system that generates and receives more than ten audio tones (frequencies). We leverage channel diversity, a physical phenomenon in which wireless signals (e.g., light and audio) at different frequencies have different characteristics (e.g., different attenuation and reflection coefficients) as they propagate through a medium, to enable a higher-quality APG signal and motion resilience.

Next, we experimentally demonstrate the effectiveness of using multiple frequencies in the APG signaling. We transmit three probing signals concurrently, with their frequencies spanning evenly from 30 kHz to 32 kHz. A participant was asked to shake their head four times during the experiment to introduce interference. The figure below shows that different frequencies can be transmitted simultaneously to gather various information with coherent detection, a unique advantage of APG.

The 30 kHz phase shows the four head movements and the magnitude (amplitude) of 31 kHz shows the pulse wave signal. This observation shows that some ultrasound frequencies might be sensitive to cardiac activities while others might be sensitive to motion. Therefore, we can use the multi-tone APG as a calibration signal to find the frequency that best measures heart rate, and use only that frequency to get a high-quality pulse waveform.

The mixed-down amplitude (upper row) and phase (bottom row) for a customized multi-tone APG signal that spans from 30 kHz to 32 kHz. With channel diversity, the cardiac activities are captured in some frequencies (e.g., the magnitude of 31 kHz) and head movements are captured in other frequencies (e.g., the magnitudes of 30 kHz and 32 kHz, and the phase of 31 kHz).
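One plausible way to implement this calibration step is sketched below; it scores each tone by how strongly its mixed-down amplitude beats in an assumed cardiac band (0.5–3 Hz) and keeps the winner. This is our own hedged illustration, not the paper's algorithm:

```python
# Hedged sketch: demodulate each probing tone, then keep the tone whose
# amplitude has the highest SNR in an assumed cardiac band (0.5-3 Hz).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def mixed_down_amplitude(rx, fc, fs, cutoff_hz=20.0):
    """Coherent detection at carrier fc; returns the amplitude R(t)."""
    t = np.arange(len(rx)) / fs
    sos = butter(4, cutoff_hz, fs=fs, output="sos")
    i_sig = sosfiltfilt(sos, rx * np.cos(2 * np.pi * fc * t))
    q_sig = sosfiltfilt(sos, rx * -np.sin(2 * np.pi * fc * t))
    return 2 * np.hypot(i_sig, q_sig)

def cardiac_band_snr(x, fs, band=(0.5, 3.0)):
    """Spectral power inside the cardiac band relative to outside it."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    power = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / max(power[~in_band].sum(), 1e-12)

def best_tone(rx, tones_hz, fs):
    """Calibration: pick the probing tone that best captures the pulse."""
    amps = {fc: mixed_down_amplitude(rx, fc, fs) for fc in tones_hz}
    return max(amps, key=lambda fc: cardiac_band_snr(amps[fc], fs))

# e.g., three tones spanning 30-32 kHz, as in the experiment above:
# fc_best = best_tone(rx, tones_hz=[30_000, 31_000, 32_000], fs=96_000)
```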

After choosing the best frequency to measure heart rate, the APG pulse waveform becomes more visible with pronounced dicrotic notches, and enables accurate heart rate variability measurement.

The final APG signal used in the measurement phase (left) and chest ECG signal (right).
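With a clean pulse waveform in hand, inter-beat intervals (and hence heart rate variability) can be read off its peaks. The sketch below is one straightforward approach; the 0.4 s refractory gap and the prominence threshold are assumed values for illustration:

```python
# Sketch: inter-beat intervals from an APG pulse waveform via peak picking.
import numpy as np
from scipy.signal import find_peaks

def inter_beat_intervals(apg, fs):
    """Detect systolic peaks and return inter-beat intervals in seconds."""
    min_gap = int(0.4 * fs)  # assume no two beats land within 0.4 s
    peaks, _ = find_peaks(apg, distance=min_gap, prominence=0.3 * apg.std())
    return np.diff(peaks) / fs

# RMSSD, a common HRV summary statistic, from the interval series:
# ibi = inter_beat_intervals(apg, fs)
# rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))
```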

Multi-tone translates to multiple simultaneous observations, which enable the development of array signal processing techniques. We demonstrate the spectrogram of a running-session APG experiment before and after applying blind source separation (see the paper for more details). We also show the ground truth heart rate measurement in the same running experiment using a Polar ECG chest strap. In the raw APG, we see the running cadence (around 3.3 Hz) as well as two dim lines (around 2 Hz and 4 Hz) that indicate the user's heart rate frequency and its harmonics. The heart rate frequencies are significantly enhanced in signal-to-noise ratio (SNR) after the blind source separation, and they align with the ground truth heart rate frequencies. We also show the heart rate and running cadence calculated from APG and ECG. We can see that APG accurately tracks the increase in heart rate during the running session.

APG tracks the heart rate accurately during the running session and also measures the running cadence.
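As a hedged illustration of how multiple simultaneous observations enable source separation, the sketch below uses FastICA as a stand-in for the paper's blind source separation (the actual pipeline may differ), treating the per-tone mixed-down streams as a multichannel input:

```python
# Illustrative sketch: separate cardiac activity from motion (e.g., running
# cadence) given several simultaneously demodulated APG streams.
import numpy as np
from sklearn.decomposition import FastICA

def separate_sources(channels, n_sources=3):
    """channels: (n_samples, n_channels) matrix of mixed-down APG streams."""
    ica = FastICA(n_components=n_sources, random_state=0)
    return ica.fit_transform(channels)        # (n_samples, n_sources)

def dominant_frequency(x, fs):
    """Spectral peak, e.g., ~2 Hz for heart rate vs. ~3.3 Hz for cadence."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spectrum = np.abs(np.fft.rfft(x - np.mean(x)))
    return freqs[spectrum.argmax()]

# Keep the separated source whose dominant frequency falls in the cardiac
# band, then report heart rate in beats per minute:
# hr_bpm = 60 * dominant_frequency(cardiac_source, fs)
```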


Field study and closing thoughts

We conducted two rounds of user experience (UX) studies with 153 participants. Our results demonstrate that APG achieves consistently accurate heart rate (3.21% median error across participants in all activity scenarios) and heart rate variability (2.70% median error in inter-beat interval) measurements. Unlike PPG, which exhibits variable performance across skin tones, our study shows that APG is resilient to variation in skin tone, sub-optimal seal conditions, and ear canal size. More detailed evaluations can be found in the paper.

APG transforms any TWS ANC headphones into smart sensing headphones with a simple software upgrade, and works robustly across various user activities. The sensing carrier signal is completely inaudible and not impacted by music playing. More importantly, APG represents new knowledge in biomedical and mobile research and unlocks new possibilities for low-cost health sensing.


Acknowledgements

APG is the result of collaboration across Google Health, product, UX and legal teams. We would like to thank David Pearl, Jesper Ramsgaard, Cody Wortham, Octavio Ponce, Patrick Amihood, Sam Sheng, Michael Pate, Leonardo Kusumo, Simon Tong, Tim Gladwin, Russ Mirov, Kason Walker, Govind Kannan, Jayvon Timmons, Dennis Rauschmayer, Chiong Lai, Shwetak Patel, Jake Garrison, Anran Wang, Shiva Rajagopal, Shelten Yuen, Seobin Jung, Yun Liu, John Hernandez, Issac Galatzer-Levy, Isaiah Fischer-Brown, Jamie Rogers, Pramod Rudrapatna, Andrew Barakat, Jason Guss, Ethan Grabau, Pol Peiffer, Bill Park, Helen O'Connor, Mia Cheng, Keiichiro Yumiba, Felix Bors, Priyanka Jantre, Luzhou Xu, Jian Wang, Jaime Lien, Gerry Pallipuram, Nicholas Gillian, Michal Matuszak, Jakub Wojciechowski, Bryan Allen, Jane Hilario, and Phil Carmack for their invaluable insights and support. Thanks to external collaborators Longfei Shangguan and Rich Howard, Rutgers University and University of Pittsburgh.

Source: Google AI Blog


Google Workspace Updates Weekly Recap – October 27, 2023

1 New update

Unless otherwise indicated, the features below are available to all Google Workspace customers, and are fully launched or in the process of rolling out. Rollouts should take no more than 15 business days to complete if launching to both Rapid and Scheduled Release at the same time. If not, each stage of rollout should take no more than 15 business days to complete.


Adding a two-page view for PDFs in the Google Drive app on large screen Android devices 
You can now flip between a single-page view and a newly added two-page view when scrolling through a PDF using the Google Drive app on your Android tablet or foldable device. This side-by-side view of the PDF resembles a book, providing a better viewing experience on large-screen Android devices. | Available now to all Google Workspace customers and users with personal Google Accounts.


Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.



Changes to YouTube player embedded within Google Workspace for Education services 
We are removing a previous feature while we evaluate performance and quality. As an immediate next step, if you updated your allowlist or blocklist in response to the earlier change, you’ll need to revert it to include "www.youtube.com" as soon as possible to preserve the way your organization uses YouTube videos within Google Workspace for Education services. | This impacts Education Fundamentals, Education Standard, Education Plus, and the Teaching and Learning Upgrade | Learn more about changes to YouTube player embedded within Google Workspace for Education services


Introducing an updated and more inclusive emoji picker in Gmail 
You can now access a more complete emoji selection and set emoji skin tone and gender preferences with the modernized emoji picker in the web version of Gmail. | Learn more about emojis in Gmail


Completed rollouts

Please refer to the original blog posts for details on the features that completed their rollouts to Rapid Release domains, Scheduled Release domains, or both.



Chrome Dev for Desktop Update

The Dev channel has been updated to 120.0.6090.0 for Windows, Mac and Linux.

A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Srinivas Sista
Google Chrome

Mesa pursues digital equity and literacy

The City of Mesa’s strong commitment to digital equity played a key role in Google Fiber coming to town. The city is working towards connecting its residents to quality internet and the digital skills they need to navigate the world we live in, and its efforts are garnering national attention. Mesa was recently named a BroadLAND City USA by INCOMPAS. The award recognizes cities dedicated to speeding up deployment of new broadband networks in their local communities. The city aims to create opportunities and promote competition, which bridges the digital divide for families and businesses in the community.


This commitment is exactly why GFiber recently invested in furthering these goals with a $25,000 donation to the Mesa College Promise program. The money will go to students working to obtain an associate’s degree in the STEM field of their choice from Mesa Community College (MCC). Graduates from eligible schools can go to MCC for two years with Arizona resident tuition and fees fully funded. The program supports students who have not received enough financial aid or other scholarships to completely cover the cost of college.



Nearly 600 Mesa high school students have benefited from the Mesa College Promise program since its inception in 2021. College Promise students also receive a stipend each semester for up to two years to use for books, transportation or other educational expenses and have a dedicated MCC advisor to provide academic and personal support. Those wanting to learn more about the Mesa College Promise can visit mesacc.edu/mesa-promise. 

GFiber is intentional in making investments that positively impact the communities we serve, supporting STEM, digital equity, and digital literacy programs and initiatives. Find out more information about the local alliances and partnerships we have and the projects we support here.

Posted by Will Novak, Government and Community Affairs Manager 




Meta built Threads in only 5 months using Jetpack Compose

Posted by Yasmine Evjen – Product Manager, and Florina Muntenescu – Developer Relations Engineer

Following its release in July of 2023, Meta’s Threads became the most rapidly downloaded app ever with over 100 million downloads in its first week. Meta created the new text-based social media platform as a place to build connections and have meaningful conversations. To ensure the app was set up for success at its release and into the future, Threads developers used Jetpack Compose, Android’s modern declarative toolkit for building UI.

An easier way to build UI with Jetpack Compose

Threads is built on top of existing code from its sister app Instagram, which uses Views for its UI development. After positive reports from other Android developers about Compose, and following internal testing and an assessment of the toolkit’s benefits, Threads engineers opted to build the all-new app from scratch using Compose. By using Compose, the team could move faster and better prepare the app for any future updates.

“We decided Jetpack Compose would be our target UI framework going forward,” said Richard Zadorozny, a software engineer at Threads. “We wanted to build the new app UI from scratch using Compose because it would enable us to move faster than refactoring a large application like Instagram.”

Even though most of Threads’ engineers had no prior experience using Compose, they found it easy to get started and learn the new toolkit. With Compose, Threads engineers built and shipped the app in only five months. This greatly exceeded the team’s speed expectations for developing a high-quality Android application — especially of this complexity and scale. The team attributes much of this speed to the flexibility and decoupling Compose provided.

Compose helped Threads engineers streamline the development of new product features. The modular nature of the toolkit let Threads developers iterate on the app as it evolved and teed up the app’s architecture for future development. Compose also helped engineers build user-friendly features that adhered to Material Design guidelines.

“Threads was built and shipped in five months, exceeding our speed expectations for building a high-quality Android app of this complexity and scale.” — Richard Zadorozny, software engineer at Threads

Going all in with Compose

Threads engineers developed almost all of the app’s surfaces using Compose. In the end, they built over 90% of Threads using Compose, including the app’s activity feed, navigation, search, profiles, onboarding page, shared element transitions, media viewer, settings, and more.

While Compose handled almost everything Threads engineers needed, it was still easy for them to interoperate with Views as necessary. They used Views for Threads’ videos and the media picker that’s available when creating a new post.

Compose provides modern APIs that ship directly with an app. Because of this, Threads engineers spent less time worrying about backward compatibility, missing features, or differing functionality between different versions of Android. Instead, they could focus their energy on developing a high-quality application.

“Compose’s design encourages a modular, plug-in approach to development,” said Richard. “Modifiers make all sorts of functionality inherently reusable, so you no longer have to subclass complicated ViewGroups or lump all sorts of logic into one place.”

A moving image shows Jetpack Compose Modifier code on screen.

The Threads team used Modifiers for the app’s custom click behaviors and its thread line illustration that appears on the left side of posts. Modifiers also allowed Threads developers to easily add the app’s branding to any elements and ensured they were properly aligned on-screen.

Threads engineers also ensured the app was ready for users across platforms at launch. That meant making sure Threads resizes to work on different devices, like large screens and foldables. The adaptive layouts Compose offers ensure an app responds properly to different screen sizes, orientations, and form factors. This made it easier for the Threads app to “just work” for configuration changes, according to Richard.

“For developers who are building new apps, I would definitely recommend using Compose. Not only is it enjoyable… it sets your team up for success in the future.” — Richard Zadorozny, software engineer at Threads

Compose is the ‘future’ of Android UI

Compose offered Threads developers an easier way to design and create UI while preparing the app’s architecture for the future. With its intuitive composables and modern declarative framework, Compose made end-to-end development smooth and gave Threads developers confidence that updating the app would be easy.

Given the positive results the team saw with the release of Threads, Meta plans to expand its use of Compose to some of Instagram’s most important surfaces, like the app’s main feed.

“It’s reached a point where Jetpack Compose can do almost everything you’ll need, and its modular nature makes it easy to make most of the changes you would need to fill the gaps,” said Richard. “I believe Compose is the future of Android UI development, and it’s just fun!”

Get started

Optimize your UI development with Jetpack Compose.

Chrome Dev for Android Update

Hi everyone! We've just released Chrome Dev 120 (120.0.6087.2) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Harry Souders
Google Chrome

Supporting benchmarks for AI safety with MLCommons

Standard benchmarks are agreed-upon ways of measuring important product qualities, and they exist in many fields. Some standard benchmarks measure safety: for example, when a car manufacturer touts a “five-star overall safety rating,” they’re citing a benchmark. Standard benchmarks already exist in machine learning (ML) and AI technologies: for instance, the MLCommons Association operates the MLPerf benchmarks that measure the speed of cutting-edge AI hardware such as Google’s TPUs. However, although there has been significant work on AI safety, there are as yet no similar standard benchmarks for it.

We are excited to support a new effort by the non-profit MLCommons Association to develop standard AI safety benchmarks. Developing benchmarks that are effective and trusted will require advancing AI safety testing technology and incorporating a broad range of perspectives. The MLCommons effort aims to bring together expert researchers across academia and industry to develop standard benchmarks that measure the safety of AI systems and distill the results into scores that everyone can understand. We encourage the whole community, from AI researchers to policy experts, to join us in contributing to the effort.


Why AI safety benchmarks?

Like most advanced technologies, AI has the potential for tremendous benefits but could also lead to negative outcomes without appropriate care. For example, AI technology can boost human productivity in a wide range of activities (e.g., improve health diagnostics and research into diseases, analyze energy usage, and more). However, without sufficient precautions, AI could also be used to support harmful or malicious activities and respond in biased or offensive ways.

By providing standard measures of safety across categories such as harmful use, out-of-scope responses, AI-control risks, etc., standard AI safety benchmarks could help society reap the benefits of AI while ensuring that sufficient precautions are being taken to mitigate these risks. Initially, nascent safety benchmarks could help drive AI safety research and inform responsible AI development. With time and maturity, they could help inform users and purchasers of AI systems. Eventually, they could be a valuable tool for policy makers.

In computer hardware, benchmarks (e.g., SPEC, TPC) have shown an amazing ability to align research, engineering, and even marketing across an entire industry in pursuit of progress, and we believe standard AI safety benchmarks could help do the same in this vital area.


What are standard AI safety benchmarks?

Academic and corporate research efforts have experimented with a range of AI safety tests (e.g., RealToxicityPrompts, Stanford HELM fairness, bias, toxicity measurements, and Google’s guardrails for generative AI). However, most of these tests focus on providing a prompt to an AI system and algorithmically scoring the output, which is a useful start but limited to the scope of the test prompts. Further, they usually use open datasets for the prompts and responses, which may already have been (often inadvertently) incorporated into training data.
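To make the prompt-and-score pattern concrete, here is a deliberately generic Python sketch. The `model` and `safety_classifier` callables are hypothetical stand-ins for any text generator and automatic scorer; this illustrates the test pattern described above, not the MLCommons benchmark design:

```python
# Generic prompt-and-score safety test: query a model on a prompt set and
# aggregate classifier verdicts into one human-readable score.
from typing import Callable, Iterable

def run_safety_test(
    model: Callable[[str], str],                # hypothetical text generator
    safety_classifier: Callable[[str], float],  # 0.0 = safe, 1.0 = unsafe
    prompts: Iterable[str],
    threshold: float = 0.5,
) -> float:
    """Return the fraction of responses judged safe over the prompt set."""
    scores = [safety_classifier(model(p)) for p in prompts]
    return sum(1 for s in scores if s < threshold) / len(scores)
```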

MLCommons proposes a multi-stakeholder process for selecting tests, grouping them into subsets that measure safety for particular AI use cases, and translating the highly technical results of those tests into scores that everyone can understand. MLCommons is proposing to create a platform that brings these existing tests together in one place and encourages the creation of more rigorous tests that move the state of the art forward. Users will be able to access these tests both through online testing, where they can generate and review scores, and through offline testing with an engine for private testing.


AI safety benchmarks should be a collective effort

Responsible AI developers use a diverse range of safety measures, including automatic testing, manual testing, red teaming (in which human testers attempt to produce adversarial outcomes), software-imposed restrictions, data and model best-practices, and auditing. However, determining that sufficient precautions have been taken can be challenging, especially as the community of companies providing AI systems grows and diversifies. Standard AI benchmarks could provide a powerful tool for helping the community grow responsibly, both by helping vendors and users measure AI safety and by encouraging an ecosystem of resources and specialist providers focused on improving AI safety.

At the same time, development of mature AI safety benchmarks that are both effective and trusted is not possible without the involvement of the community. This effort will need researchers and engineers to come together and provide innovative yet practical improvements to safety testing technology that make testing both more rigorous and more efficient. Similarly, companies will need to come together and provide test data, engineering support, and financial support. Some aspects of AI safety can be subjective, and building trusted benchmarks supported by a broad consensus will require incorporating multiple perspectives, including those of public advocates, policy makers, academics, engineers, data workers, business leaders, and entrepreneurs.


Google’s support for MLCommons

Grounded in our AI Principles that were announced in 2018, Google is committed to specific practices for the safe, secure, and trustworthy development and use of AI (see our 2019, 2020, 2021, 2022 updates). We’ve also made significant progress on key commitments, which will help ensure AI is developed boldly and responsibly, for the benefit of everyone.

Google is supporting the MLCommons Association's efforts to develop AI safety benchmarks in a number of ways.

  1. Testing platform: We are joining with other companies in providing funding to support the development of a testing platform.
  2. Technical expertise and resources: We are providing technical expertise and resources, such as the Monk Skin Tone Examples Dataset, to help ensure that the benchmarks are well-designed and effective.
  3. Datasets: We are contributing an internal dataset for multilingual representational bias, as well as already externalized tests for stereotyping harms, such as SeeGULL and SPICE. Moreover, we are sharing our datasets that focus on collecting human annotations responsibly and inclusively, like DICES and SRP.

Future direction

We believe that these benchmarks will be very useful for advancing research in AI safety and ensuring that AI systems are developed and deployed in a responsible manner. AI safety is a collective-action problem. Groups like the Frontier Model Forum and Partnership on AI are also leading important standardization initiatives. We’re pleased to have been part of these groups and MLCommons since their beginning. We look forward to additional collective efforts to promote the responsible development of new generative AI tools.


Acknowledgements

Many thanks to the Google team that contributed to this work: Peter Mattson, Lora Aroyo, Chris Welty, Kathy Meier-Hellstern, Parker Barnes, Tulsee Doshi, Manvinder Singh, Brian Goldman, Nitesh Goyal, Alice Friend, Nicole Delange, Kerry Barker, Madeleine Elish, Shruti Sheth, Dawn Bloxwich, William Isaac, Christina Butterfield.

Source: Google AI Blog