Dev Channel Update for Desktop

The Dev channel has been updated to 95.0.4636.4 for Windows, Linux and Mac.

A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Prudhvikumar Bommana

Google Chrome

Material You, a new look and feel for Google Workspace apps, is rolling out now for Android

What’s changing

Beginning today, we’re rolling out Material You: a new design system for Google Workspace apps on Android devices. Material You features an updated, fresh look and feel for your apps, along with additional options for personalization. 

Some changes you’ll notice are:

  • updated navigation bars,
  • improved floating action buttons, and
  • use of Google Sans text for better readability at smaller font sizes



Who’s impacted

End users



Additional details

On Pixel devices with Android 12 or newer, you’ll have the option to match the colors of your apps to your device wallpaper for a more dynamic, personalized look.


To expand upon our existing accessibility support, Material You will automatically adjust contrast, size, and line width based on user preferences and app context. Pre-existing color schemes, such as color-coded file types, folder colors, and in-app warning colors, will remain unchanged.



Availability across Google Workspace apps:

  • Gmail: These changes are available on Gmail version 2021.08.24 and newer.
  • Google Meet: These changes will be available on Meet version 2021.09.19 and newer starting September 19.
  • Google Drive: These changes are available on Drive version 2.21.330 and newer starting September 9.
  • Google Docs, Sheets, Slides: These changes are available on Docs, Sheets, and Slides version 1.21.342 and newer starting September 1.
  • Google Calendar: These changes are available on Google Calendar version 2021.37 and newer starting September 20.


Getting started

  • Admins: There is no admin control for this feature.
  • End users: On Pixel devices with Android 12 or newer, you can view and select themes based on your wallpaper colors by going to Settings > Wallpaper & style.

Rollout pace

  • Extended rollout (potentially longer than 15 days for feature visibility). 


Availability

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers
  • Available to users with personal Google accounts


New effects settings panel in Google Meet

 

Quick launch summary 
We’re introducing a new settings panel in Google Meet for quick access to effects such as background blur, background images and styles during Meet calls. This panel will also be available before joining a call in the green room self-check. In the green room, you can try out various effects to see how they work before joining a call with others. 


Open the panel from the three-dot overflow menu by selecting "Apply visual effects."



Selecting effects in a Meet call



Selecting effects in the green room before a Meet call

Getting started 
Rollout pace
Availability 
  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers
Resources 

Search Terms Report Improvements

With the updates listed below, we're improving the search term reports returned from both the Google Ads API and the AdWords API across all active versions.

Starting September 9, 2021, you'll be able to see more queries that meet our privacy standards in the search terms report for Search and Dynamic Search Ads campaigns. This new data will be returned for all searches on or after February 1st, 2021 when using the following reports and resources:

This update can help you identify more relevant keyword themes, making it easier to optimize your ads, landing pages, and more. Metric totals from search terms reports will now be consistent with other reports, such as campaign, ad group, and ad reports in Google Ads.
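As a rough sketch of how you might pull this data (not an official sample), the following uses the Google Ads API's official Python client against the search_term_view resource, which backs the search terms report. The customer ID, date range, and selected fields are placeholders to adapt to your own account.

```python
# Sketch: reading search terms with the Google Ads API Python client.
# Customer ID, date range, and selected fields are placeholders.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()  # reads credentials from google-ads.yaml
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      campaign.name,
      ad_group.name,
      search_term_view.search_term,
      metrics.impressions,
      metrics.clicks
    FROM search_term_view
    WHERE segments.date DURING LAST_30_DAYS
    ORDER BY metrics.impressions DESC
"""

for batch in ga_service.search_stream(customer_id="INSERT_CUSTOMER_ID", query=query):
    for row in batch.results:
        print(
            row.campaign.name,
            row.ad_group.name,
            row.search_term_view.search_term,
            row.metrics.impressions,
            row.metrics.clicks,
        )
```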

As part of our ongoing commitment to privacy, we’re working to make our privacy thresholds consistent across Google. Over the next few months, you’ll see more changes across our other tools–including how we handle historical data. In Google Ads, this means that historical query data in your account that was collected prior to September 1st, 2020 will be available until February 1st, 2022. At that point, any historical queries that no longer meet our current privacy thresholds will be removed from your search terms report.

If you have any questions about this change or any other API feature, please contact us via the forum.


View Google Classroom activity with new audit logs, view adoption and other metrics with BigQuery activity logs

What’s changing

We’re making two enhancements for Google Classroom, which will help Google Workspace for Education admins surface information about how Classroom is being used in their organization. Specifically, we’re introducing:

  • Classroom audit logs in the Admin console
  • Classroom activity logs in BigQuery, along with Data Studio dashboards

See below for more information and availability.






Who’s impacted

Admins



Why it’s important

Classroom audit logs let admins quickly pinpoint who did what in their domain, such as who removed a student from a class, who archived a class on a certain date, and more.

For Education Standard and Plus customers, admins can export the Classroom audit log data from the Admin console to BigQuery, which allows them to query the data as needed. As a starting point, we’ve provided a Data Studio report template, which surfaces your data in an easily digestible format. Admins can copy this template and further customize it using Data Studio.
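To give a sense of what querying the exported data could look like, here is a minimal sketch using the BigQuery Python client. The project, dataset, and table names ("workspace_logs.activity"), the 'classroom' event_type filter, and the column names are assumptions modeled on a typical Workspace activity-log export; check the schema your own export creates before adapting it.

```python
# Sketch: querying Classroom events from a Workspace activity-log export in BigQuery.
# Dataset/table names, filter values, and columns are placeholders, not a documented schema.
from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project")  # placeholder project ID

query = """
    SELECT
      TIMESTAMP_MICROS(time_usec) AS event_time,
      email,
      event_name
    FROM `your-gcp-project.workspace_logs.activity`
    WHERE event_type = 'classroom'  -- assumed application label
      AND DATE(TIMESTAMP_MICROS(time_usec)) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    ORDER BY event_time DESC
"""

for row in client.query(query).result():
    print(row.event_time, row.email, row.event_name)
```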

We hope this makes it easier for admins to quickly look up common activities in their organization and act promptly on scenarios where support may be needed.



Getting started


Rollout pace


Availability

Classroom audit logs
  • Available to Google Workspace Education Fundamentals, Education Plus, Teaching and Learning Upgrade, Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Frontline, and Nonprofits, as well as G Suite Basic and Business customers

BigQuery Logs + Data Studio Templates
  • Available to Google Workspace Education Standard and Education Plus customers
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Frontline, Education Fundamentals, the Teaching and Learning Upgrade, and Nonprofits, as well as G Suite Basic and Business customers


Collaborating with the UN to accelerate crisis response

In remarks to the UN's High-Level Humanitarian Event on Anticipatory Action, Google SVP for Global Affairs, Kent Walker, discusses collaboration to accelerate crisis preparedness and predict crises before they happen. Read the full remarks below.


Mr. Secretary General, your excellencies, ladies and gentlemen - it’s an honor to join you as we come together to discuss these critical humanitarian issues.

As you know, technology is already raising living standards around the world—leveraging science to double life spans over the last 100 years, helping a billion people emerge from poverty in the last 30 years alone. And innovation will help drive environmental sustainability, raise living standards, improve healthcare, and enhance crisis response.

But addressing global needs in a meaningful way requires strong collaborations between technologists, governments, humanitarian organizations, and those most directly affected.

That’s why we are pleased to announce a $1.5 million commitment to OCHA’s Center for Humanitarian Data. Over the next two years, Google.org will support the Center in scaling up the use of forecasts and predictive models to anticipate humanitarian crises and trigger the release of funds before conditions escalate.

From the earliest days of efforts like Hans Rosling’s GapMinder, it’s been a dream that rather than waiting for a crisis to occur, data and technology could help predict events like droughts or food shortages weeks ahead of time, allowing agencies to provide alerts and deliver supplies to avert the crisis. That technology exists now, today—and we need to put it to work.

With the signs of climate change all around us, it’s essential that we improve our collective preparedness, and protect our most vulnerable populations.

Google is honored to support the critical work led by OCHA and the Center for Humanitarian Data, and we’re committed to combining funding, innovation, and technical expertise to support underserved communities and expand opportunity for everyone.

We hope others will join us in the important work of getting ahead of crises before they happen.

Thank you.

Introducing Android’s Private Compute Services

We introduced Android’s Private Compute Core in Android 12 Beta. Today, we're excited to announce a new suite of services that provide a privacy-preserving bridge between Private Compute Core and the cloud.

Recap: What is Private Compute Core?

Android’s Private Compute Core is an open source, secure environment that is isolated from the rest of the operating system and apps. With each new Android release we’ll add more privacy-preserving features to the Private Compute Core. Today, these include:

  • Live Caption, which adds captions to any media using Google’s on-device speech recognition
  • Now Playing, which recognizes music playing nearby and displays the song title and artist name on your device’s lock screen
  • Smart Reply, which suggests relevant responses based on the conversation you’re having in messaging apps

For these features to be private, they must:

  1. Keep the information on your device private. Android ensures that the sensitive data processed in the Private Compute Core is not shared to any apps without you taking an action. For instance, until you tap a Smart Reply, the OS keeps your reply hidden from both your keyboard and the app you’re typing into.
  2. Let your device use the cloud (to download new song catalogs or speech-recognition models) without compromising your privacy. This is where Private Compute Services comes in.

Introducing Android’s Private Compute Services

Machine learning features often improve by updating models, and Private Compute Services helps features get these updates over a private path. Android prevents any feature inside the Private Compute Core from having direct access to the network. Instead, features communicate over a small set of purposeful open-source APIs to Private Compute Services, which strips out identifying information and uses a set of privacy technologies, including Federated Learning, Federated Analytics, and Private Information Retrieval.

We will publicly publish the source code for Private Compute Services, so it can be audited by security researchers and other teams outside of Google. This means it can go through the same rigorous security programs that ensure the safety of the Android platform.

We’re enthusiastic about the potential for machine learning to power more helpful features inside Android, and Android’s Private Compute Core will help users benefit from these features while strengthening privacy protections via the new Private Compute Services. Android is the first open source mobile OS to include this kind of externally verifiable privacy; Private Compute Services helps the Android OS continue to innovate in machine learning, while also maintaining the highest standards of privacy and security.

Bringing richer navigation, charging, parking apps to more Android Auto users

Posted by Madan Ankapura, Product Manager


Today, we are releasing the beta of Android for Cars App Library version 1.1. Android Auto apps that use features requiring Car App API level 2 or higher, such as map interactivity, vehicle hardware data, multiple-length text, and the long message and sign-in templates, can now run in cars with Android Auto 6.7+ (previously, these features were limited to the Desktop Head Unit).

Example apps built with the library: 2GIS (left) and TomTom (right).

With this announcement, we are also completing the transition to Jetpack and will no longer be accepting submissions built with the closed source library (com.google.android.libraries.car.app). If you haven’t already, we encourage you to migrate to the AndroidX library now.

For the entire list of changes in beta01, please see the release notes. To start building your app for the car, check out our updated developer documentation, car quality guidelines and design guidelines.

If you’re interested in joining our Early Access Program to get access to new features early in the future, please fill out this interest form. You can get started with the Android for Cars App Library today, by visiting g.co/androidforcars.

Personalized ASR Models from a Large and Diverse Disordered Speech Dataset

Speech impairments affect millions of people, with underlying causes ranging from neurological or genetic conditions to physical impairment, brain damage or hearing loss. Similarly, the resulting speech patterns are diverse, including stuttering, dysarthria, apraxia, etc., and can have a detrimental impact on self-expression, participation in society and access to voice-enabled technologies. Automatic speech recognition (ASR) technologies have the potential to help individuals with such speech impairments by improving access to dictation and home automation and by enhancing communication. However, while the increased computational power of deep learning systems and the availability of large training datasets has improved the accuracy of ASR systems, their performance is still insufficient for many people with speech disorders, rendering the technology unusable for many of the speakers who could benefit the most.

In 2019, we introduced Project Euphonia and discussed how we could use personalized ASR models of disordered speech to achieve accuracies on par with non-personalized ASR on typical speech. Today we share the results of two studies, presented at Interspeech 2021, that aim to expand the availability of personalized ASR models to more users. In “Disordered Speech Data Collection: Lessons Learned at 1 Million Utterances from Project Euphonia”, we present a greatly expanded collection of disordered speech data, composed of over 1 million utterances. Then, in “Automatic Speech Recognition of Disordered Speech: Personalized models outperforming human listeners on short phrases”, we discuss our efforts to generate personalized ASR models based on this corpus. This approach leads to highly accurate models that can achieve up to 85% improvement to the word error rate in select domains compared to out-of-the-box speech models trained on typical speech.

Impaired Speech Data Collection
Since 2019, speakers with speech impairments of varying degrees of severity across a variety of conditions have provided voice samples to support Project Euphonia’s research mission. This effort has grown Euphonia’s corpus to over 1 million utterances, comprising over 1400 hours from 1330 speakers (as of August 2021).

Distribution of severity of speech disorder and condition across all speakers with more than 300 utterances recorded. For conditions, only those with > 5 speakers are shown (all others aggregated into “OTHER” for k-anonymity).
ALS = amyotrophic lateral sclerosis; DS = Down syndrome; PD = Parkinson’s disease; CP = cerebral palsy; HI = hearing impaired; MD = muscular dystrophy; MS = multiple sclerosis

To simplify the data collection, participants used an at-home recording system on their personal hardware (laptop or phone, with and without headphones), instead of an idealized lab-based setting that would collect studio quality recordings.

To reduce transcription cost, while still maintaining high transcript conformity, we prioritized scripted speech. Participants read prompts shown on a browser-based recording tool. Phrase prompts covered use-cases like home automation (“Turn on the TV.”), caregiver conversations (“I am hungry.”) and informal conversations (“How are you doing? Did you have a nice day?”). Most participants received a list of 1500 phrases, which included 1100 unique phrases along with 100 phrases that were each repeated four more times.

Speech professionals conducted a comprehensive auditory-perceptual speech assessment while listening to a subset of utterances for every speaker providing the following speaker-level metadata: speech disorder type (e.g., stuttering, dysarthria, apraxia), rating of 24 features of abnormal speech (e.g., hypernasality, articulatory imprecision, dysprosody), as well as recording quality assessments of both technical (e.g., signal dropouts, segmentation problems) and acoustic (e.g., environmental noise, secondary speaker crosstalk) features.

Personalized ASR Models
This expanded impaired speech dataset is the foundation of our new approach to personalized ASR models for disordered speech. Each personalized model uses a standard end-to-end, RNN-Transducer (RNN-T) ASR model that is fine-tuned using data from the target speaker only.

Architecture of RNN-Transducer. In our case, the encoder network consists of 8 layers and the predictor network consists of 2 layers of uni-directional LSTM cells.

To accomplish this, we focus on adapting the encoder network, i.e. the part of the model dealing with the specific acoustics of a given speaker, as speech sound disorders were most common in our corpus. We found that only updating the bottom five (out of eight) encoder layers while freezing the top three encoder layers (as well as the joint layer and decoder layers) led to the best results and effectively avoided overfitting. To make these models more robust against background noise and other acoustic effects, we employ a configuration of SpecAugment specifically tuned to the prevailing characteristics of disordered speech. Further, we found that the choice of the pre-trained base model was critical. A base model trained on a large and diverse corpus of typical speech (multiple domains and acoustic conditions) proved to work best for our scenario.
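To make the layer-freezing recipe concrete, here is a minimal sketch in PyTorch (chosen purely for illustration; it is not necessarily the framework behind the production models): a toy eight-layer LSTM encoder in which only the bottom five layers receive gradient updates, mirroring the configuration described above. The sizes, dummy data, and loss are placeholders.

```python
# Minimal sketch of per-speaker fine-tuning with the top encoder layers frozen.
# Toy dimensions; the real setup also keeps the joint and decoder layers fixed.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    def __init__(self, feat_dim=80, hidden=512, num_layers=8):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.LSTM(feat_dim if i == 0 else hidden, hidden, batch_first=True)
             for i in range(num_layers)]
        )

    def forward(self, x):
        for lstm in self.layers:
            x, _ = lstm(x)
        return x

encoder = ToyEncoder()

# Freeze the top 3 of 8 encoder layers; only the bottom 5 are fine-tuned.
for lstm in encoder.layers[5:]:
    for p in lstm.parameters():
        p.requires_grad = False

trainable = [p for p in encoder.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)

# One illustrative update on a dummy batch of 80-dim acoustic features.
features = torch.randn(4, 100, 80)   # (batch, time, feature)
targets = torch.randn(4, 100, 512)   # stand-in target; the real model trains an RNN-T loss
loss = nn.functional.mse_loss(encoder(features), targets)
loss.backward()
optimizer.step()
```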

Results
We trained personalized ASR models for ~430 speakers who recorded at least 300 utterances. 10% of utterances were held out as a test set (with no phrase overlap) on which we calculated the word error rate (WER) for the personalized model and the unadapted base model.
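For reference, WER is the word-level edit distance between hypothesis and reference (substitutions, insertions, and deletions) divided by the number of reference words; a small self-contained Python implementation:

```python
# Word error rate: word-level Levenshtein distance over the reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("turn on the tv", "turn of the tv"))  # 0.25: one error over four reference words
```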

Overall, our personalization approach yields significant improvements across all severity levels and conditions. Even for severely impaired speech, the median WER for short phrases from the home automation domain dropped from around 89% to 13%. Substantial accuracy improvements were also seen across other domains such as conversational and caregiver.

WER of unadapted and personalized ASR models on home automation phrases.

To understand when personalization does not work well, we analyzed several subgroups:

  • HighWER and LowWER: Speakers with high and low personalized model WERs based on the 1st and 5th quintiles of the WER distribution.
  • SurpHighWER: Speakers with a surprisingly high WER (participants with typical speech or mild speech impairment of the HighWER group).

Different pathologies and speech disorder presentations are expected to impact ASR non-uniformly. The distribution of speech disorder types within the HighWER group indicates that dysarthria due to cerebral palsy was particularly difficult to model. Not surprisingly, median severity was also higher in this group.

To identify the speaker-specific and technical factors that impact ASR accuracy, we examined the differences (Cohen's D) in the metadata between the participants that had poor (HighWER) and excellent (LowWER) ASR performance. As expected, overall speech severity was significantly lower in the LowWER group than in the HighWER group (p < 0.01). Intelligibility and severity were the most prominent atypical speech features in the HighWER group; however, other speech features also emerged, including abnormal prosody, articulation, and phonation. These speech features are known to degrade overall speech intelligibility.
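Cohen's d is the standardized difference between two group means, i.e. the difference in means divided by the pooled standard deviation; a minimal Python version with made-up illustrative values:

```python
# Cohen's d: difference in group means divided by the pooled standard deviation.
# The numbers below are made up; the study's exact pooling convention may differ.
from statistics import mean, variance

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# e.g. a hypothetical severity rating for HighWER vs. LowWER speakers
print(cohens_d([3.1, 2.8, 3.5, 3.0], [1.2, 1.5, 1.1, 1.4]))
```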

The SurpHighWER group had fewer training utterances and lower SNR compared with the LowWER group (p < 0.01) resulting in large (negative) effect sizes, with all other factors having small effect sizes, except fastness. In contrast, the HighWER group exhibited medium to large differences across all factors.

Speech disorder and technical metadata effect sizes for the HighWER-vs-LowWER and SurpHighWER-vs-LowWER pairs. Positive effect sizes indicate that values for the HighWER group were greater than for the LowWER group.

We then compared personalized ASR models to human listeners. Three speech professionals independently transcribed 30 utterances per speaker. We found that WERs were, on average, lower for personalized ASR models compared to the WERs of human listeners, with gains increasing by severity.

Delta between the WERs of the personalized ASR models and the human listeners. Negative values indicate that personalized ASR performs better than human (expert) listeners.

Conclusions
With over 1 million utterances, Euphonia’s corpus is one of the largest and most diverse disordered speech corpora (in terms of disorder types and severities) and has enabled significant advances in ASR accuracy for these types of atypical speech. Our results demonstrate the efficacy of personalized ASR models for recognizing a wide range of speech impairments and severities, with the potential to make ASR available to a wider population of users.

Acknowledgements
Key contributors to this project include Michael Brenner, Julie Cattiau, Richard Cave, Jordan Green, Rus Heywood, Pan-Pan Jiang, Anton Kast, Marilyn Ladewig, Bob MacDonald, Phil Nelson, Katie Seaver, Jimmy Tobin, and Katrin Tomanek. We gratefully acknowledge the support Project Euphonia received from members of many speech research teams across Google, including Françoise Beaufays, Fadi Biadsy, Dotan Emanuel, Khe Chai Sim, Pedro Moreno Mengibar, Arun Narayanan, Hasim Sak, Suzan Schwartz, Joel Shor, and many others. And most importantly, we wanted to say a huge thank you to the over 1300 participants who recorded speech samples and the many advocacy groups who helped us connect with these participants.

Source: Google AI Blog


Our commitment to water stewardship

I grew up in Muir Beach, California, and was fortunate to spend my childhood exploring its beautiful forests and streams. Today, these delicate ecosystems are threatened as the entire west coast of the U.S. is experiencing one of the worst droughts in recorded history. Unfortunately, this problem extends beyond the stretch of coastline I call home. Climate change is exacerbating water scarcity challenges around the world as places suffer from diminished rainfall — from Brazil's semi-arid region to Sub-Saharan Africa. At the same time, we’ve seen strong storms bring devastating floods to places like the eastern U.S., central China, and western Germany.

Last September, we announced our third and most ambitious decade of climate action and laid out our plan toward a carbon-free future. Building on this commitment, we are pledging to a water stewardship target: to replenish more water than we consume by 2030 and support water security in communities where we operate. This means Google will replenish 120% of the water we consume, on average, across our offices and data centers. We’re focusing on three areas: enhancing our stewardship of water resources across Google office campuses and data centers; replenishing our water use and improving watershed health and ecosystems in water-stressed communities; and sharing technology and tools that help everyone predict, prevent and recover from water stress.


Managing the water we use responsibly

We use water to cool the data centers that make products like Gmail, YouTube, Google Maps and Search possible. Over the years, we've taken steps to address and improve our operational water sustainability. For example, we deployed technology that uses reclaimed wastewater to cool our data center in Douglas County, Georgia. At our office campuses in the San Francisco Bay Area, we worked with ecologists and landscape architects to develop an ecological design strategy and habitat guidelines to improve the resiliency of landscapes and nearby watershed health. This included implementing drip irrigation, using watering systems that adjust to local weather conditions, and fostering diverse landscapes on our campuses that can withstand the stresses of climate change. 

Our water stewardship journey will involve continuously enhancing our water use and consumption. At our data centers, we’ll identify opportunities to use freshwater alternatives where possible — whether that's seawater or reclaimed wastewater. When it comes to our office campuses, we’re looking to use more on-site water sources — such as collected stormwater and treated wastewater — to meet our non-potable water needs like landscape irrigation, cooling and toilet flushing.


Investing in community water security and healthy ecosystems

Water security is an issue that goes beyond our operations, and it’s not something we can solve alone. In partnership with others, we’ll invest in community projects that replenish 120% of the water we consume, on average, across all Google offices and data centers, and that improve the health of the local watersheds where our office campuses and data centers are located. 

Typically, the water we all use every day comes from local watersheds — areas of land where local precipitation collects and drains off into a common outlet, such as a river, bay or other receiving body of water. There are several ways to determine whether a watershed is sustainable, including measuring water quality, water availability, and community access to the water.

We’ll focus on solutions that address local water and watershed challenges. For example, we’re working with the Colorado River Indian Tribes project to reduce the amount of water withdrawn from the Lake Mead reservoir on the Colorado River in Nevada and Arizona. In Dublin, Ireland, we’re installing rainwater harvesting systems to reduce stormwater flows and improve water quality in the River Liffey and Dublin Bay. And in Los Angeles, we’re investing in efforts to remove water-thirsty invasive species to help the nearby ecosystem in the San Gabriel Mountains.


Using data tools to predict and prevent water stress

Communities, policymakers and planners all need tools to measure and predict water availability and water needs. We’re dedicated to working with partners to make those tools and technologies universally accessible. To that end, we’ve recently worked with others on these water management efforts: 


  • Partnered with the United Nations Environment Programme and the European Commission’s Joint Research Centre (JRC) to create the Freshwater Ecosystems Explorer. This tool tracks surface water changes over time on a national and local scale. 

  • Co-developed the web application OpenET with academic and government researchers to give farmers, landowners and water managers access to satellite-based data that shows how and where water moves when it evaporates.

  • Provided Google.org funding for Global Water Watch and Windward Fund’s BlueConduit. Global Water Watch provides real-time indicators for current and future water management needs, and was built in partnership with Google.org, WRI, WWF and Deltares. BlueConduit quantifies and maps hazardous lead service lines, making it easier to replace water infrastructure in vulnerable communities.

When it comes to protecting the future of our planet and the resources we rely on, there’s a lot to be done. We’ll keep looking for ways we can use our products and expertise to be good water stewards and partner with others to address these critical and shared water challenges.