A Scalable Approach for Partially Local Federated Learning

Federated learning enables users to train a model without sending raw data to a central server, thus avoiding the collection of privacy-sensitive data. Often this is done by learning a single global model for all users, even though the users may differ in their data distributions. For example, users of a mobile keyboard application may collaborate to train a suggestion model but have different preferences for the suggestions. This heterogeneity has motivated algorithms that can personalize a global model for each user.

However, in some settings privacy considerations may prohibit learning a fully global model. Consider models with user-specific embeddings, such as matrix factorization models for recommender systems. Training a fully global federated model would involve sending user embedding updates to a central server, which could potentially reveal the preferences encoded in the embeddings. Even for models without user-specific embeddings, having some parameters be completely local to user devices would reduce server-client communication and responsibly personalize those parameters to each user.

Left: A matrix factorization model with a user matrix P and items matrix Q. The user embedding for a user u (Pu) and item embedding for item i (Qi) are trained to predict the user’s rating for that item (Rui). Right: Applying federated learning approaches to learn a global model can involve sending updates for Pu to a central server, potentially leaking individual user preferences.
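To make the setup concrete, here is a toy sketch (our own illustration, not code from the paper): the model's prediction for user u and item i is simply the inner product of the user embedding Pu and the item embedding Qi.

```python
# Toy illustration of matrix factorization prediction (our own sketch,
# not code from the paper): R[u][i] is approximated by the dot product
# of the user embedding P[u] and the item embedding Q[i].
def predict_rating(P, Q, u, i):
    return sum(p * q for p, q in zip(P[u], Q[i]))

# Tiny example with 2-dimensional embeddings.
P = {"user_a": [1.0, 0.5]}   # user embeddings: privacy-sensitive, kept local
Q = {"item_x": [0.2, 0.8]}   # item embeddings: aggregated globally
r = predict_rating(P, Q, "user_a", "item_x")  # 1.0*0.2 + 0.5*0.8, about 0.6
```

Training updates to P[u] would encode the user's preferences, which is why sending them to a server is the privacy concern described above.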

In “Federated Reconstruction: Partially Local Federated Learning”, presented at NeurIPS 2021, we introduce an approach that enables scalable partially local federated learning, where some model parameters are never aggregated on the server. For matrix factorization, this approach trains a recommender model while keeping user embeddings local to each user device. For other models, this approach trains a portion of the model to be completely personal for each user while avoiding communication of these parameters. We successfully deployed partially local federated learning to Gboard, resulting in better recommendations for hundreds of millions of keyboard users. We’re also releasing a TensorFlow Federated tutorial demonstrating how to use Federated Reconstruction.

Federated Reconstruction
Previous approaches for partially local federated learning used stateful algorithms, which require user devices to store a state across rounds of federated training. Specifically, these approaches required devices to store local parameters across rounds. However, these algorithms tend to degrade in large-scale federated learning settings. In these cases, the majority of users do not participate in training, and users who do participate likely only do so once, resulting in a state that is rarely available and can get stale across rounds. Also, all users who do not participate are left without trained local parameters, preventing practical applications.

Federated Reconstruction is stateless and avoids the need for user devices to store local parameters by reconstructing them whenever needed. When a user participates in training, before updating any globally aggregated model parameters, they randomly initialize and train their local parameters using gradient descent on local data with global parameters frozen. They can then calculate updates to global parameters with local parameters frozen. A round of Federated Reconstruction training is depicted below.

Models are partitioned into global and local parameters. For each round of Federated Reconstruction training: (1) The server sends the current global parameters g to each user i; (2) Each user i freezes g and reconstructs their local parameters li; (3) Each user i freezes li and updates g to produce gi; (4) Users’ gi are averaged to produce the global parameters for the next round. Steps (2) and (3) generally use distinct parts of the local data.
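The four steps above can be sketched end-to-end in plain Python. This is a schematic toy, not the paper's implementation: scalar embeddings, hand-picked learning rates, and a fixed local initialization (the paper randomly initializes local parameters) are our own simplifications.

```python
# Schematic sketch of Federated Reconstruction on a 1-D matrix
# factorization model: prediction = l * g, squared-error loss.
# g is the globally aggregated parameter; l is local, never uploaded.

def reconstruct_local(g, ratings, steps=20, lr=0.1):
    """Step (2): freeze g, fit the local parameter l from scratch."""
    l = 0.5  # fresh initialization (fixed here for reproducibility)
    for _ in range(steps):
        for r in ratings:
            l -= lr * 2 * g * (l * g - r)  # d/dl of (l*g - r)^2
    return l

def update_global(g, l, ratings, lr=0.1):
    """Step (3): freeze l, compute this user's updated copy of g."""
    for r in ratings:
        g -= lr * 2 * l * (l * g - r)      # d/dg of (l*g - r)^2
    return g

def federated_reconstruction_round(g, users):
    """Steps (1)-(4): each user reconstructs l and updates g; the
    server averages the g updates. Each user holds two disjoint
    local data splits, one for reconstruction and one for update."""
    updates = []
    for recon_data, update_data in users:
        l = reconstruct_local(g, recon_data)  # l stays on-device
        updates.append(update_global(g, l, update_data))
    return sum(updates) / len(updates)

# Two users with different preferences (ratings 1.0 vs. 2.0).
users = [([1.0, 1.0], [1.0]), ([2.0, 2.0], [2.0])]
g = 1.0
for _ in range(10):
    g = federated_reconstruction_round(g, users)
```

Note that no local parameter ever leaves `reconstruct_local`'s scope, and a user unseen during training can still run `reconstruct_local` at inference time to personalize the shared `g`.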

This simple approach avoids the challenges of previous methods. It does not assume users have a state from previous rounds of training, enabling large-scale training, and local parameters are always freshly reconstructed, preventing staleness. Users unseen during training can still get trained models and perform inference by simply reconstructing local parameters using local data.

Federated Reconstruction trains better-performing models for unseen users than other approaches. For a matrix factorization task with unseen users, the approach significantly outperforms both centralized training and the Federated Averaging baseline.

                      RMSE ↓    Accuracy ↑
Centralized           1.36      40.8%
FedAvg                0.934     40.0%
FedRecon (this work)  0.907     43.3%
Root-mean-square-error (lower is better) and accuracy for a matrix factorization task with unseen users. Centralized training and Federated Averaging (FedAvg) both reveal privacy-sensitive user embeddings to a central server, while Federated Reconstruction (FedRecon) avoids this.

These results can be explained via a connection to meta learning (i.e., learning to learn); Federated Reconstruction trains global parameters that lead to fast and accurate reconstruction of local parameters for unseen users. That is, Federated Reconstruction is learning to learn local parameters. In practice, we observe that just one gradient descent step can yield successful reconstruction, even for models with about one million local parameters.

Federated Reconstruction also provides a way to personalize models for heterogeneous users while reducing communication of model parameters — even for models without user-specific embeddings. To evaluate this, we apply Federated Reconstruction to personalize a next word prediction language model and observe a substantial increase in performance, attaining accuracy on par with other personalization methods despite reduced communication. Federated Reconstruction also outperforms other personalization methods when executed at a fixed communication level.

                      Accuracy ↑    Communication ↓
FedYogi               24.3%         Whole Model
FedYogi + Finetuning  30.8%         Whole Model
FedRecon (this work)  30.7%         Partial Model
Accuracy and server-client communication for a next word prediction task without user-specific embeddings. FedYogi communicates all model parameters, while FedRecon avoids this.

Real-World Deployment in Gboard
To validate the practicality of Federated Reconstruction in large-scale settings, we deployed the algorithm to Gboard, a mobile keyboard application with hundreds of millions of users. Gboard users use expressions (e.g., GIFs, stickers) to communicate with others. Users have highly heterogeneous preferences for these expressions, making the setting a good fit for using matrix factorization to predict new expressions a user might want to share.

Gboard users can communicate with expressions, preferences for which are highly personal.

We trained a matrix factorization model over user-expression co-occurrences using Federated Reconstruction, keeping user embeddings local to each Gboard user. We then deployed the model to Gboard users, leading to a 29.3% increase in click-through-rate for expression recommendations. Since most Gboard users were unseen during federated training, Federated Reconstruction played a key role in this deployment.

Further Explorations
We’ve presented Federated Reconstruction, a method for partially local federated learning. Federated Reconstruction enables personalization to heterogeneous users while reducing communication of privacy-sensitive parameters. We scaled the approach to Gboard in alignment with our AI Principles, improving recommendations for hundreds of millions of users.

For a technical walkthrough of Federated Reconstruction for matrix factorization, check out the TensorFlow Federated tutorial. We’ve also released general-purpose TensorFlow Federated libraries and open-source code for running experiments.

Acknowledgements
Karan Singhal, Hakim Sidahmed, Zachary Garrett, Shanshan Wu, Keith Rush, and Sushant Prakash co-authored the paper. Thanks to Wei Li, Matt Newton, and Yang Lu for their partnership on Gboard deployment. We’d also like to thank Brendan McMahan, Lin Ning, Zachary Charles, Warren Morningstar, Daniel Ramage, Jakub Konecný, Alex Ingerman, Blaise Agüera y Arcas, Jay Yagnik, Bradley Green, and Ewa Dominowska for their helpful comments and support.

Source: Google AI Blog


Shared budget for Smart Shopping Campaigns

Starting February 15, 2022, all existing and future Smart Shopping Campaigns (SSC) will use a shared budget type. Although shared, the assigned budget will only be used by the SSC and will behave like a standard, non-shared campaign budget. New campaigns cannot be added to the shared budget. This change will not have any impact on campaign performance. In reports and queries, existing and future SSC budgets will be returned as explicitly_shared = true (isExplicitlyShared in AdWords API).

Note: the AdWords API will sunset on April 27, 2022. Developers must migrate to the Google Ads API before then.

If you have any questions or need additional help, contact us via the forum.

Our third GNI Startups Boot Camp will support 16 Canadian news entrepreneurs

Editor's Note: This blog post is cross-posted from the LION Publishers website.

These aspiring journalism publishers will serve communities from Nova Scotia to Vancouver 



From upper left: Shauna Rae, Jordan Maxwell, Ashleigh-Rae Thomas, Flavian DeLima, Yona Harvey, Camila Castaneda, Charles Mandel, Ayesha Ghaffar, Seyedmostafa Raziei, Sandra Hannebohm, Cara Fox and Kelly-Anne Riess 

For the past two years, our GNI Startups Boot Camp has helped nearly 50 founders launch independent news businesses across the U.S. and Canada. But as we supported those aspiring entrepreneurs to validate and execute their ideas, we kept coming back to one question: the Canadian news landscape has unique challenges and opportunities of its own, so why not create a program specifically to serve this burgeoning ecosystem? 


That’s why we’re thrilled to announce, in partnership with the Google News Initiative, the inaugural cohort for our GNI Startups Boot Camp Canada. 



These 16 startups will embark on an intensive eight-week program that includes the training and coaching that will help them launch sustainably and meet their communities’ information needs. 



“We’re looking forward to working with this passionate, diverse and all-Canadian cohort of emerging news entrepreneurs, and supporting them on their path to launching an independent news business,” said Andrew Wicken, Head of News Partnerships at Google Canada. “The curriculum has been adapted specifically to address the realities of operating in Canada and supports our mission of helping to build a thriving, diverse and innovative Canadian news ecosystem. We’ve seen how this program can accelerate their progress, build connections and community, and set them on a path to sustainability.” 



The cohort members were selected by an independent panel of judges based on their compelling ideas, potential to make a strong impact and commitment to making their publications financially sustainable. They’ll be guided by the following team of exceptional industry experts: 




  • Boot Camp Co-Producer (and 2020 Boot Camp graduate!) Eva Voinigescu is a freelance journalist and audio producer based in Toronto. She currently produces the Energy vs Climate podcast. 
  • Boot Camp Coach Natasha Grzincic (Gur-zin-sitch) is the deputy editor at VICE Canada and the force behind Tipping Point, VICE’s series on environmental justice. She’s also a co-founder of Canadian Journalists of Colour, “a networking and resource-sharing group for racialized journalists that’s now over 1,300 strong.” 
  • Boot Camp Coach Hannah Sung is a journalist and co-founder of Media Girlfriends, a podcast production company focused on inclusivity in media. She also writes the newsletter At The End Of the Day. 
  • Boot Camp Director Phillip Smith is a veteran consultant and coach. His passion is helping newsrooms to make more money, helping news startups grow their audience, and helping journalists succeed as entrepreneurs. The boot camp curriculum was developed during his time as a John S. Knight fellow at Stanford University. 


“As someone born in Toronto General Hospital, who spent childhood summers with family in Quebec and Nova Scotia, as well as having spent a decade working in a startup newsroom in Vancouver, the opportunity to make the Boot Camp available to fellow Canadians is a real honour,” Smith said. “The individuals in this cohort are determined and their initiatives are very exciting — I can’t wait to get started.” 


About the 2021 cohort: 

  • 10 publications will focus on a local or regional audience; 6 will focus on a demographic or identity-based audience 
  • 14 publications will explicitly serve an underrepresented or marginalized audience that doesn’t often see itself reflected in the media 
  • 56% of participants identify as a person of colour or as coming from an underrepresented or marginalized background 
  • The publications will serve audiences in 7 Canadian provinces/territories 
  • 14 publications will be run by solopreneurs; 2 will operate as teams of two 
  • 10 publications haven’t launched yet; 6 are in the process of testing their idea or have very recently launched 



Meet the 16 teams in the cohort: 

*There are three additional startups not listed below that are remaining in stealth mode for now. 

Clearing a New Path 

Serving: Dorchester, Ontario 
Description: Amplifying the underrepresented voices of women entrepreneurs in rural Canada 
Founder: Shauna Rae 


From a Coloured Lens 

Serving: Vancouver, British Columbia 
Description: A podcast designed by and for BIPOC people, allowing them to share how mainstream issues impact their communities 
Founder: Ayesha Ghaffar 



Latitodo 

Serving: Vancouver, British Columbia 
Description: A publication focused on the Latinx diaspora, specifically young adults who are interested in connecting with their heritage whilst adapting to life in Canada. 
Founder: Camila Castaneda 



Island Edition 

Serving: Prince Edward Island 
Description: An independent journalism platform to report in-depth on the issues affecting Prince Edward Island, and to share the stories and experiences of the people living in Canada’s smallest province. 



Mabuhay Canada 

Serving: Belleville, Ontario 
Description: Mabuhay Canada is a one-stop website featuring news, local and international Filipino newsmakers, services, shopping, travel and immigration needs for Filipinos living in Canada 
Founder: Yona Harvey 



Mostafa 

Serving: Vancouver, British Columbia 
Description: Online new media focused on bringing newcomer and immigrant points of view to a greater audience 
Founder: Seyedmostafa Raziei 



North Star Press 

Serving: Toronto, Ontario 
Description: A magazine/online paper for Black leftists, named North Star Press after Frederick Douglass’ anti-slavery newspaper 
Founder: Ashleigh-Rae Thomas 



Parles-on

Serving: Montreal, Quebec 
Description: A bilingual podcast about current events in Canada and beyond 
Founder: Cara Fox 



South Shore Lines 

Serving: Nova Scotia’s South Shore 
Description: A rural, alternative digital magazine focused on news, arts, culture, and more for Nova Scotia’s South Shore 
Founder: Charles Mandel 



Spinning Forward 

Serving: Toronto, Ontario 
Description: Collectively and collaboratively, we want to build and grow a more inclusive and strong online BIPOC creator community in Toronto on their own terms 
Founder: Flavian DeLima 



The Flatlander 

Serving: Regina, Saskatchewan and Winnipeg, Manitoba 
Description: An email newsletter about important issues that impact Manitoba and Saskatchewan 
Founder: Kelly-Anne Riess 



To Be Announced 

Serving: Toronto, Ontario 
Description: To be announced 
Founder: Jordan Maxwell 



Twice as Good Media (2G) 

Serving: Halifax, Nova Scotia 
Description: A hub for compelling narratives and black talent in media 
Founder: Sandra Hannebohm 



The GNI and LION would like to thank our panel of judges who were instrumental in selecting the cohort. They are: Gina Uppal from the On Canada Project, Colleen Kimmett from the Google News Initiative, Adam Chen from Talk Media, Jordan MacInnis from Journalists for Human Rights, Nkem Kalu from The Northpine Foundation, Julie Sobowale from the Canadian Association of Black Journalists, Sadiya Ansari from Canadian Journalists of Colour and Samanta Krishnapillai from the On Canada Project. 

2021 Assistant Recap

Posted by Jessica Dene Earley-Cha, Mike Bifulco and Toni Klopfenstein, Developer Relations Engineers for Google Assistant

We've reached the end of the year - and what a year it's been! Between all of our live (virtual) events including I/O, developer summits, meetups and more, there are a lot of highlights for App Actions, Smart Home Actions and Conversational Actions. Let's dive in and take a look.

App Actions

App Actions allows developers to extend their Android app to Google Assistant, and it integrates more cleanly with Android using new Android platform features. With the introduction of the beta shortcuts.xml configuration resource, expanded Android platform features, and the new Google Assistant plugin for Android Studio, App Actions is moving closer to the Android platform.

App Actions Benefits:

  • Display app information on Google surfaces. Provide Android widgets for Assistant to display, offering inline answers, simple confirmations and brief interactions to users without changing context.
  • Launch features from Assistant. Connect your app's capabilities to user queries that match predefined semantic patterns (BII).
  • Suggest voice shortcuts from Assistant. Use Assistant to proactively suggest tasks for users to discover or replay, in the right context.

Core Integration

Capabilities is a new Android framework API that allows you to declare the types of actions users can take to launch your app and jump directly to performing a specific task. Assistant provides the first available concrete implementation of the capabilities API. You can utilize capabilities by creating a shortcuts.xml resource and defining your capabilities. A capability specifies two things: how it's triggered and what to do when it's triggered. To add a capability, you'll need to select a Built-In Intent (BII); BIIs are pre-built language models that provide all the natural language understanding needed to map the user's input to individual fields. When a BII is matched by the user's request, your capability will trigger an Android intent that delivers the understood BII fields to your app, so you can determine what to show in response.

To support a user query like “Hey Google, Find waterfall hikes on ExampleApp,” you can use the GET_THING BII. This BII supports queries that request an “item” and extracts the “item” from the user query as the parameter thing.name. The best use case for the GET_THING BII is to search for things in the app. Below is an example of a capability that uses the GET_THING BII:

<!-- This is a sample shortcuts.xml -->
<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
  <capability android:name="actions.intent.GET_THING">
    <intent
        android:action="android.intent.action.VIEW"
        android:targetPackage="YOUR_UNIQUE_APPLICATION_ID"
        android:targetClass="YOUR_TARGET_CLASS">
      <!-- E.g., name = "waterfall hikes" -->
      <parameter
          android:name="thing.name"
          android:key="name"/>
    </intent>
  </capability>
</shortcuts>

This framework integration is in the Beta release stage, and will eventually replace the original implementation of App Actions that uses actions.xml. If your app provides both the new shortcuts.xml and old actions.xml, the latter will be disregarded.

Learn how to add your first capability with this codelab.

Voice shortcuts

Google Assistant suggests relevant shortcuts to users during contextually relevant times. Users can see what shortcuts they have by saying “Hey Google, shortcuts.”

Shortcut for Google Assistant

You can use the Google Shortcuts Integration library, currently in beta, to push an unlimited number of dynamic shortcuts to Google to make your shortcuts visible to users as voice shortcuts. Assistant can suggest relevant shortcuts to users to help make it more convenient for the user to interact with your Android app.

Learn how to push your dynamic shortcuts to Assistant with our dynamic shortcuts codelab.

Example of App using Dynamic Shortcuts CodeLab Tool

Simple Answers, Hands Free & Android Auto

In situations where users need a hands-free experience, such as on Android Auto, Assistant can display widgets to provide simple answers, brief confirmations and quick interactive experiences in response to a user’s inquiry. These widgets are displayed within the Assistant UI, and to implement a fully voice-forward interaction with your app, you can arrange for Assistant to speak a response along with your widget, which is safe and natural for use in automobiles. A great re-engagement feature of widgets is that an “Add this widget” chip can be included, too!


Re-engagement

Another re-engagement tool is the In-App Promo SDK, currently in beta, which lets you proactively suggest shortcuts in your app for actions that the user can repeat with a voice command to Assistant. The SDK allows you to check whether the shortcut you want to suggest already exists for that user and prompt the user to create the suggested shortcut.

New Tooling

To support testing Capabilities, the Google Assistant plugin for Android Studio was launched. It contains an updated App Action Test Tool that creates a preview of your App Action, so you can test an integration before publishing it to the Play store.

New App Actions resources

Learn more with new or updated content:


Smart Home Actions

A big focus of this year's Smart Home launches was new and updated tools. At events like I/O, Works With: SiLabs, and the Google Smart Home Developer Summit, we shared these new resources to help you quickly build a high quality smart home integration.

New Resources

To make implementing new features even easier for developers, we released many new tools to help you get your Smart Home Action up and running.

To help consumers discover Google-compatible smart home devices and associated routines, we released the smart home directory, accessible on the web and through the Google Home app.

We heard your requests for more ways to localize your integrations, so we added sample utterances in English (en-US), German (de-DE), and French (fr-FR) to several device guides. Additionally, we also rolled out Chinese (zh-TW) as one of the supported languages for the overall platform. To make our documentation more accessible, we added a Japanese translation of our developer guides.

We also released several new device types and traits, along with new features to support your integrations, including proactive and follow-up responses, app discovery and deep linking.

Quality Improvements

For general onboarding, we've added three new codelabs to enable you to dive deeper into debugging and monitoring your projects. You can now walk through debugging smart home Actions, debugging local fulfillment Actions, and dig deeper into your log-based metrics for your Actions.

When you're actively developing your integration, the Google Home Playground can simulate a virtual home with configurable device types and traits. Here you can view the types and traits in Home Graph, modify device attributes, and share device configurations.

If you discover issues with your configuration, we've continued upgrading the monitoring and logging dashboards to show you detailed views of events with your integrations, as well as better guidance on how to handle errors and exceptions.

The WebRTC Validator Tool acts as a WebRTC peer to stream to or from, and generally emulates the WebRTC player on smart displays with Google Assistant. If you're specifically working with a smart camera, WebRTC is now supported on the CameraStream trait.

Local Home

To continue striving toward quality responses to user queries, we also extended the Local Home SDK to support local queries and responses. Additionally, to help users onboard new devices in their homes quickly and use Google Nest devices as local hubs, we launched BLE Seamless Setup.

Matter

The new Google Home IDE enables you to improve your development process by enabling in-IDE access to Google Assistant Simulator, Cloud Logging, and more for each of your projects. This plugin is available for VSCode.

Finally, as we get closer to the official launch of the Matter protocol, we're working hard to unify all of our smart home ecosystem tools together under a single name - Google Home. The Google Home Developer Center will enable you to quickly find resources for integrating your Matter-compatible smart devices and platforms with Nest, Android, Google Home app, and Google Assistant.

Conversational Actions

Way back in January of 2021, we rolled out an updated Actions for Families program, which provides guidelines for teams building Actions meant for kids. Conversational Actions that are approved for the Actions for Families program get a special badge in the Assistant Directory, which lets parents know that your Action is family-friendly.

During the What's New in Google Assistant keynote at Google I/O, Director of Product for the Google Assistant Developer Platform Rebecca Nathenson mentioned several coming updates and changes for Conversational Actions. This included the launch of a Developer Preview for a new client-side fulfillment model for Interactive Canvas. Client-side fulfillment changes the implementation strategy for Interactive Canvas apps, removing the need for a webhook relaying information between the Assistant NLU and the web application. This simplifies the infrastructure needed to deploy an action that uses Interactive Canvas. Since the release of this Developer Preview, we’ve been listening closely to developers to get feedback on client-side fulfillment.

Interactive Canvas Developer Tools

We also released Interactive Canvas Developer tools - a Chrome extension which can help dev teams mock and debug the web app side of Interactive Canvas apps and games. Best of all, it’s open source! You can install the dev tools from the Chrome Web Store, or compile them from source yourself on GitHub at actions-on-google/interactive-canvas-dev-tools.

Updates to SSML

Earlier this year we announced support for new SSML features in Conversational Actions. This expanded support lets you build more detailed and nuanced features using text to speech. We produced a short demonstration of SSML features on YouTube, and you can find more in our docs on SSML if you’re ready to dive in and start building.

Updates to Transaction UX for Smart Displays

Also announced at I/O for Conversational Actions - we released an updated workflow for completing transactions on smart displays. The new transaction process lets users complete transactions from their smart screens, by confirming the CVC code from their chosen payment method, rather than using a phone to enter a CVC code. If you’d like to get an idea of what the new process looks like, check out our demo video showing new transaction features on smart devices.

Tips on Launching your Conversational Action

Our guide, Driving a successful launch for Conversational Actions, contains helpful information to help you think through strategies for putting together a marketing team and a go-to-market plan for releasing your Conversational Action.

Looking forward to 2022

We're looking forward to another exciting year in 2022. To stay connected, sign up for our new App Actions email series or Google Home newsletter, or for the general Assistant newsletter.

As always, you can also join us on Reddit or follow us on Twitter. Happy Holidays!

5 tips to finish your holiday shopping with Chrome

We’re coming down to the wire with holiday shopping, and many of us are frantically searching online for last-minute stocking stuffers. Luckily, a few new features are coming to Chrome that will make these final rounds of shopping easier — helping you keep track of what you want to buy and finally hit "order."

Here are five ways to use Chrome for a stress-free shopping experience.

1. Keep track of price drops: Are you waiting for a good deal on that pair of headphones, but don’t have time to constantly refresh the page? A new mobile feature, available this week on Chrome for Android in the U.S., will show an item’s updated price right in your open tabs grid so you can easily see if and when the price has dropped. This same feature will launch on iOS in the coming weeks.

Screenshot showing a grid of four tabs in Chrome. Two tabs are product pages and show a price drop on top of the tab preview, highlighted in green.

2. Search with a snapshot from the address bar: If something catches your eye while you’re out window shopping, you can now search your surroundings with Google Lens in Chrome for Android. From the address bar, tap the Lens icon and start searching with your camera.

Coming soon, you’ll also be able to use Lens while you’re browsing in Chrome on your desktop. If you come across a product in an image and want to find out what it is, just right-click and select the “Search images with Google Lens” option.

3. Rediscover what’s in your shopping cart: You know you have items in your shopping cart, but you can't remember where exactly. No need to search all over again. Starting with Chrome on Windows and Mac in the U.S., you can now open up a new tab and scroll to the “Your carts” card to quickly see any site where you’ve added items to a shopping cart. Some retailers, like Zazzle, iHerb, Electronic Express and Homesquare, might even offer a discount when you come back to check out.

4. Get passwords off your plate: Don’t worry about setting up and remembering your account details for your favorite shopping sites. Chrome can help create unique, secure passwords and save your login details for future visits.

5. Simplify the checkout process: By saving your address and payment information with Autofill, Chrome can automatically fill out your billing and shipping details. And when you enter info into a new form, Chrome will ask if you’d like to save it.

Making dynamic groups more powerful with custom user attributes and OrgUnit queries

What’s changing 

Google Groups are a convenient way for Workspace users to collaborate and a powerful tool for admins to apply consistent security and access policies to sets of users or devices. Dynamic groups further enhance this functionality by allowing group membership to be automatically updated based on parameters such as location, department, or job title. 

Today we are further extending the functionality of dynamic groups in two important ways: 
  • First, dynamic groups can now be defined by querying custom user attributes. This functionality is available as an open beta (no sign up required). 
  • Second, dynamic groups can also be defined based on users’ membership in Organizational Units (OUs). This feature is now generally available. 

Who’s impacted 

Admins only 


Why you’d use it 

Dynamic groups can be used for email distribution lists, access control, group-based policies, and more. Compared to regular Google Groups, they have the added benefit that memberships are automatically kept up to date. Automating membership management increases security, reduces errors, and alleviates user frustration while minimizing the burden on admins. 

These new features expand the utility of dynamic groups for organizations that take advantage of custom user attributes and organizational units, letting them further tailor dynamic groups to their specific needs. For example, these organizations could now: 
  • Create a dynamic group for all users of a subsidiary (an organizational unit) based in a particular city or state. 
  • Create a dynamic group with all users with a custom attribute of a “job_skill” or “speciality”. 

Getting started 

  • Admins: To take advantage of this new dynamic group functionality, you will need to have already defined custom user attributes or organizational units. 
    • Once this is in place you can test membership queries and then create / update dynamic groups to take advantage of them. 
      • To query a custom attribute “EmployeeNumber” (based on this sample schema): user.custom_schemas.employmentData.EmployeeNumber == '123456789' 
      • To query all direct members of an organizational unit: user.org_unit_id==orgUnitId('03ph8a2z1enx4lx') 
      • To query all direct and indirect members of an organizational unit: user.org_units.exists(org_unit, org_unit.org_unit_id==orgUnitId('03ph8a2z1khexns')) 
  • End users: Not available to end users. 
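The membership queries above are evaluated server-side against each user's profile. As a rough sketch of how one of them could be wired into an API call, the snippet below builds a request body for creating a dynamic group via the Cloud Identity `groups.create` method; the group email, customer ID, and the schema/attribute names are hypothetical placeholders taken from the sample above, so treat this as an illustration rather than a drop-in script.

```python
import json


def dynamic_group_body(group_email: str, customer_id: str, query: str) -> dict:
    """Build a groups.create request body with a USER-resource membership query."""
    return {
        "groupKey": {"id": group_email},
        "parent": f"customers/{customer_id}",
        # Marks the group as a standard (discussion-forum style) group.
        "labels": {"cloudidentity.googleapis.com/groups.discussion_forum": ""},
        "dynamicGroupMetadata": {
            "queries": [{"resourceType": "USER", "query": query}]
        },
    }


# Membership query copied from the examples above: all users whose custom
# employmentData.EmployeeNumber attribute equals '123456789'.
body = dynamic_group_body(
    "badge-123@example.com",   # hypothetical group email
    "C0123abcd",               # hypothetical customer ID
    "user.custom_schemas.employmentData.EmployeeNumber == '123456789'",
)
print(json.dumps(body, indent=2))
```

The same body shape would carry an organizational-unit query instead; only the `query` string changes, e.g. the `orgUnitId(...)` expressions shown above.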

Rollout pace 

  • Custom user attribute queries are available now for all users in open beta (no sign up required). 
  • Organizational unit based dynamic group queries are now generally available for all users. 

Availability 

  • Available to Google Workspace Enterprise Standard, Enterprise Plus, and Education Plus customers 
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Education Fundamentals, Frontline, and Nonprofits, as well as G Suite Basic and Business customers 


Highlights from Women’s Online Safety Week 2021

In 2020, Google community manager Merve Isler, who lives in Turkey and leads Women Techmakers efforts in Turkey, Central Asia and the Caucasus region, organized the first-ever Women’s Online Safety Hackathon.

“It was the first online safety digital hackathon in the world and was a pilot for everyone,” she says. “We tried it, and it worked well, so we planned a second one, a new version that would be even more inclusive.”

Isler and Women Techmakers ambassadors in Turkey met online almost every day for two months to plan the event.

“I met with Turkish UN activist Zeynep Dilruba Tasdemir right before starting the program planning, and she inspired me to connect the WTM ambassadors with the United Nations Population Fund (UNFPA),” says Isler.

That led to partnerships with three major nonprofit organizations: the Habitat Association, TurkishWIN and UNFPA Turkey, which provided speakers for the event, mentors for the ideathon and social media marketing support. UNFPA’s youngest ambassador, 19-year-old Selin Özünaldım, spoke at the event.

Twenty-three teams competed in the ideathon, including the jury special award winners, two 12-year-old students. “They were so passionate about solving this important issue,” says Isler.

One project to emerge from the ideathon was BlueX, which uses a text blocker integrated into browsers and social media to read incoming messages, detect harassing or violent language, and block the message.

The event also expanded to an entire week: Women’s Online Safety Week 2021 spanned 10 sessions, held online in Turkish. Attendees could participate in four webinars, two keynotes, four trainings and one ideathon, in which teams of women created technical solutions to the problem of online violence against women. More than 2,000 people viewed the webinars, taught by online security experts from organizations that conduct research on digital security. Facilitators from #IamRemarkable, a Google initiative that empowers women and other underrepresented groups to celebrate their success at work and beyond, also led virtual workshops.

Amid the keynotes and tech talk, Isler says the event also served as a supportive place to share experiences of online harassment and abuse.

“We feel empowered to support each other, and if we see online violence, doxxing, stalking, we should speak up,” she says.

As a champion of developer communities in her professional role, Isler encourages others to find a community that feels like the right place for them.

“At the end of the event, I was doing a final speech, and I said that joining communities to share your experiences is critical, to highlight the issue and get support from each other,” she says. “Joining a community is for career development — and also to feel safe and thrive in technology.”

This engineer creates community for Indigenous Googlers

Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns and alumni about how they got to Google, what their roles are like and even some tips on how to prepare for interviews.

Today’s post is all about Tamina Pitt, a Google Maps Software Engineer from our Sydney office and a founding member of the Google Aboriginal and Indigenous Network chapter in Australia.

What do you work on at Google?

As a Software Engineer for the Directions Platform team, I build the directions experience on Google Maps. I code for anyone who needs help finding their way. I love working on a feature that benefits so many people every day.

I'm a Wuthathi and Meriam woman, meaning that my ancestors are Aboriginal from Far North Queensland and the Torres Strait Islands in Australia. I was born and grew up on the ancestral lands of the Gadigal and Bidjigal people in Sydney, where I still live today. When I came to Google, I wanted to create a community for Indigenous Googlers like me to come together and build a sense of belonging at work. So I co-founded the Australian chapter of the Google Aboriginal and Indigenous Network (GAIN), an Employee Resource Group (ERG) for Googlers from, or passionate about, Indigenous and Aboriginal people. I also contribute to the Reconciliation Action Plan (RAP), Google's commitment to empower and create equitable opportunities for Aboriginal and Torres Strait Islander people. As part of this work, I run events featuring Aboriginal and Torres Strait Islander speakers to help Googlers learn more about Indigenous culture.

Why did you apply to Google?

I first applied to Google when I was a student at the University of New South Wales in Sydney. I was in my second year and still unsure about my future in engineering. I hadn't been applying for internships because I didn't think I was good enough, but my parents pushed me to apply for one at Google — and it turned out to be one of the best decisions I ever made.

Tamina stands outdoors in front of a wall of greenery, tossing a graduation cap. She is wearing a red outfit with a shrimp pattern, a black graduation robe, a red, yellow and black sash (the colors of the Aboriginal flag) and a blue and green sash (the colors of the Torres Strait Islander flag).

Tamina at her University of New South Wales graduation, wearing sashes representing the colors of the Aboriginal and Torres Strait Islander flags.

Describe your path to your current role.

I studied electrical engineering for a year or so, during which I took a computing course that I really enjoyed. I eventually transferred to study computer engineering and discovered that I was interested in the software side.

Interning at Google helped me officially try software engineering out for size. My confidence grew once I got some hands-on experience — and now, I’ve been working at Google for two years as a full-time software engineer.

What inspires you to come in (or log in) every day?

I'm inspired by my community of Indigenous people in and outside of work, including the Indigenous activists and Elders who fought and continue to fight for our rights to be recognized. I'm also inspired by the growing interest I see in young Indigenous people and women to work in science, technology, engineering and mathematics (STEM). It makes me really excited for the future.

I really enjoy working on Google Maps, too. Every time I meet a new person, they share their love of Google Maps or send me feature requests. I like knowing that the product I work on is useful for so many people and that I’m part of the team that can make it even better.

Tamina stands in front of a wall of leaves. She is wearing a red shirt with a black-and-white floral skirt. She holds cardboard signs of the Google Maps red logo and the Google “I’m Feeling Lucky” search button. She also wears a Noogler hat — a green, blue, red and yellow hat with a propeller.

Tamina at her Google orientation in Singapore.

What was your interview experience like?

I was very nervous for each interview, because I felt like I didn’t have enough coding experience. I was surprised by how friendly the interviewers were and even found myself having fun. As a new graduate, I was relieved that they didn’t expect me to perform at the same level as someone who's been working for many years.

What advice would you share with your past self?

When I was a student, I didn’t feel like I belonged — I was one of few women and the only Indigenous person in my class. Today, I know that so many people feel the same way. I would tell my past self to stay strong in my identity and feel proud of my achievements. I feel so supported by my community and I want to help other women, Indigenous people and anyone historically underrepresented in tech see their potential in this field.

Google Fiber is hiring!

Google Fiber is launching a new careers site (fiber.google.com/careers) to make it easier for candidates to find the right role and to learn more about what it’s like working here. Browse the latest job listings for both local and remote roles, along with information about our workplace culture and perspectives from our employees.

Finding the right people is one of the most important tasks any company takes on, and that’s true at Google Fiber. We’re committed to ensuring we can provide our customers with an incredible product paired with exceptional service, and having a great team is essential to doing that. And it doesn’t hurt that we like each other quite a bit, too. But don’t take our word for it: check out the video below to hear from team members across the country about what makes Google Fiber the right place for them (and potentially for you, too!).

[Video: Google Fiber team members on what makes it the right place for them (https://storage.googleapis.com/fiber/blog/GF-employer-video.mp4)]



20 years of Google in Canada



It’s 2001: Nickelback’s “How You Remind Me” is blaring from car radios, Drake has just made his debut on “Degrassi: The Next Generation,” and we’re all recovering from the shock of 9/11. Harry Potter first appeared on movie screens, giving us license to believe in magic, and Chris Hadfield became the first Canadian to do a space walk. And yet, we could only take fuzzy, grainy photos with our cell phones. 

Twenty years later, the world has changed, online and off. The Harry Potter crew are no longer children and we’ve moved from Nickelback to The Weeknd. The hit musical Come from Away is returning to the stage to remind us that human connection and kindness still define us as Canadians. And wow, can we ever take a high quality photo with our new cell phones (especially with the Pixel, naturally). 

Google Canada has changed, too. This month marks the 20th anniversary of Google’s arrival in Canada. And if you don’t remember the pomp and circumstance around the event, it’s because there wasn’t any. 

Google Canada began with a single hire in a small workspace in Toronto in 2001 and a few short years later, Google opened its Montréal office. In 2005, Google set up shop in Canada’s technology hub, Waterloo, Ontario, and over the years we have become a part of the Waterloo region technology community, contributing volunteer hours to STEM education programs and hiring engineers to build Google products that Canadians and people around the world use every day. And now, Google Canada is home to more than 2,500 employees. 

We’ve had our share of adventures: bringing maps to the north, helping Canadian businesses tap into the digital economy, opening Cloud regions in Montréal and Toronto to serve Canadian businesses, introducing the world to Canadian creators on YouTube and building new offices in Waterloo, Toronto and Montréal. For the past twenty years, we’ve been fortunate enough to help Canadians search, grow and connect to the world around them using Google products and services. 

Creating opportunity for all Canadians 
During our time here, Google Canada has been investing in the communities where we live and work, through Google.org Community Grants, Google for Startups Accelerator programs, and investments in digital skills training. Over the last 20 years, Google has invested $25 million in Canadian nonprofits working to expand economic opportunity and help Canadians learn new skills, through commitments to organizations like NPower Canada and ComIT. 

And we have a long history of working closely with community partners and organizations across Canada to make STEM programs accessible to all students. In 2021, we reached more than 200,000 Canadian learners through STEM outreach and we trained approximately 4,000 educators in CS First. Our STEM and CS First outreach is orchestrated by Google Canada in conjunction with The Cobblestone Collective and supplemented by our Google Canada volunteers to support their local communities. 

 A home for Canadian engineering excellence 

Canada has been synonymous with top-notch computer science and engineering for more than 50 years, and more recently with AI research and advancement. It’s for this reason that in 2013 Google welcomed Geoffrey Hinton, a pioneer in the field of deep learning, to the Google Toronto office. And in 2016, Google Research established a Canadian centre of AI excellence by bringing Google Brain, a research team dedicated to deep learning, to Montréal. Every day, these world-leading teams tackle some of the biggest technological challenges of our time. 

Google Canada engineers have conceived of, developed, and implemented innovative products that many Canadians might take for granted: 
  • In 2011, Google Canada engineers played a key role in the development of the first Gmail app for iOS, bringing a Gmail app to the world for iPhone, iPad and iPod touch
  • The Cloud Healthcare API, developed by the Cloud Healthcare & Life Sciences team in Waterloo since 2017, allows healthcare customers to organize and analyze their healthcare data in a scalable, compliant and privacy sensitive way. In 2020, the Cloud Healthcare API became widely available to healthcare organizations around the world. 
  • Our Safe Browsing team in Montréal protects over 4 billion devices worldwide each year, delivering millions of warnings a month about phishing scams and other online threats. 
Providing platforms for Canadian success stories 
For over 20 years, Google has helped Canadian businesses of all sizes use our digital tools to grow and reach customers across the globe. Before COVID-19, making the transition to digital was aspirational for most business owners; when the pandemic upended all of our lives, it became essential. To better understand how Google products helped Canadian workers and businesses in 2020, Google commissioned the independent consultancy Public First to take a look, and they found that Google’s search and advertising tools helped provide an estimated $26 billion in economic activity for over 600,000 businesses in Canada. In 2020 alone, the total economic impact of Google products and services in Canada was equivalent to 1.3% of total GDP, or the equivalent of supporting 235,000 jobs. 

And YouTube has facilitated the rise of the Canadian creator economy, helping content creators build sustainable businesses on the platform. A report by Oxford Economics estimates that in 2020, YouTube’s creative ecosystem contributed approximately $923 million to Canada’s GDP. In that same period, YouTube supported the equivalent of 34,100 full-time jobs across Canada. Access to YouTube’s open platform continues to create a real and positive impact on the wider Canadian economy, and we can’t wait to watch the next generation of Canadian creators grow, create and connect on the platform. 

This month, Google Canada is officially 20 years old and more than 2,500 Googlers strong. We’re working, living, and growing in communities across this country. We’re delivering innovations that are helping people through the toughest times of their lives. And we’re doing all of that as we stay committed to the same goal we were founded on: to organize the world’s information and make it universally accessible and useful. 

It only seems fitting that to celebrate the past 20 years, we take a look back at the most interesting searches over these two decades, to reflect back what Canadians have been curious about. And it turns out, what we’re most curious about is us. At its heart, Google is a place for people to ask questions – about ourselves, current events, and the world we are striving to create. So, after 20 years of Googling, we just wanted to say to all Canadians: thanks for asking. 

Here’s to everything that comes next.