Making Google Ads API Client Libraries More Inclusive

As part of Google’s effort to foster inclusivity in our products and communities, we will be changing the name of the default branch to main in the Google Ads API client libraries. This change is a small but important step towards making diversity, equity, and inclusion central to our work on the Google Ads API.

This change won’t affect most client library users who access the libraries through a distribution platform like PyPI for Python or Maven for Java. Users who access the libraries directly from the default branch on GitHub should start using the new main branch once it’s available. We encourage contributors who own forks of these client libraries to update them accordingly.

If you have any questions about this change, please file an issue on the issues tracker for your client library of choice (like Python). As always, please feel free to contact us through the forum or at [email protected] for additional help.

Easily chat with meeting participants from a Google Calendar event

What’s changing 

We’re adding an option that makes it easy to chat with meeting attendees directly from Google Calendar. Within the Calendar event on web or mobile, you’ll see a Chat icon next to the guest list — simply select this icon to create a group chat containing all event participants. Please note that this only applies to participants within your organization; external attendees are not included in the chat group. This makes it simple to chat with guests before, during, or after any meeting.

Chat with event attendees directly from the Calendar event on mobile devices


Chat with event attendees directly from the Calendar event on web


Who’s impacted 

End users 

Why you’d use it 

Previously, the main way to communicate with Calendar event attendees was via email. However, there are times when Chat may be preferred to email for communication. For example, sending a message that you’re running late, or sharing resources with attendees not long before the meeting starts. Now, the email and chat options are side by side on the calendar event. This can help you quickly choose whichever form of communication you prefer, and start conversations with just a few taps. When combined with Chat suggestions, it’s always easy to communicate with event participants via chat. 


Getting started 

Rollout pace 

On the web: 

On mobile: 

Availability 

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers 

Resources

Helping travelers discover new things to do

While travel restrictions continue to vary across the globe, people are still dreaming of places to visit and things to do. Searches for “activities near me” have grown over the past 12 months, with specific queries like “ziplining” growing by 280% and “aquariums” by 115% globally. In response to this increasing interest, and to support the travel industry’s recovery, we’re introducing new ways to discover attractions, tours and activities on Search. 

Now, when people search on Google for attractions like the Tokyo Tower or the Statue of Liberty, they’ll see not just general information about the point of interest, but also booking links for basic admission and other ticket options where available. In the months ahead, we’ll also begin showing information and booking links for experiences in a destination, like wine tasting in Paris or bike tours in California. 

Ticketing options will show the rates each partner offers for their tickets.

Select ‘Tickets’ to see ticketing options available from partner websites.

We’re working with a variety of partners, including online travel agencies and technology providers, to make this information available on Search. If you operate any attractions, tours or activities and want to participate, learn more in the Help Center.

Our goal is to help people find and compare all the best travel options, which is why partners can promote their ticket booking links at zero cost — similar to the free hotel booking links introduced earlier this year.

While it’s still early days, we’ve found that free hotel booking links result in increased engagement for both small and large partners. Hotels working with the booking engine WebHotelier saw more than $4.7M in additional revenue from free booking links this summer. With more than 6,000 active hotels, WebHotelier shared that they were "pleasantly surprised to receive reservations right from Google at no additional cost." This is one of the ways Google can support your business during recovery. 

Introducing a new ad format for things to do

We’re also introducing a new ad format for things to do that will help advertisers drive additional revenue and bookings as recovery continues. With more details like pricing, images and reviews, these new ads on Search will help partners stand out and expand their reach even further. Read more about how to get started in our Help Center.

This shows ads as the first search result and helps our paid partners get to the top of the page.

Ads to promote discovery of things to do and drive bookings.

It’s more important than ever to get the right insights, education and best practices you need as the travel landscape continues to evolve. In July, our team launched Travel Insights with Google in the U.S. to share Google’s travel demand insights with the world. And tomorrow — Thursday, September 23 — we’ll host a webinar to share tips and tricks for using Travel Insights with Google to help you better understand evolving travel demand. 

Across our new product updates and ongoing feature enhancements, we look forward to partnering closely on the travel recovery effort and preparing for the road ahead. 

Building a sustainable future for travel

There’s a lot to consider when it comes to booking travel: price, health and safety, environmental impact and more. Last year, we shared travel tools to help you find health and safety information. Now we want to make it easier for you to find sustainable options while traveling — no matter what you’re doing or where you’re going.  


To make that happen, we’ve created a new team of engineers, designers and researchers focused solely on travel sustainability. Already, this team is working to highlight sustainable options within our travel tools that people use every day. 


Beginning this week, when you search for hotels on Google, you’ll see information about their sustainability efforts. Hotels that are certified for meeting high standards of sustainability from certain independent organizations, like Green Key or EarthCheck, will have an eco-certified badge next to their name. Want to dive into a hotel’s specific sustainability practices? Click on the “About” tab to see a list of what they’re doing — from waste reduction efforts and sustainably sourced materials to energy efficiency and water conservation measures.

Someone searches for a hotel in San Francisco and checks the hotel's sustainability attributes.

We’re working with hotels around the world, including independent hotels and chains such as Hilton and Accor, to gather this information and make it easily accessible. If you’re a hotel owner with eco-certifications or sustainability practices you want to share with travelers, simply sign in to Google My Business to add the attributes to your Business Profile, or contact Google My Business support.


Making travel more sustainable isn’t something we can do alone, which is why we’re also joining the global Travalyst coalition. As part of this group, we’ll help develop a standardized way to calculate carbon emissions for air travel. This free, open impact model will provide an industry framework to estimate emissions for a given flight and share that information with potential travelers. We’ll also contribute to the coalition’s sustainability standards for accommodations and work to align our new hotel features with these broader efforts.


All these updates are part of our commitment over the next decade to invest in technologies that help our partners and people around the world make sustainable choices. Look out for more updates in the months ahead as our travel sustainability team works with experts and partners to create a more sustainable future for all.


Announcing WIT: A Wikipedia-Based Image-Text Dataset

Multimodal visio-linguistic models rely on rich datasets in order to model the relationship between images and text. Traditionally, these datasets have been created by either manually captioning images, or crawling the web and extracting the alt-text as the caption. While the former approach tends to result in higher quality data, the intensive manual annotation process limits the amount of data that can be created. On the other hand, the automated extraction approach can lead to bigger datasets, but these require either heuristics and careful filtering to ensure data quality or scaling-up models to achieve strong performance. An additional shortcoming of existing datasets is the dearth of coverage in non-English languages. This naturally led us to ask: Can one overcome these limitations and create a high-quality, large-sized, multilingual dataset with a variety of content?

Today we introduce the Wikipedia-Based Image Text (WIT) Dataset, a large multimodal dataset created by extracting multiple different text selections associated with an image from Wikipedia articles and Wikimedia image links. This was accompanied by rigorous filtering to retain only high-quality image-text sets. As detailed in “WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning”, presented at SIGIR ‘21, this resulted in a curated set of 37.5 million entity-rich image-text examples with 11.5 million unique images across 108 languages. The WIT dataset is available for download and use under the Creative Commons license. We are also excited to announce that we are hosting a competition with the WIT dataset on Kaggle in collaboration with Wikimedia Research and other external collaborators.

Dataset          Images    Text     Contextual Text    Languages
Flickr30K        32K       158K     -                  < 8
SBU Captions     1M        1M       -                  1
MS-COCO          330K      1.5M     -                  < 4; 7 (test only)
CC-3M            3.3M      3.3M     -                  1
CC-12M           12M       12M      -                  1
WIT              11.5M     37.5M    ~119M              108
WIT’s increased language coverage and larger size relative to previous datasets.

The unique advantages of the WIT dataset are:

  1. Size: WIT is the largest multimodal dataset of image-text examples that is publicly available.
  2. Multilingual: With 108 languages, WIT has 10x or more languages than any other dataset.
  3. Contextual information: Unlike typical multimodal datasets, which have only one caption per image, WIT includes rich page-level and section-level contextual information.
  4. Real world entities: Wikipedia, being a broad knowledge-base, is rich with real world entities that are represented in WIT.
  5. Challenging test set: In our recent work accepted at EMNLP, all state-of-the-art models demonstrated significantly lower performance on WIT vs. traditional evaluation sets (e.g., ~30 point drop in recall).

Generating the Dataset
The main goal of WIT was to create a large dataset without sacrificing on quality or coverage of concepts. Thus, we started by leveraging the largest online encyclopedia available today: Wikipedia.

For an example of the depth of information available, consider the Wikipedia page for Half Dome (Yosemite National Park, CA). As shown below, the article has numerous interesting text captions and relevant contextual information for the image, such as the page title, main page description, and other contextual information and metadata.

Example Wikipedia page with various image-associated text selections and contexts we can extract. From the Wikipedia page for Half Dome: photo by DAVID ILIFF, license CC BY-SA 3.0.

We started by selecting Wikipedia pages that have images, then extracted various image-text associations and surrounding contexts. To further refine the data, we performed a rigorous filtering process to ensure data quality. This included text-based filtering to ensure caption availability, length and quality (e.g., by removing generic default filler text); image-based filtering to ensure each image is a certain size with permissible licensing; and finally, image-and-text-entity–based filtering to ensure suitability for research (e.g., excluding those classified as hate speech). We further randomly sampled image-caption sets for evaluation by human editors, who overwhelmingly agreed that 98% of the samples had good image-caption alignment.
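The text- and image-based heuristics described above can be sketched in a few lines. The thresholds and filler terms below are illustrative assumptions, not the values actually used for WIT:

```python
# Illustrative image-text filter in the spirit of the pipeline described
# above. Thresholds and the filler list are assumed, not WIT's actual values.
GENERIC_FILLER = {"image", "photo", "thumbnail", "untitled"}  # assumed

def keep_example(caption: str, width: int, height: int,
                 min_caption_words: int = 3, min_side_px: int = 100) -> bool:
    caption = caption.strip()
    if len(caption.split()) < min_caption_words:   # too short to be informative
        return False
    if caption.lower() in GENERIC_FILLER:          # generic default filler text
        return False
    if min(width, height) < min_side_px:           # image too small
        return False
    return True
```

A real pipeline would add licensing and entity-based checks, which need external metadata and classifiers and are omitted here.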

Highly Multilingual
With data in 108 languages, WIT is the first large-scale, multilingual, multimodal dataset.

# of Image-Text Sets    Unique Languages    # of Images    Unique Languages
> 1M                    9                   > 1M           6
500K - 1M               10                  500K - 1M      12
100K - 500K             36                  100K - 500K    35
50K - 100K              15                  50K - 100K     17
14K - 50K               38                  13K - 50K      38
WIT: coverage statistics across languages.
Example of an image that is present in more than a dozen Wikipedia pages across >12 languages. From the Wikipedia page for Wolfgang Amadeus Mozart.

The First Contextual Image-Text Dataset
Most multimodal datasets only offer a single text caption (or multiple versions of a similar caption) for the given image. WIT is the first dataset to provide contextual information, which can help researchers model the effect of context on image captions as well as the choice of images.

WIT dataset example showing image-text data and additional contextual information.

In particular, key textual fields of WIT that may be useful for research include:

  • Text captions: WIT offers three different kinds of image captions: the (potentially context-influenced) “Reference description”, the (likely context-independent) “Attribution description”, and the “Alt-text description”.
  • Contextual information: This includes the page title, page description, URL and local context about the Wikipedia section including the section title and text.

WIT has broad coverage across these different fields, as shown below.

Image-Text Fields of WIT     Train Val Test Total / Unique
Rows / Tuples   37.1M     261.8K     210.7K   37.6M
Unique Images 11.4M 58K 57K 11.5M
Reference Descriptions 16.9M 150K 104K   17.2M / 16.7M  
Attribution Descriptions 34.8M 193K 200K 35.2M / 10.9M
Alt-Text 5.3M 29K 29K 5.4M / 5.3M
Context Texts - - - 119.8M
Key fields of WIT include both text captions and contextual information.

A High-Quality Training Set and a Challenging Evaluation Benchmark
The broad coverage of diverse concepts in Wikipedia means that the WIT evaluation sets serve as a challenging benchmark, even for state-of-the-art models. We found that for image-text retrieval, mean recall scores for traditional datasets were in the 80s, whereas for the WIT test set they were in the 40s for well-resourced languages and in the 30s for under-resourced languages. We hope this in turn can help researchers to build stronger, more robust models.
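For reference, mean recall in image-text retrieval is commonly the average of recall@K over a few cutoffs (often K = 1, 5, 10; the exact cutoffs used in the paper are an assumption here). A minimal NumPy sketch, treating the diagonal of a similarity matrix as the ground-truth matches:

```python
import numpy as np

def recall_at_k(sim, k):
    """sim[i, j]: similarity of image i and caption j; truth is the diagonal."""
    ranks = (-sim).argsort(axis=1)                 # captions sorted best-first
    truth = np.arange(len(sim))[:, None]
    return (ranks[:, :k] == truth).any(axis=1).mean()

def mean_recall(sim, ks=(1, 5, 10)):
    """Average recall@K over the chosen cutoffs."""
    return float(np.mean([recall_at_k(sim, k) for k in ks]))
```

A "mean recall in the 40s" then means this average sits around 0.4 on the WIT test set.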

WIT Dataset and Competition with Wikimedia and Kaggle
Additionally, we are happy to announce that we are partnering with Wikimedia Research and a few external collaborators to organize a competition with the WIT test set, hosted on Kaggle. The competition is an image-text retrieval task: given a set of images and text captions, the task is to retrieve the appropriate caption(s) for each image.

To enable research in this area, Wikipedia has kindly made available images at 300-pixel resolution and ResNet-50–based image embeddings for most of the training and test datasets. Kaggle will host all of this image data in addition to the WIT dataset itself, and will provide Colab notebooks. Further, competitors will have access to a discussion forum on Kaggle to share code and collaborate. This enables anyone interested in multimodality to get started and run experiments easily. We are excited to see what will result from the WIT dataset and the Wikipedia images on the Kaggle platform.
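Given the provided image embeddings, and assuming you also embed the candidate captions with some text encoder so both live in a shared space, a simple retrieval baseline just ranks captions by cosine similarity. A hypothetical sketch, not the competition’s prescribed approach:

```python
import numpy as np

def retrieve(image_emb, caption_emb, top_k=5):
    """Return the indices of the top_k most similar captions per image."""
    # L2-normalize so the dot product equals cosine similarity.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    cap = caption_emb / np.linalg.norm(caption_emb, axis=1, keepdims=True)
    sim = img @ cap.T                         # (n_images, n_captions)
    return np.argsort(-sim, axis=1)[:, :top_k]
```

The raw ResNet-50 embeddings are visual-only; mapping captions into the same space (e.g., by training a dual encoder) is the actual modeling problem the competition poses.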

Conclusion
We believe that the WIT dataset will aid researchers in building better multimodal multilingual models and in identifying better learning and representation techniques, ultimately leading to improved Machine Learning models in real-world tasks over visio-linguistic data. For any questions, please contact [email protected]. We would love to hear about how you are using the WIT dataset.

Acknowledgements
We would like to thank our co-authors in Google Research: Jiecao Chen, Michael Bendersky and Marc Najork. We thank Beer Changpinyo, Corinna Cortes, Joshua Gang, Chao Jia, Ashwin Kakarla, Mike Lee, Zhen Li, Piyush Sharma, Radu Soricut, Ashish Vaswani, Yinfei Yang, and our reviewers for their insightful feedback and comments.

We thank Miriam Redi and Leila Zia from Wikimedia Research for collaborating with us on the competition and providing image pixels and image embedding data. We thank Addison Howard and Walter Reade for helping us host this competition in Kaggle. We also thank Diane Larlus (Naver Labs Europe (NLE)), Yannis Kalantidis (NLE), Stéphane Clinchant (NLE), Tiziano Piccardi Ph.D. student at EPFL, Lucie-Aimée Kaffee PhD student at University of Southampton and Yacine Jernite (Hugging Face) for their valuable contribution towards the competition.

Source: Google AI Blog


Chrome for Android Update

Hi, everyone! We've just released Chrome 94 (94.0.4606.50) for Android: it'll become available on Google Play over the next few days.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Perform refined email searches with new rich filters in Gmail on web

Quick launch summary 

When searching in Gmail on web, enhanced search chips will provide richer drop-down lists with more options that help you apply additional filters. For example, when you click on the “From” chip, you’ll now be able to quickly type a name, choose from a list of suggested senders, or search for emails from multiple senders. Available now for all users, search chips make it quicker and easier to find the specific email or information you’re looking for. 


A richer drop-down list in search

Getting started 

  • Admins: There is no admin control for this feature. 
  • End users: There is no end user setting for this feature; chips will appear automatically when you perform a search in Gmail on the web. Use the Help Center to learn more about search in Gmail. 

Rollout pace 

  • This feature is available now for all users. 

Availability 

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers. Also available to users with personal Google Accounts.

Learn Kubernetes with Google: Join us live on October 6!

 

Graphic describing the Multi-cluster Services API functionalities

Kubernetes hasn’t stopped growing since it was released by Google as an open source project back in June 2014: from July 7, 2020 to a year later in 2021, there were 2,284 new contributors to the project [1]. And that’s not all: in 2020 alone, the Kubernetes project had 35 stable graduations [2]. These are 35 new features that are ready for production use in a Kubernetes environment. Looking at the CNCF Survey 2020, use of Kubernetes has increased to 83%, up from 78% in 2019. With this many new people joining the community, and the project gaining so much complexity, how can we make sure that Kubernetes remains accessible to everyone, including newcomers?

This is the question that inspired the creation of Learn Kubernetes with Google, a content program where we develop resources that explain how to make Kubernetes work best for you. At the Google Open Source Programs Office, we believe that increasing access for everyone starts by democratizing knowledge. This is why we started with a series of short videos that focus on specific Kubernetes topics, like the Gateway API, Migrating from Dockershim to Containerd, the Horizontal Pod Autoscaler, and many more topics!

Join us live

On October 6, 2021, we are launching a series of live events where you can interact live with Kubernetes experts from across the industry and ask questions—register now and join for free! “Think beyond the cluster: Multi-cluster support on Kubernetes” is a live panel that brings together the following experts:
  • Laura Lorenz - Software Engineer (Google) / Member of SIG Multicluster in the Kubernetes project
  • Tim Hockin - Software Engineer (Google) / Co-Chair of SIG Network in the Kubernetes project
  • Jeremy Olmsted-Thompson - Sr. Staff Software Engineer (Google) / Co-Chair of SIG Multicluster in the Kubernetes project
  • Ricardo Rocha - Computing Engineer (CERN) / TOC Member at the CNCF
  • Paul Morie - Software Engineer (Apple) / Co-Chair of SIG Multicluster in the Kubernetes project
Why is Multi-cluster support in Kubernetes important? Kubernetes has brought a unified method of managing applications and their infrastructure. Engineering your application to be a global service requires that you start thinking beyond a single cluster; yet, there are many challenges when deploying multiple clusters at a global scale. Multi-cluster has many advantages; for example, it lets you minimize latency for the people consuming your application.
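As a concrete illustration of the upstream efforts the panel will cover, the Multi-cluster Services API lets you export a Service from one cluster so that sibling clusters in the same fleet can consume it. A minimal sketch (the name and namespace are placeholders):

```yaml
# Export the existing Service `my-svc` in `my-ns` to the fleet; an MCS
# controller then creates a matching ServiceImport in the other clusters.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-svc       # must match the Service being exported
  namespace: my-ns   # must match the Service's namespace
```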

In this panel, we will review the history behind multi-cluster, why you should use it, how companies are deploying multi-cluster, and some of the efforts in upstream Kubernetes that are enabling it today. Check out the “Resources” tab on the event page to learn more about the Kubernetes MCS API, and join us on October 6!

By María Cruz, Program Manager – Google Open Source Programs Office

[1] According to devstats

[2] Kubernetes Community Annual Report 2020

Stable Channel Update for Desktop

The Chrome team is delighted to announce the promotion of Chrome 94 to the stable channel for Windows, Mac and Linux. Chrome 94 is also promoted to our new extended stable channel for Windows and Mac. This will roll out over the coming days/weeks.



Chrome 94.0.4606.54 contains a number of fixes and improvements -- a list of changes is available in the log. Watch out for upcoming Chrome and Chromium blog posts about new features and big efforts delivered in 94.


Security Fixes and Rewards

Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.


This update includes 19 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.


[$15000][1243117] High CVE-2021-37956: Use after free in Offline use. Reported by Huyna at Viettel Cyber Security on 2021-08-24

[$7500][1242269] High CVE-2021-37957: Use after free in WebGPU. Reported by Looben Yang on 2021-08-23

[$3000][1223290] High CVE-2021-37958: Inappropriate implementation in Navigation. Reported by James Lee (@Windowsrcer) on 2021-06-24

[$1000][1229625] High CVE-2021-37959: Use after free in Task Manager. Reported by raven (@raid_akame) on 2021-07-15

[$TBD][1247196] High CVE-2021-37960: Inappropriate implementation in Blink graphics. Reported by Atte Kettunen of OUSPG on 2021-09-07

[$10000][1228557] Medium CVE-2021-37961: Use after free in Tab Strip. Reported by Khalil Zhani on 2021-07-13

[$10000][1231933] Medium CVE-2021-37962: Use after free in Performance Manager. Reported by Sri on 2021-07-22

[$3000][1199865] Medium CVE-2021-37963: Side-channel information leakage in DevTools. Reported by Daniel Genkin and Ayush Agarwal, University of Michigan, Eyal Ronen and Shaked Yehezkel, Tel Aviv University, Sioli O’Connell, University of Adelaide, and Jason Kim, Georgia Institute of Technology on 2021-04-16

[$3000][1203612] Medium CVE-2021-37964: Inappropriate implementation in ChromeOS Networking. Reported by Hugo Hue and Sze Yiu Chau of the Chinese University of Hong Kong on 2021-04-28

[$3000][1239709] Medium CVE-2021-37965: Inappropriate implementation in Background Fetch API. Reported by Maurice Dauer on 2021-08-13

[$TBD][1238944] Medium CVE-2021-37966: Inappropriate implementation in Compositing. Reported by Mohit Raj (shadow2639) on 2021-08-11

[$TBD][1243622] Medium CVE-2021-37967: Inappropriate implementation in Background Fetch API. Reported by SorryMybad (@S0rryMybad) of Kunlun Lab on 2021-08-26

[$TBD][1245053] Medium CVE-2021-37968: Inappropriate implementation in Background Fetch API. Reported by Maurice Dauer on 2021-08-30

[$TBD][1245879] Medium CVE-2021-37969: Inappropriate implementation in Google Updater. Reported by Abdelhamid Naceri (halov) on 2021-09-02

[$TBD][1248030] Medium CVE-2021-37970: Use after free in File System API. Reported by SorryMybad (@S0rryMybad) of Kunlun Lab on 2021-09-09

[$1000][1219354] Low CVE-2021-37971: Incorrect security UI in Web Browser UI. Reported by Rayyan Bijoora on 2021-06-13

[$TBD][1234259] Low CVE-2021-37972: Out of bounds read in libjpeg-turbo. Reported by Xu Hanyu and Lu Yutao from Panguite-Forensics-Lab of Qianxin on 2021-07-29


We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.

As usual, our ongoing internal security work was responsible for a wide range of fixes:

  • [1251653] Various fixes from internal audits, fuzzing and other initiatives


Many of our security bugs are detected using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL.


Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.




Srinivas Sista
Google Chrome