Monthly Archives: June 2021

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 92 (92.0.4515.80) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Admins can view more information about apps in the Google Workspace Marketplace before deploying to their users

What’s changing

In the Google Workspace Marketplace, you’ll now see more information about apps available in the Marketplace. Specifically, you’ll see the following information:

  • Overview: Information about the app developer, including quick links to their site, privacy policy, and terms of service.
  • Permissions: The permissions the app requests, so you can assess whether its data access complies with your organization’s policies.
  • Reviews: Reviews and replies provided by other users who have installed the app.

Who’s impacted

Admins and end users

Why it’s important

Before installing a Google Workspace Marketplace app for users in their domain, admins can review important information about the application, such as developer information, the permissions the application requires, and more. This additional information will help you make a more informed decision when deploying apps to your users.

Availability

  • Available to Google Workspace Business Starter, Business Standard, Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, and Nonprofits, as well as G Suite Basic and Business customers
  • Not available to Google Workspace Essentials and Frontline customers


Beta Channel Update for Desktop

The Beta channel has been updated to 92.0.4515.81 for Windows and 92.0.4515.80 for Linux and Mac.

A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Srinivas Sista

Updating Bulk Upload in Google Ads Scripts

We have just launched a new backend for bulk uploads in Google Ads Scripts. Starting today, you can opt into the new experience by setting the useLegacyUploads flag to false when creating a new bulk upload:

var upload = AdsApp.bulkUploads().newCsvUpload(columns,
    {useLegacyUploads: false});

For now, if you don't specify a value, the default will be true, which results in no change in behavior. Starting on July 12, 2021, we will change the default to false, but you can still opt out for a time by explicitly passing useLegacyUploads as true. On February 1, 2022, we will disable the use of legacy uploads altogether.
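During the transition window, a script that still depends on the old backend can pin legacy behavior explicitly rather than relying on the default. A minimal sketch of a complete upload, assuming the standard Google Ads Scripts entry point; the column names and row values here are illustrative, not taken from the post:

```javascript
// Illustrative Google Ads Scripts snippet: creating a bulk upload while
// explicitly choosing the backend. Column names and values are made up.
function main() {
  var columns = ['Campaign', 'Budget'];

  // Passing useLegacyUploads: true keeps the old backend (available until
  // February 1, 2022); pass false, or omit the option after July 12, 2021,
  // to use the new UI-compatible backend.
  var upload = AdsApp.bulkUploads().newCsvUpload(columns,
      {useLegacyUploads: true});

  // Append one row per entity change, keyed by column name, then apply.
  upload.append({'Campaign': 'Brand campaign', 'Budget': '10.00'});
  upload.apply();
}
```

Because the option is read at creation time, auditing your scripts for newCsvUpload calls is enough to know which backend each upload will use once the default flips.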

We have tried to make the changes here as backwards-compatible as possible, so in most cases your existing uploads should continue to work fine on the new backend. However, creating video ads that specify a media ID will stop working on legacy uploads on or after July 12, when we change the default, so if those fields are important to you, make sure you do not explicitly opt back into legacy uploads.

The new backend is exactly compatible with UI-based bulk uploads, an improvement over the existing bulk upload system, where many columns have different names. This should make it significantly easier to set up bulk uploads by basing the format on the UI. Additionally, a few new features are available, including label manipulation and updating manager accounts.

If you have any questions, please leave a post on our forum so that we can help.

It’s a hot one: How heat waves have trended over time

I live in the Pacific Northwest, a part of North America known for cooler weather and notoriously gray skies. So imagine my surprise when temperatures hit 116 degrees Fahrenheit over the past few days. And did I mention that, like many other PNWers, I don’t have air conditioning? Every morning lately, my Google Assistant delivers the slightly worrying news that the temperature is ticking up.

The heat wave is all anyone here, and in other affected areas, can talk about. U.S. searches about heat waves and sunscreen reached all-time highs this month, and “air conditioner installation service” spiked more than 2,150% over the same period. (To little surprise, search interest in air conditioning peaks every summer, but you can see that it has been rising every year.)

Graph showing search interest in "air conditioning."

Since many of us are searching for this kind of information, I decided to take a trip down Ngrams lane to see how exactly we’ve talked about (er, I suppose “written about”) extreme summer weather over time. As a quick refresher, Ngrams was launched in 2009 by the Google Books team. The tool shows how books and other pieces of writing have used certain words or phrases over time, so you can see how popular (or unpopular) they’ve been throughout the years. 

I started with the classic “heat wave,” which has steadily risen over time. I also tried “a hot one,” and given how that phrase could apply to so many different use cases (outdoor temperatures but also meals, items, etc.), it’s been relatively steady. 

Graph showing ngrams results for "heat wave" and "a hot one."

I decided to try “scorcher” and the more specific “summer scorcher.” 

Graph showing ngrams results for "scorcher."
Graph showing ngrams results for "summer scorcher."

The semi-consistent dips in “summer scorcher” suggest that the phrase was likely only used in the summer. But what about that huge peak in just plain “scorcher” in 1896? Below the graph, there’s an option to choose the time period from 1892 to 1897 and see how the word was used in books that have been uploaded to Google Books’ vast digital library. To my surprise, “scorcher” at this point in time didn’t refer to a torturously hot day: In many cases, it was used to talk about someone who raced bicycles.

Google Books search result showing a use of the word 'scorcher' from 1897, where it describes a person riding a bike.

So now when you hear someone say “today’s going to be a scorcher,” I hope you’ll also tell them about the word’s past life. As for me, I’m heading back to the search bar to learn more about another trending search that really hits home: “how to stay cool without ac.”

An update on our progress in responsible AI innovation

Over the past year, responsibly developed AI has transformed health screenings, supported fact-checking to battle misinformation and save lives, predicted COVID-19 cases to support public health, and protected wildlife after bushfires. Developing AI in a way that gets it right for everyone requires openness, transparency, and a clear focus on understanding the societal implications. That is why we were among the first companies to develop and publish AI Principles and why, each year, we share updates on our progress.

Building on previous AI Principles updates in 2018, 2019, and 2020, today we’re providing our latest overview of what we’ve learned and how we’re applying these learnings.

Internal Education

In the last year, to ensure our teams have clarity from day one, we’ve added an introduction to our AI Principles for engineers and incoming hires in technical roles. The course presents each of the Principles as well as the applications we will not pursue.

Integrating our Principles into the work we do with enterprise customers is key, so we’ve continued to make our AI Principles in Practice training mandatory for customer-facing Cloud employees. A version of this training is available to all Googlers.

There is no single way to apply the AI Principles to specific features and product development. Training must consider not only the technology and data, but also where and how AI is used. To offer a more comprehensive approach to implementing the AI Principles, we’ve been developing opportunities for Googlers to share their points of view on the responsible development of future technologies, such as the AI Principles Ethics Fellowship for Google’s Employee Resource Groups. Fellows receive AI Principles training and craft hypothetical case studies to inform how Google prioritizes socially beneficial applications. In this inaugural year, 27 fellows, selected from 191 applicants around the world, wrote and presented case studies on topics such as genome datasets and a COVID-19 content moderation workflow.

Other programs include a bi-weekly Responsible AI Tech Talk Series featuring external experts, such as the Brookings Institution’s Dr. Nicol Turner Lee presenting on detecting and mitigating algorithmic bias.

Tools and Research

To bring together multiple teams working on technical tools and research, this year we formed the Responsible AI and Human-Centered Technology organization. The basic and applied researchers in the organization are devoted to developing technology and best practices for the technical realization of the AI Principles guidance.

As discussed in our December 2020 End-of-Year report, we regularly release these tools to the public. Currently, researchers are developing Know Your Data (in beta) to help developers understand datasets with the goal of improving data quality, helping to mitigate fairness and bias issues.

Image of Know Your Data, a Responsible AI tool in beta

Know Your Data, a Responsible AI tool in beta

Product teams use these tools to evaluate their work’s alignment with the AI Principles. For example, the Portrait Light feature available in both Pixel’s Camera and Google Photos uses multiple machine learning components to instantly add realistic synthetic lighting to portraits. Using computational methods to achieve this effect, however, raised several responsible innovation challenges, including potentially reinforcing unfair bias (AI Principle #2) despite the goal of building a feature that works for all users. So the Portrait Light team generated a training dataset containing millions of photos based on an original set of photos of different people in a diversity of lighting environments, with their explicit consent. The engineering team used various Google Responsible AI tools to test proactively whether the ML models used in Portrait Light performed equitably across different audiences.

Our ongoing technical research related to responsible innovation in AI in the last 12 months has led to more than 200 research papers and articles that address AI Principles-related topics. These include exploring and mitigating data cascades; creating the first model-agnostic framework for partially local federated learning suitable for training and inference at scale; and analyzing the energy- and carbon-costs of training six recent ML models to reduce the carbon footprint of training an ML system by up to 99.9%.

Operationalizing the Principles

To help our teams put the AI Principles into practice, we deploy our decision-making process for reviewing new custom AI projects in development — from chatbots to newer fields such as affective technologies. These reviews are led by a multidisciplinary Responsible Innovation team, which draws on expertise in disciplines including trust and safety, human rights, public policy, law, sustainability, and product management. The team also advises product areas on specific issues related to serving enterprise customers. Any Googler, at any level, is encouraged to apply for an AI Principles review of a project or planned product or service.

Teams can also request other Responsible Innovation services, such as informal consultations or product fairness testing with the Product Fairness (ProFair) team. ProFair tests products from the user perspective to investigate known issues and find new ones, similar to how an academic researcher would go about identifying fairness issues.

Our Google Cloud, Image Search, Next Billion Users and YouTube product teams have engaged with ProFair to test potential new projects for fairness. Consultations include collaborative testing scenarios, focus groups, and adversarial testing of ML models to address improving equity in data labels, fairness in language models, and bias in predictive text, among other issues. Recently, the ProFair team spent nine months consulting with Google researchers on creating an object recognition dataset for physical landmarks (such as buildings), developing criteria for how to choose which classes to recognize and how to determine the amount of training data per class in ways that would assign a fairer relevance score to each class.

Reviewers weigh the nature of the social benefit involved and whether it substantially exceeds potential challenges. For example, in the past year, reviewers decided not to publicly release ML models that can create photo-realistic synthetic faces made with generative adversarial networks (GANs), because of the risk of potential misuse by malicious actors to create “deepfakes” for misinformation purposes.

As another example, a research team requested a review of a new ML training dataset for computer vision fairness techniques that offered more specific attributes about people, such as perceived gender and age-range. The team worked with Open Images, an open data project containing ~9 million images spanning thousands of object categories and bounding box annotations for 600 classes. Reviewers weighed the risk of labeling the data with the sensitive labels of perceived gender presentation and age-range against the societal benefit of these labels for fairness analysis and bias mitigation. Given these risks, reviewers required the creation of a data card explaining the details and limitations of the project. We released the MIAP (More Inclusive Annotations for People) dataset in the Open Images Extended collection. The collection contains more complete bounding box annotations for the person class hierarchy in 100K images containing people. Each annotation is also labeled with fairness-related attributes. MIAP was accepted and presented at the 2021 Artificial Intelligence, Ethics and Society conference.

External Engagement

We remain committed to contributing to emerging international principles for responsible AI innovation. For example, we submitted a response to the European Commission consultation on the inception impact assessment on ethical and legal requirements for AI and feedback on NITI Aayog’s working document on Responsible Use of AI to guide national principles for India. We also supported the Singaporean government’s Guide to Job Redesign in the Age of AI, which outlines opportunities to optimize AI benefits by redesigning jobs and reskilling employees.

Our efforts to engage global audiences, from students to policymakers, center on educational programs and discussions to offer helpful and practical ML education, including:

  • A workshop on Federated Learning and Analytics, making all research talks and a TensorFlow Federated tutorial publicly available.
  • Machine Learning for Policy Leaders (ML4PL), a 2-hour virtual workshop on the basics of ML. To date, we’ve expanded this globally, reaching more than 350 policymakers across the EU, LatAm, APAC, and US.
  • A workshop co-hosted with the Indian Ministry of Electronics and Information Technology on the Responsible Use of AI for Social Empowerment, exploring the potential of AI adoption in the government to address COVID-19, agricultural and environmental crises.

To support these workshops and events with actionable and equitable programming designed for long-term collaboration, over the past year we’ve helped launch:

  • AI for Social Good workshops, bringing together NGOs applying AI to tough challenges in their communities with academic experts and Google researchers to encourage collaborative AI solutions. So far we’ve supported more than 30 projects in Asia Pacific and Sub-Saharan Africa with expertise, funding and Cloud Credits.
  • Two collaborations with the U.S. National Science Foundation: one to support the National AI Research Institute for Human-AI Interaction and Collaboration with $5 million in funding, along with AI expertise, research collaborations and Cloud support; another to join other industry partners and federal agencies as part of a combined $40 million investment in academic research for Resilient and Intelligent Next-Generation (NextG) Systems, in which Google will offer expertise, research collaborations, infrastructure and in-kind support to researchers.
  • Quarterly Equitable AI Research Roundtables (EARR), focused on the potential downstream harms of AI with experts from the Othering and Belonging Institute at UC Berkeley, PolicyLink, and Emory University School of Law.
  • MLCommons, a non-profit, which will administer MLPerf, a suite of benchmarks for Google and industry peers.

Committed to sharing tools and discoveries

In the three years since Google released our AI Principles, we’ve worked to integrate this ethical charter across our work — from the development of advanced technologies to business processes. As we learn and engage with people and organizations across society we’re committed to sharing tools, processes and discoveries with the global community. You’ll find many of these in our recently updated People + AI Research Guidebook and on the Google AI responsibilities site, which we update quarterly with case studies and other resources. 

Why this Google engineer is teaching students to code

San Francisco-based Googler Ernest Holmes first started coding when he was in high school. “From then on, I was hooked and knew I wanted to become an engineer,” he says. By the time he was a freshman at Morehouse College, Ernest was participating in the Google in Residence (GIR) program. That program introduced him to the Google internship program, which he took part in for three consecutive summers before joining us as a full-time engineer.

Early exposure to coding helped set Ernest up for success, but some of his classmates weren’t as lucky. During his first computer science course in college, he realized many of the students were only then getting their first coding experience.

“There were some students who, like me, had their interest piqued early on, while others had never coded before in their lives, and they just wanted to take a computer science class to figure it out,” Ernest says. “For that second group, it was like they were starting at a disadvantage because they’d never been exposed to the concepts, and they were entering into college life at the same time. That can be overwhelming.”

Ernest started tutoring sessions for his classmates and quickly learned that if they’d been exposed even just a few years earlier, it could have changed their paths. Inspired by this idea, in 2019 — at the same time Ernest began his career as a full-time engineer at Google — he founded the nonprofit CodeHouse to fulfill his personal goal of bringing the joy of coding to the next generation.

“CodeHouse is a nonprofit that partners with schools across the U.S. to introduce students to careers in tech through exposure to large tech companies, hands-on training and financial assistance,” Ernest says.

A group of people stand together on an orange rug.

The CodeHouse team.

CodeHouse brings software engineers, product managers and designers from Google and other tech companies, as well as representatives from colleges and universities around the U.S., to meet with students and share their career stories.

“Throughout the year, we host Tech Exposure Days to make learning about careers and opportunities in tech a fun and engaging experience,” Ernest says. “We want students to leave with more knowledge about what’s out there in the tech industry as well as connect with role models who look like them in careers they hadn’t even considered.”

To date, CodeHouse has worked with more than 2,500 students through its events. With support from fellow Googlers Michelle Asamoah and William Bell, the CodeHouse team continues to grow and so does its mission.

“We started CodeHouse by hosting events to help expose students to tech while they’re in high school, but we want to be a long-term partner for them on their journey through college and into their professional careers,” Ernest says. “To do this, we kicked off our CodeHouse Scholar initiative last year where we’re offering $20,000 scholarships, mentorship, and a technical skills training session for incoming freshmen going to Historically Black Colleges and Universities (HBCUs) and majoring in computer science.”

In the first cohort of scholars, CodeHouse identified 30 students from across the U.S. to be sponsored and receive scholarships. These students will participate in a technical skills workshop that includes an introduction to basic coding languages like Python and they’ll learn about different careers in computer science. Ernest and the CodeHouse team hope to scale this program to additional career fields in tech so students can get even more exposure and skills training before college.

“I fell in love with computer science,” Ernest says. “As an engineer at Google, I know that I can create anything that I can imagine. I want to introduce as many people to that feeling and this field as possible.”

To learn more about CodeHouse, visit their website. You can also follow them on Twitter, Instagram or their Facebook page.

Google updates Passes API to store COVID vaccination and testing information on Android devices

Posted by Irfan Faizullabhoy

Google has updated its Passes API to enable a simple and secure way to store and access COVID vaccination and test cards on Android devices. Starting today, developers from healthcare organizations, government agencies and organizations authorized by public health authorities to distribute COVID vaccines and/or tests will have access to these APIs to create a digital version of COVID vaccination or test information. This will roll out initially in the United States followed by other countries.

Image of three smart phones side by side showing Covid vaccination cards

Example COVID Cards from Healthvana, a company serving Los Angeles County

Once a user stores the digital version of the COVID Card to their device, they will be able to access it via a shortcut on their device home screen, even when they are offline or in areas that have weak internet service. To use this feature, the device needs to run Android 5 or later and be Play Protect certified. Installing the Google Pay app is not a requirement to access COVID Cards.

The COVID Card has been designed with privacy and security at its core.

  • Storing information: The user’s COVID vaccination and test information is stored on their Android device. If a user wants to access this information on multiple devices, the user will need to manually store it on each device. Google does not retain a copy of the user’s COVID vaccination or test information.
  • Sharing information: Users can choose to show their COVID Card to others. The information in the user’s COVID Card is not shared by Google with its various services or third parties and it is not used for targeting ads.
  • Securing information: A lock screen is required in order to store a COVID Card on a device. This is for added security and to protect the user’s personal information. When a user wants to access their COVID Card, they will be asked for the password, pin or biometric method set up for their Android device.

If you are a qualified provider, please sign up to share your interest here. And, for more information about COVID cards and their privacy and security features, please see the help center.

What do you think?

Do you have any questions? Let us know in the comments below or tweet using #AskGooglePayDevs and follow us @GooglePayDevs.

Nakisha Wynn helps other moms build profitable blogs

Nakisha Wynn was working at a financial services firm when life took an unexpected turn. She thought about starting a blog aimed at other moms, particularly single mothers. “I didn’t see anybody who looked like me doing the blogging thing,” Nakisha notes. “It was either these fabulous girls showing off their fashions or huge bloggers I couldn’t relate to, so I birthed my blog from that.”

In 2016, Nakisha launched her blog, where she writes about single parenting, personal finance, working at home, family travel, frugal living and self-care.

Nakisha Wynn blogs about single parenting, personal finance, working at home, family travel, frugal living and self-care.

Today, her following extends to social media, including YouTube, Instagram and Facebook. She has developed brand partnerships, participates in affiliate programs and offers her professional services as a content creator, coach and speaker.

Nakisha describes how “hard work, persistence and dedication” led to her entrepreneurial success as a web creator, blogger and YouTuber. 

How would you start a blog from scratch?

The very first thing I would do is connect with current bloggers that I look up to. Comment on their videos or blog posts, follow them on Instagram and put yourself into that person's community. Study them to see what they're doing and how they're doing it, and then just go for it! I would do as much as I can, and then I would get mentorship to take me to the next level.

photo of Nakisha Wynn

Nakisha draws on her finance background to help other moms become successful bloggers.

Could you break down your website sections?

My ”Mom Life” section is where my roots are. That's what really caught the attention of everyone, because I was sharing a very unique story of being a single mom who decided to jump up and leave her corporate job and get out here and just wing it. Motherhood is the foundation of my business. I am passionate about moms pursuing their real-life dreams and going for it. 

I have “Family Finances” because my background is banking. And I have overcome some financial obstacles in my life, dealing with credit issues and learning how to budget and manage finances by myself being single. So I share things around saving money and how to budget and making money on the side, side hustles. One of the biggest things that I learned, which helped me to truly get out here and work for myself, is multiple streams of income is huge.

a photo of Nakisha's website

Mom Life is a core topic on Nakisha’s website and blog.

How do you come up with ideas for your blog? 

I'm often live on YouTube or Instagram, and I use what my audience is asking me. I think it's one of the most amazing ways to come up with content ideas. Because if you are having that ongoing conversation with your audience, they will tell you exactly what they want to see. 

screenshot of Nakisha's YouTube channel

Nakisha vlogs about blogging on her YouTube channel.

Do you keep an editorial calendar? 

For sure! There’s no way I could stay on task and create as much content as I do without one. I posted a video on How to Create a Content Calendar. It's essential — especially if you have other businesses or if you're working other jobs — to stay on task to make sure you're staying up on trends. 

Is blogging still worth it?

Absolutely! I think it's important for people to have a place to go to, to see what you're all about. People will watch you for months before they even reach out. If I only have your social media to go to, I don't really know what your story is. I want to go to your blog. I want to see your website’s “About” section. You need a home and a place to house your information, and to [show] who you are about so that when people are ready to pay you, they can have somewhere to come and knock on the door to give you the check.

You put out so much content. Are you a one-woman show?

This is a one-woman show! Listen, it can be done. It takes hard work, persistence and dedication. You got to really want this, and I really want this. For me, it is such an absolute pleasure and privilege to be able to do something I absolutely love every single day.