Monthly Archives: May 2020

Celebrating Africa Day, virtually

An annual celebration of African unity, Africa Day commemorates the founding of the African Union. Now, in the midst of the COVID-19 crisis, thinking about a “Borderless Africa: Celebrating Commonalities” has a special resonance.



We’re lending our support to the (virtual) festivities, as well as to fundraising efforts in support of aid organizations.

We are kicking things off with the Africa Day Benefit Concert At Home, in partnership with Viacom and MTV Base. The two-hour special on YouTube, hosted by Idris Elba, will bring together multiple African artists: Angelique Kidjo, Burna Boy, Sauti Sol, Sho Madjozi, Diamond Platnumz and more. The concert showcases the continent’s music during a month-long campaign to support UNICEF and the UN World Food Program in providing food assistance to Africans affected by COVID-19.

If the concert leaves you wanting more, head over to YouTube Music and check out our Africa Day playlist: between the Afrobeat, a bit of house and African soul, you’ll get a musical tour of the continent from your home. Along with contemporary artists like Eddy Kenzo, Lady Zamar, Sauti Sol and Wizkid, it also draws from the sounds of great African voices past, such as Fela Kuti, Hugh Masekela and Johnny Clegg.


For more of our shared cultural heritage and creativity, visit the online exhibit 11 Ways to Celebrate Africa Day at Google Arts & Culture. Discover what unites us across art, food, music, fashion, people, landmarks and more. Peruse the collections of 26 cultural institutions, or use search tools like color and time to explore over 15,000 photographs. Get the party started with a hearty bowl of jollof rice, visit the Great Pyramid of Giza, feel the beat of the African drum, and zoom into the intricate beadwork of a Ndebele cape. Or turn your home into a gallery: with Art Projector you can virtually place the artwork of Mary Ogembo in your living room—or transport your living room to Tanzania’s Gereza Fort through augmented reality.


Happy Africa Day from all of us at Google Africa.


Charles Murito, Director, Government Affairs and Public Policy, SSA


Dev Channel Update for Chrome OS

The Dev channel is being updated to 84.0.4147.10 (Platform version: 13099.7.0) for most Chrome OS devices. This build contains a number of bug fixes and security updates. Systems will be receiving updates over the next several days.




If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using 'Report an issue...' in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Marina Kazatcker
Google Chrome OS

Dev Channel Update for Desktop

The Dev channel has been updated to 84.0.4147.13 for Windows, Mac, and Linux platforms.
A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Prudhvikumar Bommana
Google Chrome

Open-Sourcing BiT: Exploring Large-Scale Pre-training for Computer Vision



A common refrain for computer vision researchers is that modern deep neural networks are always hungry for more labeled data — current state-of-the-art CNNs need to be trained on datasets such as OpenImages or Places, which consist of over 1M labeled images. However, for many applications, collecting this amount of labeled data can be prohibitive for the average practitioner.

A common approach to mitigate the lack of labeled data for computer vision tasks is to use models that have been pre-trained on generic data (e.g., ImageNet). The idea is that visual features learned on the generic data can be re-used for the task of interest. Even though this pre-training works reasonably well in practice, it still falls short of the ability to both quickly grasp new concepts and understand them in different contexts. In a similar spirit to how BERT and T5 have shown advances in the language domain, we believe that large-scale pre-training can advance the performance of computer vision models.

In “Big Transfer (BiT): General Visual Representation Learning” we devise an approach for effective pre-training of general features using image datasets at a scale beyond the de facto standard (ILSVRC-2012). In particular, we highlight the importance of appropriately choosing normalization layers and of scaling the architecture capacity as the amount of pre-training data increases. Our approach exhibits unprecedented performance when adapting to a wide range of new visual tasks, including the few-shot recognition setting and the recently introduced “real-world” ObjectNet benchmark. We are excited to share the best BiT models pre-trained on public datasets, along with code in TF2, Jax, and PyTorch. This will allow anyone to reach state-of-the-art performance on their task of interest, even with just a handful of labeled images per class.

Pre-training
In order to investigate the effect of data scale, we revisit common design choices of the pre-training setup (such as normalizations of activations and weights, model width/depth and training schedules) using three datasets: ILSVRC-2012 (1.28M images with 1000 classes), ImageNet-21k (14M images with ~21k classes) and JFT (300M images with ~18k classes). Importantly, with these datasets we concentrate on the previously underexplored large data regime.

We first investigate the interplay between dataset size and model capacity. To do this we train classical ResNet architectures, which perform well while being simple and reproducible. We train variants from the standard 50-layer-deep “R50x1” up to the 4x wider and 152-layer-deep “R152x4” on each of the above-mentioned datasets. A key observation is that in order to profit from more data, one also needs to increase model capacity. This is exemplified by the red arrows in the left-hand panel of the figure below.
Left: In order to make effective use of a larger dataset for pre-training, one needs to increase model capacity. The red arrows exemplify this: small architectures (smaller point) become worse when pre-trained on the larger ImageNet-21k, whereas the larger architectures (larger points) improve. Right: Pre-training on a larger dataset alone does not necessarily result in improved performance, e.g., when going from ILSVRC-2012 to the relatively larger ImageNet-21k. However, by also increasing the computational budget and training for longer, the performance improvement is pronounced.
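
To make the “R{depth}x{width}” naming concrete, here is a purely illustrative sketch in plain Python (the released BiT code has its own model definitions): depth selects the standard bottleneck block counts, and the width factor multiplies every stage’s channel count.

    # Illustrative only: BiT's "R{depth}x{width}" naming scheme.
    RESNET_BLOCKS = {50: (3, 4, 6, 3), 101: (3, 4, 23, 3), 152: (3, 8, 36, 3)}

    def bit_variant(depth, width):
        """Return (per-stage block counts, per-stage output channels)."""
        base_channels = (256, 512, 1024, 2048)  # standard bottleneck ResNet widths
        return RESNET_BLOCKS[depth], tuple(c * width for c in base_channels)

    print(bit_variant(50, 1))   # "R50x1":  ((3, 4, 6, 3),  (256, 512, 1024, 2048))
    print(bit_variant(152, 4))  # "R152x4": ((3, 8, 36, 3), (1024, 2048, 4096, 8192))
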
A second, even more important observation is that the training duration becomes crucial. If one pre-trains on a larger dataset without adjusting the computational budget and training longer, performance is likely to become worse. However, by adapting the schedule to the new dataset, the improvements can be significant.

During our exploration phase, we discovered another modification crucial to improving performance. We show that replacing batch normalization (BN, a commonly used layer that stabilizes training by normalizing activations) with group normalization (GN) is beneficial for pre-training at large scale. First, BN’s state (mean and variance of neural activations) needs adjustment between pre-training and transfer, whereas GN is stateless, thus side-stepping this difficulty. Second, BN uses batch-level statistics, which become unreliable with small per-device batch sizes that are inevitable for large models. Since GN does not compute batch-level statistics, it also side-steps this issue. For more technical details, including the use of a weight standardization technique to ensure stable behavior, please see our paper.
Summary of our pre-training strategy: take a standard ResNet, increase depth and width, replace BatchNorm (BN) with GroupNorm and Weight Standardization (GNWS), and train on a very large and generic dataset for many more iterations.
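
As a concrete illustration of this strategy, here is a minimal PyTorch sketch of the two replacements: a weight-standardized convolution paired with GroupNorm in place of BatchNorm. It is a simplified stand-in for the released BiT code, not the implementation itself.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StdConv2d(nn.Conv2d):
        """Conv2d whose weights are standardized (zero mean, unit variance
        per output filter) on every forward pass."""
        def forward(self, x):
            w = self.weight
            mean = w.mean(dim=(1, 2, 3), keepdim=True)
            var = w.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
            w = (w - mean) / torch.sqrt(var + 1e-10)
            return F.conv2d(x, w, self.bias, self.stride,
                            self.padding, self.dilation, self.groups)

    def conv_gn_relu(cin, cout, groups=32):
        """A GN+WS block: no batch statistics anywhere, so behavior does not
        depend on the per-device batch size."""
        return nn.Sequential(
            StdConv2d(cin, cout, kernel_size=3, padding=1, bias=False),
            nn.GroupNorm(groups, cout),
            nn.ReLU(inplace=True),
        )

    block = conv_gn_relu(64, 128)  # e.g. one 3x3 conv stage

Because neither layer keeps running batch statistics, the same code path runs unchanged at pre-training, transfer, and inference time.
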
Transfer Learning
Following the methods established in the language domain by BERT, we fine-tune the pre-trained BiT model on data from a variety of “downstream” tasks of interest, which may come with very little labeled data. Because the pre-trained model already comes with a good understanding of the visual world, this simple strategy works remarkably well.

Fine-tuning comes with a number of hyperparameters to be chosen, such as the learning rate and weight decay. We propose a heuristic for selecting these hyperparameters that we call the “BiT-HyperRule”, which is based only on high-level dataset characteristics, such as image resolution and the number of labeled examples. We successfully apply the BiT-HyperRule to more than 20 diverse tasks, ranging from natural to medical images.
Once the BiT model is pre-trained, it can be fine-tuned on any task, even if only a few labeled examples are available.
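
The rule itself is simple enough to sketch in a few lines of Python. The version below follows the high-level description in the paper (resolution chosen from image size, schedule length and MixUp from dataset size); treat the exact thresholds and values as illustrative defaults rather than the released implementation.

    def bit_hyperrule(num_examples, image_size):
        """Hedged sketch of BiT-HyperRule: pick fine-tuning hyperparameters
        from high-level dataset characteristics only."""
        # Resolution rule: small images are resized to 160 and cropped to 128;
        # larger ones are resized to 512 and cropped to 480.
        resize, crop = (160, 128) if image_size <= 96 else (512, 480)

        # Schedule rule: longer fine-tuning for larger downstream datasets.
        if num_examples < 20_000:
            steps = 500
        elif num_examples < 500_000:
            steps = 10_000
        else:
            steps = 20_000

        # MixUp regularization only once there is enough data for it to help.
        mixup_alpha = 0.1 if num_examples >= 20_000 else None

        return dict(resize=resize, crop=crop, steps=steps,
                    base_lr=0.003,                # SGD with momentum
                    lr_decay_at=(0.3, 0.6, 0.9),  # fractions of total steps
                    mixup_alpha=mixup_alpha)

    # Example: CIFAR-10 (50k images of 32x32) gets 10k steps at 128px crops.
    print(bit_hyperrule(50_000, 32))
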
When transferring BiT to tasks with very few examples, we observe that as we simultaneously increase the amount of generic data used for pre-training and the architecture capacity, the ability of the resulting model to adapt to novel data improves drastically. On both 1-shot and 5-shot CIFAR (see figure below), increasing model capacity yields limited returns when pre-training on ILSVRC (green curves). Yet, with large-scale pre-training on JFT, each step-up in model capacity yields massive returns (brown curves), up to BiT-L, which attains 64% 1-shot and 95% 5-shot accuracy.
The curves depict median accuracy over 5 independent runs (light points) when transferring to CIFAR-10 with only 1 or 5 images per class (10 or 50 images total). It is evident that large architectures pre-trained on large datasets are significantly more data-efficient.
In order to verify that this result holds more generally, we also evaluate BiT on VTAB-1k, which is a suite of 19 diverse tasks with only 1000 labeled examples per task. We transfer the BiT-L model to all these tasks and achieve a score of 76.3% overall, which is a 5.8% absolute improvement over the previous state-of-the-art.

We show that this strategy of large-scale pre-training and simple transfer is effective even when a moderate amount of data is available, by evaluating BiT-L on several standard computer vision benchmarks such as Oxford Pets, Oxford Flowers and CIFAR. On all of these, BiT-L matches or surpasses state-of-the-art results. Finally, we use BiT as a backbone for RetinaNet on the MSCOCO-2017 detection task and confirm that even for such a structured output task, large-scale pre-training helps considerably.
Left: Accuracy of BiT-L compared to the previous state-of-the-art general model on various standard computer vision benchmarks. Right: Results in average precision (AP) of using BiT as backbone for RetinaNet on MSCOCO-2017.
It is important to emphasize that across all the different downstream tasks we consider, we do not perform per-task hyper-parameter tuning and rely on the BiT-HyperRule. As we show in the paper, even better results can be achieved by tuning hyperparameters on sufficiently large validation data.

Evaluation on “Real-World” Images (ObjectNet)
To further assess the robustness of BiT in a more challenging scenario, we evaluate BiT models that were fine-tuned on ILSVRC-2012 on the recently introduced ObjectNet dataset. This dataset closely resembles real-world scenarios, where objects may appear in atypical contexts, viewpoints and rotations. Interestingly, the benefit from data and architecture scale is even more pronounced: BiT-L achieves an unprecedented top-5 accuracy of 80.0%, an almost 25% absolute improvement over the previous state-of-the-art.
Results of BiT on the ObjectNet evaluation dataset. Left: top-5 accuracy, right: top-1 accuracy.
Conclusion
We show that given pre-training on large amounts of generic data, a simple transfer strategy leads to impressive results, both on large datasets and on tasks with very little data, down to a single image per class. We release the BiT-M model, an R152x4 pre-trained on ImageNet-21k, along with colabs for transfer in Jax, TensorFlow2, and PyTorch. We hope that practitioners and researchers find it a useful alternative to commonly used ImageNet pre-trained models.
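
To give a flavor of what this transfer looks like in practice, here is a minimal TF2 sketch that attaches a freshly initialized classification head to a released BiT-M module and fine-tunes the whole model. The TF Hub handle and the assumption that the module returns a feature vector are assumptions to verify against the published colabs.

    import tensorflow as tf
    import tensorflow_hub as hub

    NUM_CLASSES = 10  # e.g. CIFAR-10

    # Assumed handle for the lighter R50x1 sibling of the released models;
    # check the published colabs for the exact module paths.
    bit_module = hub.KerasLayer("https://tfhub.dev/google/bit/m-r50x1/1",
                                trainable=True)

    model = tf.keras.Sequential([
        bit_module,                                        # pre-trained features
        tf.keras.layers.Dense(NUM_CLASSES,
                              kernel_initializer="zeros"), # fresh task head
    ])

    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.003, momentum=0.9),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"])

    # model.fit(train_images, train_labels, ...) with inputs resized per the
    # BiT-HyperRule sketched above.
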

Acknowledgements
We would like to thank Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby who have co-authored the BiT paper and been involved in all aspects of its development, as well as the Brain team in Zürich. We also would like to thank Andrei Giurgiu for his help in debugging input pipelines. We thank Tom Small for creating the animations used in this blog post. Finally, we refer the interested reader to the related approaches in this direction by our colleagues in Google Research, Noisy Student, as well as Facebook Research’s highly relevant Exploring the Limits of Weakly Supervised Pretraining.

Source: Google AI Blog


Find wheelchair accessible places with Google Maps

Editor’s note: Today is Global Accessibility Awareness Day, and we’ll be sharing resources and tools for education, as well as accessibility features and updates for Android and Google Maps. 

Imagine making plans to go somewhere new, taking the journey to get there and arriving—only to be stuck outside, prevented from sitting with family or unable to access the restroom. It’s a deeply frustrating experience I’ve had many times since becoming a wheelchair user in 2009. And it’s an experience all too familiar to the 130 million wheelchair users worldwide and the more than 30 million Americans who have difficulty using stairs.

So imagine instead being able to “know before you go” whether a destination is wheelchair accessible, just as effortlessly as looking up the address. In recognition of Global Accessibility Awareness Day, we’re announcing a new Google Maps feature that does just that.

People can now turn on an “Accessible Places” feature to have wheelchair accessibility information more prominently displayed in Google Maps. When Accessible Places is switched on, a wheelchair icon will indicate an accessible entrance and you’ll be able to see if a place has accessible seating, restrooms or parking. If it’s confirmed that a place does not have an accessible entrance, we’ll show that information on Maps as well.

Better maps for everyone, whether you walk or roll

Today, Google Maps has wheelchair accessibility information for more than 15 million places around the world. That number has more than doubled since 2017 thanks to the dedication of more than 120 million Local Guides and others who’ve responded to our call to share accessibility information. In total, this community has contributed more than 500 million wheelchair accessibility updates to Google Maps. Store owners have also helped, using Google My Business to add accessibility information to their business profiles, helping users who need stair-free access find them on Google Maps and Search.


With this feature “rollout”, it’s easier to find and contribute wheelchair accessibility information to Google Maps. That benefits everyone, from those of us using wheelchairs and parents pushing strollers to older adults with tired legs and people hauling heavy items. And in this time of COVID-19, it’s especially important to know before you go so that you won’t be stranded outside that pharmacy, grocery or restaurant.

How to contribute accessibility information to Google Maps

Anyone can contribute accessibility information to Google Maps

To get wheelchair accessibility information more prominently displayed in Google Maps, update your app to the latest version, go to Settings, select “Accessibility,” and turn on “Accessible Places.” The feature is available on both Android and iOS. 

We’re also rolling out an update that allows people using iOS devices to more easily contribute accessibility information, joining the millions of Android users who have been sharing this type of information on Maps. This guide has tips for rating accessibility, in case you’re not sure what counts as being “accessible.” We invite everyone to switch on Accessible Places and contribute accessibility information to help people in your community.

A Maps milestone, built on a movement

This launch is a milestone in our journey to build a better, more helpful map for everyone, which includes recent efforts to help people find accessible places, transit routes and walking directions. Our work wouldn’t be possible without the decades of advocacy from those who have fought for equal access for people with disabilities. Were it not for them, there would be far fewer accessible places for Google Maps to show.

The Accessible Places feature is starting to roll out for Google Maps users in Australia, Japan, the United Kingdom and the United States, with support for additional countries on the way.

How to turn on Accessible Places

Use the Accessible Places feature to see accessibility information more prominently displayed in Google Maps

A is for accessibility: How to make remote learning work for everyone

Editor’s note: Today is Global Accessibility Awareness Day, and we’ll be sharing resources and tools for education, as well as accessibility features and updates for Android and Google Maps.

When it comes to equity and access in education, nothing is more important than making sure our digital tools are accessible to all learners—especially now as distance learning becomes the norm. I’m a proud member of the disability community, and I come from a family of special education teachers and paraprofessionals. So I’ve seen firsthand how creative educators and digital tools can elevate the learning experience for students with disabilities. It’s been amazing to see how tools like select-to-speak help students improve reading comprehension as they listen while reading along, or assist students who have low vision. And tools like voice typing in Docs can greatly benefit students who have physical disabilities that limit their ability to use a keyboard.

This Global Accessibility Awareness Day, I’m reminded of how far we’ve come in sharing inclusive tools for people with different abilities. But it doesn’t stop there. Every day we strive to make our products and tools more inclusive for every learner, everywhere.

Applying technology to accessibility challenges

At Google, we’re always focused on how we can use new technologies, like artificial intelligence, to broaden digital accessibility. Since everyone learns in different ways, we’ve built tools and features right into our products, like G Suite for Education and Chromebooks, that can adapt to a range of needs and learning styles. For learners who are Deaf, hard of hearing, or need extra support to focus, you can turn on live captions in Google Slides and in Google Meet. On Chromebooks, students have access to built-in tools like screen readers, including ChromeVox and Select-to-speak, as well as Chromebook apps and extensions from edtech companies like Don Johnston, Grackle Docs, Crick Software, Scanning Pens and Texthelp, with distance learning solutions on the Chromebook App Hub.

As more students learn from home, we’ve seen how features like these have helped students learn in ways that work best for them.

Helping all students shine during distance learning

Educators and students around the world are using Google tools to make learning more inclusive and accessible, whether that’s using Sheets to make to-do lists for students, sharing the built-in magnification tools in Chromebooks to help those who are visually impaired, or using voice typing in Google Docs to dictate lesson plans or essays.

In Portage Public Schools in Portage, Michigan, teachers are taking advantage of accessibility features in Meet to help all of their students learn at their own pace. They use live captioning in Meet so that students who are Deaf or hard of hearing can follow along with the lesson. And with the ability to record and save meetings, every student can refer back to the material if they need to.

In Daegu, South Korea, about 100 teachers worked together to quickly build an e-learning content hub that included tools for special education students, such as Meet, Classroom and Translate. “In the epidemic situation, it was very clear that students in special education were placed in the blind spot of learning,” said one Daegu teacher. But thanks to digital accessibility features that were shared with students and parents, the teacher said, “I saw hope.” 

Live captions in Meet

Accessibility resources for schools

At a time when digital tools are creating the connection between students, classmates, and teachers, we need to prioritize accessibility so that no student is left behind. The good news is that support and tools are readily available for parents, guardians, educators and students.

Your stories of how technology is making learning accessible for more learners during COVID-19 help us and so many others learn new use cases. Please share how you're using accessibility tools, along with requests for how we can continue to meet the needs of more learners.

Accessibility updates that help tech work for everyone

Editor’s note: Today is Global Accessibility Awareness Day, and we’ll be sharing resources and tools for education, as well as new accessibility features for Android and Google Maps.

In 1993, Paul Amadeus Lane was an EMT with his whole life planned out. But at age 22, he was in a multi-car collision that left him fighting for his life and in recovery for eight months. After the crash, Paul became quadriplegic. He soon realized that his voice was one of his most powerful assets—professionally and personally. He went back to school to study broadcasting and became a radio producer and morning show host. Along the way, Paul discovered how he could use technology as an everyday tool to help himself and others. Today, he uses accessibility features, like Voice Access, to produce his own radio show and share his passion for technology.

Stories like Paul’s remind us why accessible technology matters to all of us every single day. Products built with and for people with disabilities help us all pursue our interests, passions and careers. Today, in honor of Global Accessibility Awareness Day, we’re announcing helpful Android features and applications for people with hearing loss, deafness, and cognitive differences. While these updates were designed for people with disabilities, the result is better products that can be helpful for everyone. 

Action Blocks: One-tap actions on Android for people with cognitive disabilities

Every day, people use their phones for routine tasks—whether it’s video calling family, checking the weather or reading the news. Typically, these activities require multiple steps. You might have to scroll to find your video chat app, tap to open it and then type in the name of the contact you’re looking for. 

For people with cognitive disabilities or age-related cognitive conditions, it can be difficult to learn and remember each of these steps. For others, it can be time consuming and cumbersome—especially if you have limited mobility. Now, you can perform these tasks with one tap—thanks to Action Blocks, a new Android app that allows you to create customizable home screen buttons.


With Action Blocks, tap on the customized button to launch an activity.

Create an Action Block for any action that the Google Assistant can perform, like making calls, sending texts, playing videos and controlling devices in your home. Then pick an image for the Action Block from your camera or photo gallery, and place it on your home screen for one-touch access.

Action Blocks is part of our ongoing effort to make technology more helpful for people with cognitive disabilities and their caregivers. The app is available on the Play Store and works on devices running Android 5.0 and above.

Live Transcribe: Real-time transcriptions for everyday conversations

In 2019, we launched Live Transcribe, an app that provides real-time, speech-to-text transcriptions of everyday conversations for people who are deaf or hard of hearing. Based on feedback we’ve received from people using the product, we’re rolling out new features:

  • Set your phone to vibrate when someone nearby says your name. If you’re looking elsewhere or want to maintain social distance, your phone will let you know when someone is trying to get your attention. 
  • Add custom names or terms for different places and objects not commonly found in the dictionary. With the ability to customize your experience, Live Transcribe can better recognize and spell words that are important to you. 
  • It’s now easier to search past conversations. Simply use the search bar to look through past transcriptions. To use the feature, turn on ‘Saving Transcriptions’ in Settings. Once turned on, transcriptions will be saved locally on your device for three days.
  • We’re expanding our support of 70+ languages to include Albanian, Burmese, Estonian, Macedonian, Mongolian, Punjabi, and Uzbek.

Live Transcribe is pre-installed on Pixel devices and is available on Google Play for devices running Android 5.0 and up.

Sound Amplifier: Making the sounds around you clearer and louder

Sound Amplifier, a feature that clarifies the sound around you, now works with Bluetooth headphones. Connect your Bluetooth headphones and place your phone close to the source of the sound, like a TV or a lecturer, so that you can hear more clearly. On Pixel, you can now also boost the audio from media playing on your device—whether you are watching YouTube videos, listening to music, or enjoying a podcast. Sound Amplifier is available on Google Play for devices running Android 6.0 and above.


Use Sound Amplifier to clarify sound playing on your phone.

Accessibility matters for everyone

We strive to build products that are delightful and helpful for people of all abilities. After all, that’s our mission: to make the world’s information universally accessible for everyone. If you have questions on how these features can be helpful for you, visit our Help Center, connect with our Disability Support team or learn more about our accessibility products on Android.

Source: Android


Navigating the road ahead: The benefits of real-time marketing

Changes in consumer behavior have always resulted in adjustments to marketing strategies. COVID-19 has shown how quickly consumers’ interests, expectations, and purchasing behavior can shift—and with it, an ebb and flow in demand for products and services. Despite these changes, consumer expectations for businesses and brands remain high. In fact, 78 percent of people surveyed say brands should show how they can be helpful in the new everyday life.1

Adjusting your media buying and the way your business shows up in these dynamic conditions is difficult, especially when some businesses are having to manage twice the complexity with half the capacity. Today, we’ll explore the unique role automation can play in helping you respond to the impact of COVID-19 in real time.


Get the most out of your budget

As conditions change, so do auction dynamics. Communities are in various stages of response to COVID-19, and the things people care about are rapidly shifting. This influences things like location, mobile browsing habits, conversions, and other variables that impact ad performance. It’s in this constant sea of change that Smart Bidding can help.

Smart Bidding (also available as Google Ads auction-time bidding in Search Ads 360) uses machine learning to automatically calculate bids for each and every auction. Utilizing signals like location, search query, and conversion data, Smart Bidding can optimize bids in real time to hit your performance goal even as query and conversion volume fluctuates. 

It’s important to note that unpredictable changes in conversion rates (for example, shifts in conversion cycles or in cancellation and return rates) are challenging for any bid automation tool. Under these conditions, consider adjusting your cost-per-acquisition or return-on-ad-spend targets to ensure the best allocation of your budget. For additional flexibility, consider shared budgets and portfolio bid strategies, which are effective ways to automatically adjust bids and move spend across campaigns based on performance.


Reach new and existing customers

From flour to at-home workouts to studying at home, the things people are searching for and how they’re searching for them are evolving. It can be difficult to identify where consumers’ attention and demand are shifting while ensuring you have the right query coverage. Dynamic Search Ads are an easy way to reach customers who are searching for exactly what you have to offer. Using the content on your website, Dynamic Search Ads automatically deliver relevant ads with headlines and landing pages that are specific to the user’s query. So as consumer behavior shifts, your Search ads adjust in real time to meet that demand, all while saving you time.

Another way to find keyword opportunities is through the Recommendations page. "Keywords & Targeting" recommendations help you identify new trends that are relevant to your business. In fact, more than 16 million keyword recommendations in Google Ads are based on market trends alone, with new ones added every day. Consider adding keywords that are projected to drive additional traffic beyond your existing ones, or pausing keywords that are performing poorly.

Once you’ve applied the recommendations that make the most sense for your business, keep an eye on your optimization score. Each recommendation comes with a score uplift, and historically we’ve seen that advertisers who have increased their score by 10 points saw a 10 percent increase in conversions on average. You can quickly check for new recommendations using the Google Ads mobile app.



Google Ads mobile app

Show up with the right message

COVID-19 has not only disrupted business operations, like inventory and shipping, but has also impacted the way businesses communicate with customers. As conditions change week to week and community to community, it’s critical to adjust how you’re communicating and interacting with your customers at scale.

Responsive search ads and responsive display ads enable you to make updates to your Search and Display ads at scale. Using multiple creative assets, like headlines and descriptions, they automatically identify the best combination of assets to deliver an ad that’s likely to perform best. For responsive search ads, you can also pin critical information, like modified support options or updated business hours, to ensure it shows with your ads. If you're seeing an increase in call volume, or your business is operating on limited hours or staffing, call ads (formerly known as call-only ads) now also include an optional “Visit website” link. This gives your customers more flexibility in how they connect with your business.

When it comes to adjusting the messages in your video ads, time and resources are limiting factors right now. Rather than starting from scratch, consider using Video Builder. It’s a free beta tool that animates static assets—images, text and logos—with music from YouTube’s audio library. You can choose from a variety of layouts based on your message and goals, customize colors and font, and quickly generate a short YouTube video.


Know what’s working

As the world moves from responding to recovering from this crisis, it’s important you have the right tools available to understand the impact of COVID-19 on your business. Over the past few weeks, we’ve introduced improvements to attribution in Google Ads to help you understand your Google media better.

A new look for attribution reports helps you quickly see how customers are engaging with your ads so you can select the right attribution model for your business. One model, data-driven attribution, uses machine learning to determine how much credit to assign to each click along the customer journey. This ensures your media strategy is accounting for changes in consumer behavior during times of crisis. And with more people turning to YouTube during this pandemic, you can use cross-network reports (currently in beta) to understand how customers interact with your Search, Shopping, and YouTube ads—including clicks and video engagements—before converting.


Helpful resources for managing your campaigns

We’ve created a single destination for product guidance and business considerations when managing your campaigns through COVID-19. You can find the full list of guides and checklists here. We’ll continue updating and adding more through the rest of the year.

In early June, we’ll also be launching The Update on Think with Google: a new video series to share the latest insights, news, best practices, and products. Enjoy this sneak peek, and stay tuned.


1. Kantar, COVID-19 Barometer Global Report, Wave 2, 50 countries, n=9,815, fielded March 27–30, 2020