Responsibly applying AI models to Search

For more than two decades of Search, we’ve been at the forefront of innovation in language understanding to help deliver on our mission of making the world’s information more accessible and useful for everyone. We’ve seen how critical these advancements are to making information more helpful and to better connecting people with creators, publishers and businesses on the web. It’s this constant improvement in understanding human language that has enabled us to send more traffic to the web every year since Google was created.

We’ve also seen how AI models have driven significant advances in language understanding. Each successive milestone, from neural nets to BERT to MUM, has blown us away with the step change in information understanding it offers. But with each step forward, we look closely at the limitations and risks new technologies can present.

Across Google, we have been examining the risks and challenges associated with more powerful language models, and we’re committed to responsibly applying AI in Search. Here are some of the ways we do that.

Training on high-quality data

We pretrain our models on high-quality data to reduce their potential to perpetuate undesirable biases that may exist in web content. In the case of MUM, we ensured that training data from the web was designated as high-quality based on our search quality metrics, which are informed by our Search Quality Rater Guidelines and driven by our quality rating and evaluation system. This substantially reduces the risk of training on misinformation or explicit content, for example, and is key to our approach.
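
To make the idea concrete, here is a minimal sketch of what a quality gate over candidate training documents could look like. The WebDocument fields, the quality_score signal, and the 0.8 threshold are illustrative assumptions for demonstration, not a description of Google's actual pipeline.

from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class WebDocument:
    url: str
    text: str
    quality_score: float  # assumed to come from an upstream quality-rating system

def filter_high_quality(docs: Iterable[WebDocument],
                        threshold: float = 0.8) -> Iterator[WebDocument]:
    # Keep only documents whose quality score clears the (illustrative) threshold.
    for doc in docs:
        if doc.quality_score >= threshold:
            yield doc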

And as part of our efforts to build a Search experience that works for everyone, MUM was trained on over 75 languages from around the world.

Rigorous evaluation

Every improvement to Google Search undergoes a rigorous evaluation process to ensure we’re providing more relevant, helpful results. Our Search Quality Rater Guidelines are our north star for how we evaluate great search results. Human raters follow these guidelines and help us understand if our improvements are better fulfilling people’s information needs.

This evaluation process is central to the responsible application of any improvement to Search, whether we’re introducing powerful new systems like BERT or MUM, or simply adding a new feature.

Some changes are bigger than others, so we have to adjust our process accordingly. At the time of its introduction to Search, BERT impacted 1 in 10 English-language queries, so we scaled our evaluation process to be even more rigorous than usual. We subjected our systems to an unprecedented amount of scrutiny, increasing both the scale and granularity of quality testing, to help ensure they weren’t introducing concerning patterns into our systems.

While our standard evaluation process helps us judge launches across a representative query stream, for some improvements we also look more closely at whether changes produce quality gains or losses across specific slices of queries or topic areas. This allows us to identify whether concerning patterns exist and to pursue mitigations before launching an improvement to Search.
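
As an illustration of this kind of slice-based analysis, the minimal sketch below computes a per-slice win rate from side-by-side ratings. The rating format and the win-rate metric are assumptions for demonstration only, not our internal evaluation system.

from collections import defaultdict
from statistics import mean

def win_rate_by_slice(ratings):
    # ratings: iterable of (slice_name, rating_delta) pairs, where
    # rating_delta > 0 means raters preferred the new system's results.
    by_slice = defaultdict(list)
    for slice_name, delta in ratings:
        by_slice[slice_name].append(1.0 if delta > 0 else 0.0)
    return {s: mean(wins) for s, wins in by_slice.items()}

# Example: a change can win on average yet still regress on one slice.
ratings = [("health", -1), ("health", -1), ("shopping", 1),
           ("shopping", 1), ("navigation", 1)]
print(win_rate_by_slice(ratings))
# {'health': 0.0, 'shopping': 1.0, 'navigation': 1.0}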

Search is not perfect, and no application of AI will be either, which is why any change to Search involves extensive and constant evaluation and testing.

Responsible application design

In addition to working with responsibly designed and trained models, the thoughtful design of products and applications is key to addressing some of the challenges of language models. In Search, many of these critical mitigations take place at the application level, where we can focus on the end-user experience and more effectively manage risk in smaller models designed for specific tasks.

When we adopt new AI technologies such as BERT or MUM, we apply them to individual systems so those systems can perform their tasks more efficiently and effectively. This approach allows us to focus the scope of our evaluation and understand whether an application is introducing concerning patterns. And if we do find concerning behavior, we’re able to design much more targeted solutions.

Minding our footprint

Training and running advanced AI models can be energy-intensive. Another benefit of training smaller, application-specific models is that the energy costs of the larger base model, such as MUM, are amortized across its many different applications.

The Google Research team recently published research detailing the energy costs of training state-of-the-art language models. Their findings show that combining efficient models, processors, and data centers with clean energy sources can reduce the carbon footprint of a model by as much as a thousandfold, and we follow this approach to train our models in Search.

Language models in practice

New language models like MUM have enormous potential to transform our ability to understand language and information about the world. And while they may be powerful, they do not make our existing systems obsolete. Today, Google Search employs hundreds of algorithms and machine learning models, none of which are wholly reliant on any singular, large model.

Among these hundreds of applications are systems and protections designed specifically to ensure you have a safe, high-quality experience. For example, we design our ranking systems to surface relevant and reliable information. Even if a model were to present issues around low-quality content, our systems are built to counteract this.

As we’re able to introduce new technologies like MUM into Search, they’ll help us greatly improve our systems and introduce entirely new product experiences. And they can also help us tackle other challenges we face. Improved AI systems can help bolster our spam fighting capabilities and even help us combat known loss patterns. In fact, we recently introduced a BERT-based system to better identify queries seeking explicit content, so we can better avoid shocking or offending users not looking for that information, and ultimately make our Search experience safer for everyone.
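
To illustrate the general pattern (not Google's actual system), here is a minimal sketch of gating explicit results behind a query classifier. The explicit_query_model callable stands in for a fine-tuned BERT-style model, and the 0.5 threshold is an arbitrary assumption.

def should_filter_explicit(query: str, explicit_query_model,
                           threshold: float = 0.5) -> bool:
    # explicit_query_model returns an assumed probability in [0, 1] that the
    # query is deliberately seeking explicit content.
    p_seeking_explicit = explicit_query_model(query)
    # If the query does not appear to seek explicit content, explicit results
    # should be filtered from the response.
    return p_seeking_explicit < threshold

# Example with a trivial stand-in model:
print(should_filter_explicit("chocolate cake recipe", lambda q: 0.01))  # True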

We look forward to making Search a better, more helpful product with improved information understanding from these advanced language models, and bringing these new capabilities to Search in a responsible way.

Pathdreamer: A World Model for Indoor Navigation

When a person navigates around an unfamiliar building, they take advantage of many visual, spatial and semantic cues to help them efficiently reach their goal. For example, even in an unfamiliar house, if they see a dining area, they can make intelligent predictions about the likely location of the kitchen and lounge areas, and therefore the expected location of common household objects. For robotic agents, taking advantage of semantic cues and statistical regularities in novel buildings is challenging. A typical approach is to implicitly learn what these cues are, and how to use them for navigation tasks, in an end-to-end manner via model-free reinforcement learning. However, navigation cues learned in this way are expensive to acquire, hard to inspect, and difficult to re-use in another agent without retraining from scratch.

People navigating in unfamiliar buildings can take advantage of visual, spatial and semantic cues to predict what’s around a corner. A computational model with this capability is a visual world model.

An appealing alternative for robotic navigation and planning agents is to use a world model to encapsulate rich and meaningful information about their surroundings, which enables an agent to make specific predictions about actionable outcomes within their environment. Such models have seen widespread interest in robotics, simulation, and reinforcement learning with impressive results, including finding the first known solution for a simulated 2D car racing task, and achieving human-level performance in Atari games. However, game environments are still relatively simple compared to the complexity and diversity of real-world environments.

In “Pathdreamer: A World Model for Indoor Navigation”, published at ICCV 2021, we present a world model that generates high-resolution 360º visual observations of areas of a building unseen by an agent, using only limited seed observations and a proposed navigation trajectory. As illustrated in the video below, the Pathdreamer model can synthesize an immersive scene from a single viewpoint, predicting what an agent might see if it moved to a new viewpoint or even a completely unseen area, such as around a corner. Beyond potential applications in video editing and bringing photos to life, solving this task promises to codify knowledge about human environments to benefit robotic agents navigating in the real world. For example, a robot tasked with finding a particular room or object in an unfamiliar building could perform simulations using the world model to identify likely locations before physically searching anywhere. World models such as Pathdreamer can also be used to increase the amount of training data for agents, by training agents in the model.

Provided with just a single observation (RGB, depth, and segmentation) and a proposed navigation trajectory as input, Pathdreamer synthesizes high resolution 360º observations up to 6-7 meters away from the original location, including around corners. For more results, please refer to the full video.

How Does Pathdreamer Work?
Pathdreamer takes as input a sequence of one or more previous observations, and generates predictions for a trajectory of future locations, which may be provided up front or iteratively by the agent interacting with the returned observations. Both inputs and predictions consist of RGB, semantic segmentation, and depth images. Internally, Pathdreamer uses a 3D point cloud to represent surfaces in the environment. Points in the cloud are labelled with both their RGB color value and their semantic segmentation class, such as wall, chair or table.

To predict visual observations in a new location, the point cloud is first re-projected into 2D at the new location to provide ‘guidance’ images, from which Pathdreamer generates realistic high-resolution RGB, semantic segmentation and depth. As the model ‘moves’, new observations (either real or predicted) are accumulated in the point cloud. One advantage of using a point cloud for memory is temporal consistency: revisited regions are rendered consistently with previous observations.
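
As a rough illustration of this re-projection step, the sketch below projects a labelled point cloud into a 2D guidance image using a simple pinhole camera model with a z-buffer. Pathdreamer's actual renderer handles 360º panoramas; the intrinsics, array shapes, and function signature here are simplifying assumptions.

import numpy as np

def reproject(points, values, pose, K, h, w):
    # points: (N, 3) world coordinates; values: (N, 3) RGB colors or class
    # labels; pose: 4x4 world-to-camera transform; K: 3x3 camera intrinsics.
    cam = (pose @ np.c_[points, np.ones(len(points))].T)[:3]  # camera frame, (3, N)
    z = cam[2]
    front = z > 0  # keep only points in front of the camera
    uv = K @ cam[:, front]
    uv = (uv[:2] / uv[2]).round().astype(int)  # pixel coordinates, (2, M)
    img = np.zeros((h, w, 3), dtype=values.dtype)
    depth = np.full((h, w), np.inf)
    for (u, v), val, d in zip(uv.T, values[front], z[front]):
        if 0 <= u < w and 0 <= v < h and d < depth[v, u]:  # nearest point wins
            depth[v, u], img[v, u] = d, val
    return img, depth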

Internally, Pathdreamer represents surfaces in the environment via a 3D point cloud containing both semantic labels (top) and RGB color values (bottom). To generate a new observation, Pathdreamer ‘moves’ through the point cloud to the new location and uses the re-projected point cloud image for guidance.

To convert guidance images into plausible, realistic outputs, Pathdreamer operates in two stages: the first stage, the structure generator, creates segmentation and depth images, and the second stage, the image generator, renders these into RGB outputs. Conceptually, the first stage provides a plausible high-level semantic representation of the scene, and the second stage renders this into a realistic color image. Both stages are based on convolutional neural networks.
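
The sketch below captures this two-stage flow with the generators as opaque callables. The function signatures are simplified assumptions; the real models are convolutional networks (available in the open-source code) that take richer inputs.

def pathdreamer_step(guidance, structure_generator, image_generator, noise):
    # guidance: re-projected RGB, segmentation, and depth images from the
    # accumulated point cloud (see the re-projection sketch above).
    g_rgb, g_seg, g_depth = guidance
    # Stage 1 (structure generator): fill in a plausible semantic layout and
    # depth for unseen regions, conditioned on the guidance plus noise.
    seg, depth = structure_generator(g_seg, g_depth, noise)
    # Stage 2 (image generator): render the semantic layout into realistic RGB.
    rgb = image_generator(seg, depth, g_rgb)
    return rgb, seg, depth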

Pathdreamer operates in two stages: the first stage, the structure generator, creates segmentation and depth images, and the second stage, the image generator, renders these into RGB outputs. The structure generator is conditioned on a noise variable to enable the model to synthesize diverse scenes in areas of high uncertainty.

Diverse Generation Results
In regions of high uncertainty, such as an area predicted to be around a corner or in an unseen room, many different scenes are possible. Incorporating ideas from stochastic video generation, the structure generator in Pathdreamer is conditioned on a noise variable, which represents the stochastic information about the next location that is not captured in the guidance images. By sampling multiple noise variables, Pathdreamer can synthesize diverse scenes, allowing an agent to sample multiple plausible outcomes for a given trajectory. These diverse outputs are reflected not only in the first stage outputs (semantic segmentation and depth images), but in the generated RGB images as well.
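
A minimal sketch of this sampling loop follows, assuming a step_fn that wraps the two-stage model above and a 32-dimensional noise vector; both are illustrative choices rather than the paper's configuration.

import numpy as np

def sample_diverse_scenes(step_fn, guidance, k=5, noise_dim=32, seed=0):
    # Each sampled noise vector yields a different plausible completion of
    # the unseen parts of the scene.
    rng = np.random.default_rng(seed)
    return [step_fn(guidance, noise=rng.standard_normal(noise_dim))
            for _ in range(k)]

# e.g. step_fn = functools.partial(pathdreamer_step,
#                                  structure_generator=sg, image_generator=ig)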

Pathdreamer is capable of generating multiple diverse and plausible images for regions of high uncertainty. Guidance images on the leftmost column represent pixels that were previously seen by the agent. Black pixels represent regions that were previously unseen, for which Pathdreamer renders diverse outputs by sampling multiple random noise vectors. In practice, the generated output can be informed by new observations as the agent navigates the environment.

Pathdreamer is trained with images and 3D environment reconstructions from Matterport3D, and is capable of synthesizing realistic images as well as continuous video sequences. Because the output imagery is high-resolution and 360º, it can be readily converted for use by existing navigation agents for any camera field of view. For more details and to try out Pathdreamer yourself, we recommend taking a look at our open source code.

Application to Visual Navigation Tasks
As a visual world model, Pathdreamer shows strong potential to improve performance on downstream tasks. To demonstrate this, we apply Pathdreamer to the task of Vision-and-Language Navigation (VLN), in which an embodied agent must follow a natural language instruction to navigate to a location in a realistic 3D environment. Using the Room-to-Room (R2R) dataset, we conduct an experiment in which an instruction-following agent plans ahead by simulating many possible navigable trajectories through the environment, ranking each against the navigation instructions, and choosing the best-ranked trajectory to execute. Three settings are considered. In the Ground-Truth setting, the agent plans by interacting with the actual environment, i.e., by moving. In the Baseline setting, the agent plans ahead without moving by interacting with a navigation graph that encodes the navigable routes within the building, but does not provide any visual observations. In the Pathdreamer setting, the agent plans ahead without moving by interacting with the navigation graph and also receives corresponding visual observations generated by Pathdreamer.
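
The sketch below shows the planning loop of the Pathdreamer setting in simplified form. Here world_model, compatibility_score, and the pose-sequence trajectory representation are stand-ins, not the paper's actual implementation.

def plan_with_world_model(instruction, candidate_trajectories,
                          world_model, compatibility_score):
    best_traj, best_score = None, float("-inf")
    for traj in candidate_trajectories:
        # Predict what the agent would observe at each pose along this
        # route, without physically moving.
        simulated_obs = [world_model.predict(pose) for pose in traj]
        # Score how well the simulated observations match the instruction.
        score = compatibility_score(instruction, simulated_obs)
        if score > best_score:
            best_traj, best_score = traj, score
    return best_traj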

When planning ahead for three steps (approximately 6m), in the Pathdreamer setting the VLN agent achieves a navigation success rate of 50.4%, significantly higher than the 40.6% success rate in the Baseline setting without Pathdreamer. This suggests that Pathdreamer encodes useful and accessible visual, spatial and semantic knowledge about real-world indoor environments. As an upper bound illustrating the performance of a perfect world model, under the Ground-Truth setting (planning by moving) the agent’s success rate is 59%, although we note that this setting requires the agent to expend significant time and resources to physically explore many trajectories, which would likely be prohibitively costly in a real-world setting.

We evaluate several planning settings for an instruction-following agent using the Room-to-Room (R2R) dataset. Planning ahead using a navigation graph with corresponding visual observations synthesized by Pathdreamer (Pathdreamer setting) is more effective than planning ahead using the navigation graph alone (Baseline setting), capturing around half the benefit of planning ahead using a world model that perfectly matches reality (Ground-Truth setting).

Conclusions and Future Work
These results showcase the promise of using world models such as Pathdreamer for complicated embodied navigation tasks. We hope that Pathdreamer will help unlock model-based approaches to challenging embodied navigation tasks such as navigating to specified objects and VLN.

Applying Pathdreamer to other embodied navigation tasks such as Object-Nav, continuous VLN, and street-level navigation is a natural direction for future work. We also envision further research on improved architectures and modeling directions for the Pathdreamer model, as well as testing it on more diverse datasets, including but not limited to outdoor environments. To explore Pathdreamer in more detail, please visit our GitHub repository.

Acknowledgements
This project is a collaboration with Jason Baldridge, Honglak Lee, and Yinfei Yang. We thank Austin Waters, Noah Snavely, Suhani Vora, Harsh Agrawal, David Ha, and others who provided feedback throughout the project. We are also grateful for general support from Google Research teams. Finally, we thank Tom Small for creating the animation in the third figure.

Source: Google AI Blog


Magic in the making: The 4 pillars of great creative

Consumers report that helpfulness is their top expectation of brands since the start of the COVID-19 pandemic, with 78% saying a brand's advertising should show how it can be helpful in everyday life.1 This means businesses need to quickly engage audiences with meaningful messages, using immersive storytelling to bring their brand and products to life.

To help you build visually rich ad experiences that easily drive consumers to action, we've brought together our top creative guidance across Google Ads solutions in a single guide. Learn to craft stronger calls-to-action, engaging ad copy and striking visual assets, plus get the latest insights from our team of creative and data scientists at Creative Works. You can also explore tips by marketing objective to craft more impactful creative that meets your business goals.

An image of two phones featuring natural soap products.

Dr. Squatch using a clear call-to-action, engaging copy and rich product visuals with Google Ads.

The 4 pillars of compelling creative

Lead with a clear call-to-action: Personalized descriptions perform up to two times better for their campaign goal versus non-personalized descriptions.2 This means businesses need to help consumers immediately see what they have to offer by including words like "you" to draw attention, and by adding their product or brand name in headlines and descriptions.

Connect more authentically with a wide variety of assets: Audiences take action faster if they can relate to your message; 64% of consumers said they took some sort of action after seeing an ad that they considered to be diverse or inclusive.3 And images that feature people perform over 30% better for their campaign goal versus images that don’t.4 Given the variety of consumers looking online for new products to try, brands should show a wide range of people using their products or services to resonate with audiences.

Build for smaller screens: Images with no overlaid text, or overlaid text under 20 characters, perform up to 1.2X better for their campaign goal versus images with longer overlaid text.5 With people spending more time on a broad range of small devices, businesses should consider how and where consumers are seeing their ads and provide visual assets that clearly communicate their call-to-action.

Give your creatives time to test: We've seen that waiting 2-3 weeks between changes to ad creative minimizes performance fluctuations, allowing the Google Ads system time to learn and adapt to your most effective assets. Review Ad strength and asset reporting to better understand which assets resonate best and help you make the call on which to remove or replace.

An image of two phones featuring beauty products.

Beauty brand COSMEDIX using a variety of image assets in multiple aspect ratios with Google Ads.

Get help with building better assets

Consumers expect businesses of all sizes to offer more helpful brand experiences. Stand out with more relevant, engaging offers with help from our new guide to building better creative. And for more support with developing new creative assets or campaign strategies, check out our approved creative production agencies to find the right partner to help you achieve your business goals.


1. Kantar, COVID-19 Barometer Global Report, Wave 2, run across 50 countries, n=9,815, fielded March 27–30, 2020.

2. Google internal data based on an aggregate study of median performance of campaign goals for Responsive display ads (CTR), Discovery ads (CTR, CVR), Video action campaigns (VTR) and Video discovery ads (VTR) across 78K assets for Media & Entertainment, Retail, and Finance verticals. Global. January 2020 - June 2021.

3. Google/Ipsos, U.S., Inclusive Marketing Study, n of 2,987 U.S. consumers ages 13–54 who access the internet at least monthly, Aug. 2019.

4. Google internal data based on an aggregate study of median performance of campaign goals for Discovery ads (CTR, CVR), Video action campaigns (VTR), Video discovery ads (VTR), App campaigns for installs (IPM), and App campaigns for engagement (EPM) for Media & Entertainment, Retail, and Finance verticals. Global. January 2020 - June 2021.

5. Google internal data based on an aggregate study of median performance of campaign goals for Discovery ads (CTR, CVR), and Responsive display ads (CTR) across 78K assets for Media & Entertainment, Retail, and Finance verticals. Global. January 2020 - June 2021.


From Beginner to Machine Learning Instructor In A Year

Posted by Salim Abid, MENA Regional Lead, Developer Relations

Banner that reads Google Developer Student Clubs, Misr University for Science and Technology (MUST). Includes overhead image of person coding on a laptop

Yara Elkady, Google Developer Student Club (GDSC) Lead, can trace her passion for tech all the way back to a single moment. She was sitting in computer class when her middle school teacher posed a question to the class:

“Did you know that you can create apps and games like the ones that you spend so much time on?”

It was a simple question, but it was enough to plant the seed that would define the trajectory of Yara’s career. Following in the footsteps of so many beginners before her, Yara did a Google search to find out more about creating apps. She didn’t realize it at the time, but Yara had just taken her first steps down the path to becoming a developer.

Knowing that she wanted to pursue tech further, Yara went to college at Misr University for Science and Technology (MUST) in Giza, Egypt to study computer science. In her second year, she began reading more about artificial intelligence. Yara was blown away by the potential of training a machine to make decisions on its own. With machine learning, she could pursue more creative ideas that went beyond what was possible with traditional programming. As Yara explains, “It felt like magic”. Still, like any beginner interested in AI, she felt lost.

Enter Google Developer Student Clubs

Yara first discovered the GDSC chapter at MUST through her school’s social media page. For the entirety of her second year, Yara attended workshops and saw firsthand the impact GDSC events could have on students aspiring to become developers. With help from Google Developer Student Clubs, Yara was able to grow her skills as a developer and connect with peers who shared her interests. At the end of the year, Yara applied to be a Lead so that she could help more students engage with the community. Not too long after, Yara was accepted as a GDSC Lead for the 2020-2021 season!

A classroom of people attend a GDSC MUST speaker session

A GDSC MUST speaker session

As part of becoming a GDSC Lead, Yara enrolled in the MENA DSC Leads Academy to receive hands-on training in various Google technologies. Though it was the first time the Academy had ever been hosted (both in person and virtually), more than 100 Leads from 150 GDSC chapters attended over the course of six weeks. Yara applied to the Machine Learning track and was chosen for the program. During the course, Yara mastered advanced machine learning concepts, including classical ML models, deep learning, data manipulation, and TensorFlow training. She also got to work with other Leads on advanced machine learning projects, helping her gain even more confidence in her ML knowledge.

Soon after passing the program, Yara collaborated with the GDSC Leads she had met during the course to host a one-month ML track, passing on the knowledge they had learned to the GDSC community. Through the sessions she hosted, Yara caught the attention of BambooGeeks, a startup that creates training opportunities to help local tech aspirants become industry-ready. BambooGeeks offered Yara a job as a machine learning instructor, where she could create sessions for the largest audience of trainees she’d ever worked with.

The road to certification

Yara didn’t realize it yet, but even more opportunities were headed her way. She learned from the GDSC MENA program manager that GDSC Leads would have the opportunity to take the TensorFlow Certification exam if they wished. It wouldn’t be easy, but Yara knew she had all the resources she needed to succeed. She wasted no time and created a study group with other GDSC Leads working to get certified. Together, Yara and her fellow Leads pulled endless all-nighters over the next few months so that they could skill up for the exam and support each other through the arduous study process. They also worked with Elyes Manai, an ML Google Developer Expert, who gave them an overview of the exam and recommended resources that would help them pass.

Thanks to those resources, support from her peers, and tons of hard work, Yara passed the exam and received her TensorFlow certification! And she wasn’t the only one: 11 other MENA GDSC Leads also passed the exam to receive their certifications. Yara and her study partners were the first women in Egypt to be featured in the TensorFlow Certificate Network, and Yara became one of 27 people in Africa to receive the TensorFlow Developer Certificate!

Image of Yara Elkady's TensorFlow Developer Certificate

Yara’s TensorFlow Developer Certificate

When Yara looks back at how she was able to fast track from beginner to certified machine learning developer in just a year, she credits Google Developer Student Clubs with:

  • Offering advanced Machine Learning training
  • Fostering connections with other Leads to host study jams
  • Providing guidance from machine learning GDEs
  • Preparing her for the TensorFlow certification exam
  • Exposing her to opportunities that enabled her to inspire others
  • Offering endless community support

The truth is, students like Yara make Google Developer Student Clubs special by sharing their knowledge with the community and building a support system with their peers that extends far beyond the classroom.

On the importance of community, Yara says it best:

“Reaching your goals is a much more enjoyable process when you have someone with you on the same journey, to share your ups and downs, and push you to do more when you feel like quitting. Your success becomes their success and that gives more meaning to your accomplishments.”

If you’re a student who is ready to join your own Google Developer Student Club community, find one near you here.

Open Source in the 2021 Accelerate State of DevOps Report

To truly thrive, organizations need to adopt practices and capabilities that will lead them to performance improvements. Therefore, having access to data-driven insights and recommendations about the most effective and efficient ways to develop and deliver technology is critical. Over the past seven years, the DevOps Research and Assessment (DORA) has collected data from more than 32,000 industry professionals and used rigorous statistical analysis to deepen our understanding of the practices that lead to excellence in technology delivery and to powerful business outcomes.
 
One of the most valuable insights that has come from this research is the categorization of organizations into four different performance profiles (Elite, High, Medium, and Low) based on their performance on four software delivery metrics centered on throughput and stability: Deployment Frequency, Lead Time for Changes, Time to Restore Service, and Change Failure Rate. We found that organizations that excel at these four metrics can be classified as elite performers, while those that do not can be classified as low performers. See DevOps Research and Assessment (DORA) for a detailed description of these metrics and the different levels of organizational performance.
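
As a rough illustration of this categorization, the sketch below classifies an organization by a single metric, deployment frequency. The thresholds are paraphrased and heavily simplified from published State of DevOps reports; the real analysis clusters on all four metrics jointly.

def performance_profile(deploys_per_year: float) -> str:
    # Illustrative band edges only; not the report's exact methodology.
    if deploys_per_year >= 365:  # on-demand, multiple deploys per day
        return "Elite"
    if deploys_per_year >= 12:   # roughly between weekly and monthly, or better
        return "High"
    if deploys_per_year >= 2:    # roughly between monthly and twice a year
        return "Medium"
    return "Low"                 # fewer than once every six months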


We have found that a number of technical capabilities are associated with improved continuous delivery performance. Our findings indicate that organizations that have incorporated loosely coupled architecture, continuous testing and integration, trunk-based development, deployment automation, database change management, and monitoring and observability, and that have leveraged open source technologies, perform better than organizations that have not adopted these capabilities.

Now that you know a little bit about what DORA is and some of its key findings, let’s dive into whether the use of open source technologies within organizations impacts performance.

A quick Google search will yield hundreds (if not thousands) of articles describing the myriad ways organizations benefit from using open source software: faster innovation, higher quality products, stronger security, flexibility, ease of customization, and more. We know using open source software is the way to go, but until now we had little empirical evidence demonstrating that its use is associated with improved organizational performance.

This year, we surveyed 1,200 working professionals from a variety of industries around the globe about the factors that drive higher performance, including the use of open source software. Research from this year’s DORA report illustrates that low-performing organizations have the highest use of proprietary software. In contrast, elite performers are 1.75 times more likely to make extensive use of open source components, libraries, and platforms. We also find that elite performers are 1.5 times more likely to have plans to expand their use of open source software compared to their low-performing counterparts. But the question remains: does leveraging open source software impact an organization’s performance? It turns out the answer is yes!

Our research also found that elite performers who meet their reliability targets are 2.4 times more likely to leverage open source technologies. We suspect that the original tenets of the open source movement, transparency and collaboration, play a big role. Developers are less likely to waste time reinventing the wheel, which allows them to spend more time innovating, and they are able to leverage global talent instead of relying on the few people in their team or organization.

Technology transformations take time, effort, and resources. They also require organizations to make significant mental shifts. These shifts are easier when there is empirical evidence backing recommendations—organizations don’t have to take someone’s word for it, they can look at the data, look at the consistency of findings to know that success and improvement are in fact possible.

In addition to open source software, the 2021 Accelerate State of DevOps Report discusses a variety of capabilities and practices that drive performance. In the 2021 report, we also examined the effects of SRE best practices, the pandemic and burnout, the importance of quality documentation, and we revisited our exploration of leveraging the cloud. If you’d like to read the full report or any previous report, you can visit cloud.google.com/devops.

Giving users more transparency into their Google ad experience

Today, people engage with a wider variety of ad formats on more Google products than ever before — from Video ads on YouTube to Shopping ads across Search, Display and more. And they increasingly want to know more about the ads they see. That’s why we’ve been innovating on features like “About this ad” to help users understand why an ad was shown, and to mute ads or advertisers they aren’t interested in.

Last spring, we also introduced an advertiser identity verification program that requires Google advertisers to verify information about their businesses, where they operate from and what they’re selling or promoting. This transparency helps users learn more about the company behind a specific ad. It also helps differentiate credible advertisers in the ecosystem, while limiting the ability of bad actors to misrepresent themselves. Since launching the program last year, we have started verifying advertisers in more than 90 countries — and we’re not stopping there.

Introducing advertiser pages 

To give users of our products even more transparency, we are enhancing ad disclosures with new advertiser pages. Users can access these disclosures in our new “About this ad” menu to see the ads a specific verified advertiser has run over the past 30 days. For example, imagine you’re seeing an ad for a coat you’re interested in, but you don’t recognize the brand. With advertiser pages, you can learn more about that advertiser before visiting their site or making a purchase.

Users can tap on an ad to learn more about the advertiser showing them the ad

In addition to learning about the ads and advertiser, users can more easily report an ad if they believe it violates one of our policies. When an ad is reported, a member of our team reviews it for compliance with our policies and will take it down if appropriate. Creating a safe experience is a top priority for us, and user feedback is an important part of how we do that.

Advertiser pages will launch in the coming months in the United States, and will roll out in phases to more countries in 2022. We will also continue to explore how to share additional data within advertiser pages over time.

Improving transparency for ads on Google

Enhanced ad disclosures build on our efforts to create a clear and intuitive experience for users who engage with ads on Google products. More than 30 million users interact with our ads transparency and control menus every day, and “About this ad” has received positive feedback on its streamlined experience. Users engage with our ads transparency and control tools on YouTube more than on any other Google product. To help our users make informed decisions online, no matter where they engage, we will roll out the “About this ad” feature to YouTube and Search in the coming months.

We're committed to creating a trustworthy Google ad experience, and enhanced ad disclosures represent the next step in that journey. We will continue to work towards helping our users have greater control and understanding over the ads they see.

Distroless Builds Are Now SLSA 2

A few months ago we announced that we had started signing all distroless images with cosign, which allows users to verify that they have the correct image before starting the build process. Signing our images was our first step towards fully securing the distroless supply chain. Since then, we’ve built even more accountability into our supply chain and are excited to announce that distroless builds have achieved SLSA 2. SLSA is a framework for increasing supply chain security, and Level 2 ensures that the build service is tamper-resistant.


This means that in addition to a signature, each distroless image now has an associated signed provenance. This provenance is an in-toto attestation and includes information about how each image was built, what command was run, and what build system was used. It also includes any special parameters that were passed in, the exact commit at which the images were built, and more. This provenance is a useful tool for builds that need to be audited in the future.

SLSA 2 Requirement             | Distroless
-------------------------------|--------------------------------------------------------------------------------
Source - Version controlled    | Source code in GitHub
Build - Scripted build         | Build script exists as a Tekton Pipeline, invoked as a Google Cloud Build step
Build - Build service          | All steps run on Kubernetes with Tekton
Provenance - Available         | Provenance is available in the rekor transparency log as an in-toto attestation
Provenance - Authenticated     | Provenance is signed with the distroless GCP KMS key
Provenance - Service generated | Provenance is generated by Tekton Chains from a Tekton TaskRun


Achieving SLSA 2 required some changes to the distroless build pipeline: we set up Tekton Pipelines and Tekton Chains in a GKE cluster to automate building images and generating provenance. Every time a pull request is merged to the distroless GitHub repo, a Tekton Pipeline is triggered. This Pipeline builds the distroless images, and Tekton Chains is responsible for generating signed provenance for each image. Tekton Chains stores the signed provenance alongside the image in an OCI registry and also stores a record of the provenance in the rekor transparency log.

Don't trust us?


You can try the build yourself. Because distroless builds are reproducible, all the information needed to replicate the build is in the provenance, and you or a trusted third party can rebuild the image and verify the build is correct by matching image digests.
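
As a simplified illustration of digest matching, the sketch below hashes a locally exported image tarball and compares it to a published digest. Note that real OCI image digests are computed over the image manifest rather than a tarball, so in practice you would compare the digests reported by your build tooling; the file name here is hypothetical.

import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file so large image tarballs don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()

# Usage (hypothetical local export of your rebuilt image):
#   sha256_of_file("rebuilt-base.tar") == "sha256:4f8aa0ab..."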


You can verify an attestation for a distroless image with cosign and the distroless public key:

$ cosign verify-attestation -key cosign.pub gcr.io/distroless/base@sha256:4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91

Verification for gcr.io/distroless/base@sha256:4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key
  - Any certificates were verified against the Fulcio roots.
...


And you can find the provenance for the image in the rekor transparency log with the rekor-cli tool. For example, you could find the provenance for the above image by using the image’s digest and running:

$ rekor-cli search --sha sha256:4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91
af7a9687d263504ccdb2759169c9903d8760775045c6e7554e365ec2bf29f6f8

$ rekor-cli get --uuid af7a9687d263504ccdb2759169c9903d8760775045c6e7554e365ec2bf29f6f8 --format json | jq -r .Attestation | base64 --decode | jq


{
  "_type": "distroless-provenance",
  "predicateType": "https://tekton.dev/chains/provenance",
  "subject": [
    { "name": "gcr.io/distroless/base", "digest": { "sha256": "703a4726aedc9ec7a7e32251087565246db117bb9a141a7993d1c4bb4036660d" } },
    { "name": "gcr.io/distroless/base", "digest": { "sha256": "d322ed16d530596c37eee3eb57a039677502aa71f0e4739b0272b1ebd8be9bce" } },
    { "name": "gcr.io/distroless/base", "digest": { "sha256": "2dfdd5bf591d0da3f67a25f3fc96d929b256d5be3e0af084db10952e5da2c661" } },
    { "name": "gcr.io/distroless/base", "digest": { "sha256": "4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91" } },
    { "name": "gcr.io/distroless/base", "digest": { "sha256": "dc0a793d83196a239abf3ba035b3d1a0c7a24184856c2649666e84bc82fc5980" } },
    { "name": "gcr.io/distroless/base-debian10", "digest": { "sha256": "2dfdd5bf591d0da3f67a25f3fc96d929b256d5be3e0af084db10952e5da2c661" } },
    { "name": "gcr.io/distroless/base-debian10", "digest": { "sha256": "703a4726aedc9ec7a7e32251087565246db117bb9a141a7993d1c4bb4036660d" } },
    { "name": "gcr.io/distroless/base-debian10", "digest": { "sha256": "4f8aa0aba190e375a5a53bb71a303c89d9734c817714aeaca9bb23b82135ed91" } },
    { "name": "gcr.io/distroless/base-debian10", "digest": { "sha256": "d322ed16d530596c37eee3eb57a039677502aa71f0e4739b0272b1ebd8be9bce" } },
    { "name": "gcr.io/distroless/base-debian10", "digest": { "sha256": "dc0a793d83196a239abf3ba035b3d1a0c7a24184856c2649666e84bc82fc5980" } },
    { "name": "gcr.io/distroless/base-debian11", "digest": { "sha256": "c9507268813f235b11e63a7ae01526b180c94858bd718d6b4746c9c0e8425f7a" } },
    { "name": "gcr.io/distroless/cc", "digest": { "sha256": "4af613acf571a1b86b1d3c50682caada0b82024e566c1c4c2fe485a70f3af47d" } },
    { "name": "gcr.io/distroless/cc", "digest": { "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb" } },
    { "name": "gcr.io/distroless/cc", "digest": { "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb" } },
    { "name": "gcr.io/distroless/cc-debian10", "digest": { "sha256": "4af613acf571a1b86b1d3c50682caada0b82024e566c1c4c2fe485a70f3af47d" } },
    { "name": "gcr.io/distroless/cc-debian10", "digest": { "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb" } },
    { "name": "gcr.io/distroless/cc-debian10", "digest": { "sha256": "2c4bb6b7236db0a55ec54ba8845e4031f5db2be957ac61867872bf42e56c4deb" } },
    { "name": "gcr.io/distroless/java", "digest": { "sha256": "deb41661be772c6256194eb1df6b526cc95a6f60e5f5b740dda2769b20778c51" } },
    { "name": "gcr.io/distroless/nodejs", "digest": { "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf" } },
    { "name": "gcr.io/distroless/nodejs", "digest": { "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf" } },
    { "name": "gcr.io/distroless/nodejs", "digest": { "sha256": "f106757268ab4e650b032e78df0372a35914ed346c219359b58b3d863ad9fb58" } },
    { "name": "gcr.io/distroless/nodejs-debian10", "digest": { "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf" } },
    { "name": "gcr.io/distroless/nodejs-debian10", "digest": { "sha256": "f106757268ab4e650b032e78df0372a35914ed346c219359b58b3d863ad9fb58" } },
    { "name": "gcr.io/distroless/nodejs-debian10", "digest": { "sha256": "927dd07e7373e1883469c95f4ecb31fe63c3acd104aac1655e15cfa9ae0899bf" } },
    { "name": "gcr.io/distroless/python3", "digest": { "sha256": "aa8a0358b2813e8b48a54c7504316c7dcea59d6ae50daa0228847de852c83878" } },
    { "name": "gcr.io/distroless/python3-debian10", "digest": { "sha256": "aa8a0358b2813e8b48a54c7504316c7dcea59d6ae50daa0228847de852c83878" } },
    { "name": "gcr.io/distroless/static", "digest": { "sha256": "9acfd1fdf62b26cbd4f3c31422cf1edf3b7b01a9ecee00a499ef8b7e3536914d" } },
    { "name": "gcr.io/distroless/static", "digest": { "sha256": "e50641dbb871f78831f9aa7ffa59ec8f44d4cc33ae4ee992c9f4b046040e97f2" } },
    { "name": "gcr.io/distroless/static-debian10", "digest": { "sha256": "9acfd1fdf62b26cbd4f3c31422cf1edf3b7b01a9ecee00a499ef8b7e3536914d" } },
    { "name": "gcr.io/distroless/static-debian10", "digest": { "sha256": "e50641dbb871f78831f9aa7ffa59ec8f44d4cc33ae4ee992c9f4b046040e97f2" } }
  ],
  "predicate": {
    "invocation": {
      "parameters": [
        "MANIFEST_SUBSECTION={string 0 []}",
        "CHAINS-GIT_COMMIT={string 976c1c9bc178ac0371d8888d69893145c3df09f0 []}",
        "CHAINS-GIT_URL={string https://github.com/GoogleContainerTools/distroless []}"
      ],
      "recipe_uri": "task://distroless-provenance",
      "event_id": "531c282f-806e-41e4-b3ad-b596c4283381",
      "builder.id": "tekton-chains"
    },
    "recipe": {
      "steps": [
        {
          "entryPoint": "#!/bin/sh\nset -ex\n\n# get the digests for a subset of images built, and store in the IMAGES result\ngo run provenance/provenance.go images $(params.MANIFEST_SUBSECTION) > $(results.IMAGES.path)\n",
          "arguments": null,
          "environment": {
            "container": "provenance",
            "image": "docker.io/library/golang@sha256:cb1a7482cb5cfc52527c5cdea5159419292360087d5249e3fe5472f3477be642"
          },
          "annotations": null
        }
      ]
    },
    "metadata": {
      "buildStartedOn": "2021-09-16T00:03:04Z",
      "buildFinishedOn": "2021-09-16T00:04:36Z"
    },
    "materials": [
      {
        "uri": "https://github.com/GoogleContainerTools/distroless",
        "digest": {
          "revision": "976c1c9bc178ac0371d8888d69893145c3df09f0"
        }
      }
    ]
  }
}



As you might guess, our next step is getting distroless to SLSA 3, which will require adding non-falsifiable provenance and isolated builds to the distroless supply chain. Stay tuned for more!
