
Our 2021 Ads Safety Report

User safety is at the top of our list when we make decisions about ads and monetized content on our platforms. In fact, thousands of Googlers work around the clock to prevent malicious use of our advertising services and help make them safer for people, businesses and publishers. We do this important work because an ad-supported internet means everyone can access essential information.

And as the digital world evolves, our policy development and enforcement strategies evolve with it — helping to prevent abuse while allowing businesses to reach new customers and grow. We’ve continued to invest in our policies, teams of experts and enforcement technology to stay ahead of potential threats. In 2021, we introduced a multi-strike system for repeat policy violations. We added or updated over 30 policies for advertisers and publishers, including a policy prohibiting claims that promote climate change denial and a certification for U.S.-based health insurance providers that only allows ads from government exchanges, first-party providers and licensed third-party brokers.

In 2021, we removed over 3.4 billion ads, restricted over 5.7 billion ads and suspended over 5.6 million advertiser accounts. We also blocked or restricted ads from serving on 1.7 billion publisher pages, and took broader site-level enforcement action on approximately 63,000 publisher sites.

[Animated graphic: “3.4B bad ads stopped in 2021”]

Check out the entire 2021 Ads Safety Report for enforcement data, and read on for a few of the highlights.

Responding to the war in Ukraine

Though the report only covers 2021, we also wanted to share an update on our response to the war in Ukraine — given it’s top of mind for so many around the world, including our enforcement teams. We acted quickly to institute a sensitive event, prohibiting ads from profiting from or exploiting the situation. This is in addition to our longstanding policies, which prohibit content that incites violence or denies the occurrence of tragic events from running as ads or monetizing using our services.

We’ve also taken several other steps to pause the majority of our commercial activities in Russia across our products — including pausing ads from showing in Russia and ads from Russian-based advertisers, and pausing monetization of Russian state-funded media across our platforms.

So far, we’ve blocked over eight million ads related to the war in Ukraine under our sensitive event policy and separately removed ads from more than 60 state-funded media sites across our platforms.

Suspending triple the number of advertiser accounts

As we shared in our 2020 report, we’ve seen an increase in fraudulent activity during the pandemic. In 2021, we continued to see bad actors operate with more sophistication and at a greater scale, using a variety of tactics to evade our detection. This included creating thousands of accounts simultaneously and using techniques like cloaking and text manipulation to show our reviewers and systems different ad content than they’d show a user — making that content more difficult to detect and enforce against.

We’re continuing to take a multi-pronged approach to combat this behavior, like verifying advertisers’ identities and identifying coordinated activity between accounts using signals in our network. We are actively verifying advertisers in over 180 countries. And if an advertiser fails to complete our verification program when prompted, the account is automatically suspended.

This combination of efforts has allowed us to match the scale of our adversaries and more efficiently remove multiple accounts associated with a single bad actor at once. As a result, between 2020 and 2021, we tripled the number of account-level suspensions for advertisers.

Preventing unreliable claims from monetizing and serving in ads

In 2021, we doubled down on our enforcement against unreliable claims. We blocked ads from running on more than 500,000 pages that violated our policies against harmful health claims related to COVID-19 and demonstrably false claims that could undermine trust and participation in elections. Late last year, we also launched a new Unreliable Claims policy on climate change, which prohibits content that contradicts well-established scientific consensus around its existence and causes.

We’ve stayed focused on preventing abuse in ads related to COVID-19, which was especially important in 2021 for claims related to vaccines, testing and price-gouging for critical supplies like masks. Since the beginning of the pandemic, we’ve blocked over 106 million ads related to COVID-19. And we supported local NGOs and governments with $250 million in Ad Grants to help connect people to accurate vaccine information.

Introducing new brand safety tools and resources for advertisers and publishers

Maintaining advertiser brand safety remains a top priority. Last year, we added a new feature to our advertiser controls that allows brands to upload dynamic exclusion lists that can be automatically updated and maintained by trusted third parties. This helps advertisers get access to the resources and expertise of trusted organizations to better protect their brands and strengthen their campaigns.
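
Conceptually, an exclusion list is just a flat file of placements where a brand never wants its ads to appear; a trusted third party hosts and refreshes the file, and the linked advertiser account picks up changes automatically. The format below is a hypothetical illustration, not the documented upload schema:

    # hypothetical-exclusion-list.txt — placements a brand wants to avoid
    lowquality-example.com
    news-example.com/unmoderated-comments
    another-example.org

Because the list is maintained externally and re-ingested on a schedule, advertisers don’t have to hand-edit exclusions each time a new problematic site appears.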

We know that advertisers care about all the content on a page where their ads may run, including user-generated content (UGC) like comment sections. That’s why we hold publishers responsible for moderating these features. We’ve released several resources in the past year to help them do that — including an infographic and blog post, troubleshooters to solve UGC issues and a video tutorial.

In addition to these resources, we made targeted improvements to the publisher approval process that helped us better detect and block bad actors before they could even create accounts. As a result, we reduced the number of sites that needed site-level action compared to previous years.

Looking ahead to 2022

A trustworthy advertising experience is critical to getting helpful and useful information to people around the world. And this year, we’ll continue to address areas of abuse across our platforms and network to protect users and help credible advertisers and publishers. Providing more transparency and control over the ads people see is a big part of that goal. Our new “About this ad” feature is rolling out globally to help people understand why an ad was shown and which advertiser ran it. They can also report an ad if they believe it violates one of our policies or block an ad they aren’t interested in.

We believe this combination of work will help to create a safer experience for users everywhere. You can find ongoing updates to our policies and controls in our Help Center.

Our annual Ads Safety Report

At Google, we actively look for ways to ensure a safe user experience when making decisions about the ads people see and the content that can be monetized on our platforms. Developing policies in these areas and consistently enforcing them is one of the primary ways we keep people safe and preserve trust in the ads ecosystem. 

2021 marks one decade of releasing our annual Ads Safety Report, which highlights the work we do to prevent malicious use of our ads platforms. Providing visibility on the ways we’re preventing policy violations in the ads ecosystem has long been a priority — and this year we’re sharing more data than ever before. 

Our Ads Safety Report is just one way we provide transparency to people about how advertising works on our platforms. Last spring, we also introduced our advertiser identity verification program. We are currently verifying advertisers in more than 20 countries and have started to share the advertiser name and location in our About this ad feature, so that people know who is behind a specific ad and can make more informed decisions.

Enforcement at scale

In 2020, our policies and enforcement were put to the test as we collectively navigated a global pandemic, multiple elections around the world and the continued fight against bad actors looking for new ways to take advantage of people online. Thousands of Googlers worked around the clock to deliver a safe experience for users, creators, publishers and advertisers. We added or updated more than 40 policies for advertisers and publishers. We also blocked or removed approximately 3.1 billion ads for violating our policies and restricted an additional 6.4 billion ads. 

Our enforcement is not one-size-fits-all, and this is the first year we’re sharing information on ad restrictions, a core part of our overall strategy. Restricting ads allows us to tailor our approach based on geography, local laws and our certification programs, so that approved ads only show where they are appropriate, regulated and legal. For example, we require online pharmacies to complete a certification program, and once certified, we only show their ads in specific countries where the online sale of prescription drugs is allowed. Over the past several years, we’ve seen an increase in country-specific ad regulations, and restricting ads allows us to help advertisers follow these requirements regionally with minimal impact on their broader campaigns.

We also continued to invest in our automated detection technology to effectively scan the web for publisher policy compliance at scale. Due to this investment, along with several new policies, we vastly increased our enforcement and removed ads from 1.3 billion publisher pages in 2020, up from 21 million in 2019. We also stopped ads from serving on over 1.6 million publisher sites with pervasive or egregious violations.

Remaining nimble when faced with new threats

As the number of COVID-19 cases rose around the world last January, we enforced our sensitive events policy to prevent abuses like price-gouging on in-demand products such as hand sanitizer, masks and paper goods, as well as ads promoting false cures. As we learned more about the virus and health organizations issued new guidance, we evolved our enforcement strategy to start allowing medical providers, health organizations, local governments and trusted businesses to surface critical updates and authoritative content, while still preventing opportunistic abuse. Additionally, as claims and conspiracies about the coronavirus’s origin and spread circulated online, we launched a new policy to prohibit both ads and monetized content about COVID-19 or other global health emergencies that contradict scientific consensus.

In total, we blocked over 99 million COVID-related ads from serving throughout the year, including ads for miracle cures, for N95 masks during supply shortages and, most recently, for fake vaccine doses. We continue to be nimble, tracking bad actors’ behavior and learning from it. In doing so, we’re able to better prepare for future scams and claims that may arise.

Fighting the newest forms of fraud and scams

Often when we experience a major event like the pandemic, bad actors look for ways to take advantage of people online. We saw an uptick in opportunistic advertising and fraudulent behavior from actors looking to mislead users last year. Increasingly, we’ve seen them use cloaking to evade our detection, promote non-existent virtual businesses or run ads for phone-based scams that lure unsuspecting consumers off our platforms with the aim of defrauding them.

In 2020 we tackled this adversarial behavior in a few key ways: 

  • Introduced multiple new policies and programs, including our advertiser identity verification program and business operations verification program.

  • Invested in technology to better detect coordinated adversarial behavior, allowing us to connect the dots across accounts and suspend multiple bad actors at once.

  • Improved our automated detection technology and human review processes based on network signals, previous account activity, behavior patterns and user feedback.

The number of ad accounts we disabled for policy violations increased by 70% from 1 million to over 1.7 million. We also blocked or removed over 867 million ads for attempting to evade our detection systems, including cloaking, and an additional 101 million ads for violating our misrepresentation policies. That’s a total of over 968 million ads.   

Protecting elections around the world 

When it comes to elections around the world, ads help voters access authoritative information about the candidates and voting processes. Over the past few years, we introduced strict policies and restrictions around who can run election-related advertising on our platform and the ways they can target ads; we launched comprehensive political ad libraries in the U.S., the U.K., the European Union, India, Israel, Taiwan, Australia and New Zealand; and we worked diligently with our enforcement teams around the world to protect our platforms from abuse. Globally, we continue to expand our verification program and verified more than 5,400 additional election advertisers in 2020. In the U.S., as it became clear the outcome of the presidential election would not be determined immediately, we concluded that the U.S. election fell under our sensitive events policy, and enforced a U.S. political ads pause starting after the polls closed and continuing through early December. During that time, we temporarily paused more than five million ads and blocked ads on over three billion Search queries referencing the election, the candidates or its outcome. We made this decision to limit the potential for ads to amplify confusion in the post-election period.

Demonetizing hate and violence

Last year, news publishers played a critical role in keeping people informed, prepared and safe. We’re proud that digital advertising, including the tools we offer to connect advertisers and publishers, supports this content. We have policies in place to protect both brands and users.

In 2017, we developed more granular means of reviewing sites at the page level, including user-generated comments, to allow publishers to continue to operate their broader sites while protecting advertisers from negative placements by stopping persistent violations. In the years since introducing page-level action, we’ve continued to invest in our automated technology, and it was crucial in a year in which we saw an increase in hate speech and calls to violence online. This investment helped us to prevent harmful web content from monetizing. We took action on nearly 168 million pages under our dangerous and derogatory policy.

Continuing this work in 2021 

We know that when we make decisions through the lens of user safety, it will benefit the broader ecosystem. Preserving trust for advertisers and publishers helps their businesses succeed in the long term. In the upcoming year, we will continue to invest in policies, our team of experts and enforcement technology to stay ahead of potential threats. We also remain steadfast on our path to scale our verification programs around the world in order to increase transparency and make more information about the ad experience universally available.

Upcoming update to housing, employment, and credit advertising policies

Our Google Ads policies are written to protect users, advertisers, and publishers, and prohibit advertisers from unlawful behavior like discriminating against users. We also give users control over the kinds of ads they see, including the ability to opt out of seeing any personalized ads. Our ads policies apply to all the ads we serve, and if we find ads that violate our policies, we take action.

For over a decade, we’ve also had personalized advertising policies that prohibit advertisers from targeting users on the basis of sensitive categories related to their identity, beliefs, sexuality, or personal hardships. This means we don’t allow advertisers to target ads based on categories such as race, religion, ethnicity, or sexual orientation, to name a few. We regularly evaluate and evolve our policies to ensure they are protecting users from behaviors like unlawful discrimination. 

To further improve access to housing, employment, and credit opportunities, we are introducing a new personalized advertising policy for certain types of ads. This policy will prohibit impacted employment, housing, and credit advertisers from targeting or excluding ads based on gender, age, parental status, marital status, or ZIP Code, in addition to our longstanding policies prohibiting personalization based on sensitive categories like race, religion, ethnicity, sexual orientation, national origin or disability. While the changing circumstances of the coronavirus pandemic and business continuity issues for many advertisers make precise timelines difficult, we plan to roll out this update in the U.S. and Canada as soon as possible and, in any event, by the end of this year. We will be providing advertisers with more information about how these changes may impact them in the coming weeks.

We’ve been working closely with the U.S. Department of Housing and Urban Development (HUD) on these changes for some time, and we appreciate their guidance in helping us make progress on these important issues. As part of our effort we’ll provide housing advertisers with additional information about fair housing to help ensure they are acting in ways that support access to housing opportunities. We will also continue to work with HUD, civil rights and housing experts, and the broader advertising industry to address concerns around discrimination in ad targeting. These changes complement our work with businesses, governments, and community organizations to distribute $1 billion we committed for Bay Area housing. In the first six months of this commitment, we’ve helped to create hundreds of new affordable housing units in the Bay Area, including an investment in a development focused on affordable and inclusive housing for adults with disabilities.

Google is committed to working with the broader advertising ecosystem to help set high standards for online advertising, and we will continue to strive to set policies that improve inclusion and access for users.

Source: Google Ads


Stopping bad ads to protect users

People trust Google when they’re looking for information, and we’re committed to ensuring they can trust the ads they see on our platforms, too. This commitment is especially important in times of uncertainty, such as the past few months as the world has confronted COVID-19. 


Responding to COVID-19

Since the beginning of the COVID-19 outbreak, we’ve closely monitored advertiser behavior to protect users from ads looking to take advantage of the crisis. These often come from sophisticated actors attempting to evade our enforcement systems with advanced tactics. For example, as the situation evolved, we saw a sharp spike in fraudulent ads for in-demand products like face masks. These ads promoted products listed significantly above market price, misrepresented the product quality to trick people into making a purchase or were placed by merchants who never fulfilled the orders. 

We have a dedicated COVID-19 task force that’s been working around the clock. They have built new detection technology and have also improved our existing enforcement systems to stop bad actors. These concerted efforts are working. We’ve blocked and removed tens of millions of coronavirus-related ads over the past few months for policy violations including price-gouging, capitalizing on global medical supply shortages, making misleading claims about cures and promoting illegitimate unemployment benefits.

Simultaneously, the coronavirus has become an important and enduring topic in everyday conversation and we’re working on ways to allow advertisers across industries to share relevant updates with their audiences. Over the past several weeks, for example, we’ve specifically helped NGOs, governments, hospitals and healthcare providers run PSAs. We continue to take a measured approach to adjusting our enforcement to ensure that we are protecting users while prioritizing critical information from trusted advertisers.


Preserving the integrity of the ecosystem

Preserving the integrity of the ads on our platforms, as we’re doing during the COVID-19 outbreak, is a continuation of the work we do every day to minimize content that violates our policies and stop malicious actors. We have thousands of people working across our teams to make sure we’re protecting our users and enabling a safe ecosystem for advertisers and publishers, and each year we share a summary of the work we’ve done.

In 2019, we blocked and removed 2.7 billion bad ads—that’s more than 5,000 bad ads per minute. We also suspended nearly 1 million advertiser accounts for policy violations. On the publisher side, we terminated over 1.2 million accounts and removed ads from over 21 million web pages in our publisher network for violating our policies. Terminating accounts—not just removing an individual ad or page—is an especially effective enforcement tool that we use if advertisers or publishers engage in egregious policy violations or have a history of violating policy.


Improving enforcement against phishing and "trick-to-click" ads 

If we find specific categories of ads are more prone to abuse, we prioritize our resources to prevent bad actors from taking advantage of users. One of the areas that we’ve become familiar with is phishing, a common practice used by deceptive players to collect personal information from users under false pretenses. For example, in 2019 we saw more bad actors targeting people seeking to renew their passport. These ads mimicked real ads for renewal sites but their actual intent was to get users to provide sensitive information such as their social security or credit card number. Another common area of abuse is “trick-to-click” ads—which are designed to trick people into interacting with them by using prominent links (for example, “click here”) often designed to look like computer or mobile phone system warnings.

Because we’ve come to expect certain recurring categories like phishing and “trick-to-click,” we’re able to fight them more effectively. In 2019, we assembled an internal team to track the patterns and signals of these types of fraudulent advertisers so we could identify and remove their ads faster. As a result, we saw nearly a 50 percent decrease in bad ads served in both categories from the previous year. In total, we blocked more than 35 million phishing ads and 19 million “trick-to-click” ads in 2019.


Adapting our policies and technology in real time

Certain industries are particularly susceptible to malicious behavior. For example, as more consumers turn to online financial services over brick and mortar locations, we identified an increase in personal loan ads with misleading information on lending terms. To combat this, we broadened our policy to only allow loan-related ads to run if the advertiser clearly states all fees, risks and benefits on their website or app so that users can make informed decisions. This updated policy enabled us to take down 9.6 million of these types of bad ads in 2019, doubling our number from 2018. 

At the end of last year, we also introduced a certification program for debt management advertisers in select countries that offer to negotiate with creditors to remedy debt or credit problems. We know users looking for help with this are often at their most vulnerable and we want to create a safe experience for them. This new program ensures we’re only allowing advertisers who are registered by the local regulatory agencies to serve ads for this type of service. We’re continuing to explore ways to scale this program to more countries to match local finance regulations. 


Looking forward

Maintaining trust in the digital advertising ecosystem is a top priority for Google. And with global health concerns now top of mind for everyone, preparing for and responding to attempts to take advantage of our users is as important as it has ever been. We know abuse tactics will continue evolving and new societal issues will arise. We'll continue to make sure we’re protecting our users, advertisers and publishers from bad actors across our advertising platforms. 

Source: Google Ads


Protecting the mobile app ecosystem

Mobile apps have transformed the way people engage with the world. From gaming, to ride sharing, to messaging, apps enrich the lives of billions — and are often funded by ads. Ads help make content available to everyone, creating a more diverse ecosystem of apps for people to enjoy. But one of the biggest threats to ad-supported content is ad fraud, a pervasive issue for users, developers, and advertisers alike. 

For over 20 years, Google has been heavily investing in creating a healthier ads ecosystem that generates value fairly, for everyone involved. In 2019, we delivered on key initiatives to protect advertisers, publishers, and users. 

  • Protected advertiser spend by reducing ad fraud: Google blacklisted numerous bad actors found to be generating large-scale invalid traffic and committing ad fraud in violation of Google policies. In 2019, Google removed tens of thousands of apps and developers that violated our policies from both AdMob and Play. Taking corrective action was an imperative step in protecting advertiser dollars, leveling the playing field for legitimate publishers, and removing bad app experiences for users.
  • Protected publisher revenue from app spoofing: Bad actors may attempt to disguise their inventory as a high-value app to unfairly claim the associated ad revenue. To help publishers publicly declare their authorized inventory and combat this issue, we launched our app-ads.txt solution in August 2019, and in just four months the majority of Google’s app ad inventory became app-ads.txt-protected (see the illustrative file after this list).
  • Improved safety of family-friendly content for users: In addition to fighting ad fraud, the Google Play and Ads teams both announced new steps to help ensure that ad content served in apps for children is appropriate for its intended users. The Play team updated its Families Policies and the requirements for inclusion in its Designed for Families (DFF) program to better ensure that apps for children are appropriate. AdMob now offers a maximum ad content rating to give publishers more control over the ad content shown to their users (a brief SDK sketch follows below).
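
For context, app-ads.txt works like ads.txt on the web: a developer publishes a plain-text file at the root of the developer website listed in their app store listing, declaring which ad systems are authorized to sell their inventory. Here is a minimal illustrative file (the domain and publisher ID are placeholders):

    # Served from https://example-developer.com/app-ads.txt (hypothetical developer site)
    # Each line: <ad system domain>, <publisher account ID>, <DIRECT|RESELLER>, <certification authority ID>
    google.com, pub-0000000000000000, DIRECT, f08c47fec0942fa0

Buyers can then check that the publisher ID in a bid request for the app matches an entry in this file, which is what makes spoofed inventory straightforward to reject.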
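And as a minimal sketch of the maximum ad content rating control mentioned above: in addition to the AdMob console setting, an Android publisher using the Google Mobile Ads SDK can cap the rating of returned ads in code (the class and method names here are ours for illustration):

    import com.google.android.gms.ads.MobileAds;
    import com.google.android.gms.ads.RequestConfiguration;

    public final class AdsConfig {
        /** Caps returned ad content at rating "G" (general audiences). */
        public static void applyFamilySafeAdRating() {
            // Other supported ceilings include MAX_AD_CONTENT_RATING_PG, _T and _MA.
            RequestConfiguration configuration = MobileAds.getRequestConfiguration()
                    .toBuilder()
                    .setMaxAdContentRating(RequestConfiguration.MAX_AD_CONTENT_RATING_G)
                    .build();
            MobileAds.setRequestConfiguration(configuration);
        }
    }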
We know that ensuring a safe and high quality app experience has never been more important to the success of your business. That’s why multiple teams across Google are coming together to further secure and protect the ads ecosystem for our most important audiences. As a preview, here are three key areas we’re focused on; you can expect to hear more from us in the months ahead:

  • Double down on safeguarding advertiser spend from invalid traffic: One area of focus for the Ads team is developing new ways to detect disruptive ads shown outside of the app — for example, out-of-context ads from an app not currently in use. This behavior violates Google policies, so Google removes these apps from both AdMob and Play — in fact, a recent enforcement sweep resulted in the joint removal of nearly 600 apps. Our investigations are ongoing and when we find violations we will continue to take action.
  • Help app publishers towards compliance with industry regulations: As industry regulations evolve, Google is providing tools for app publishers to manage their compliance strategy, maintain user trust, and minimize the risk of losing revenue.
  • Give users more control over their app experiences: Android is making fundamental platform changes to minimize interruptions in app experiences and keep the user more in control of what's shown on their screen. 

We’re excited to build on this momentum in app ad safety and the peace of mind it brings. Stay tuned for more updates over the next year on how Google is protecting its ad systems and improving ad traffic quality.

An update on our political ads policy

We’re proud that people around the world use Google to find relevant information about elections and that candidates use Google and search ads to raise small-dollar donations that help fund their campaigns. We’re also committed to a wide range of efforts to help protect campaigns, surface authoritative election news, and protect elections from foreign interference.

But given recent concerns and debates about political advertising, and the importance of shared trust in the democratic process, we want to improve voters' confidence in the political ads they may see on our ad platforms. So we’re making a few changes to how we handle political ads on our platforms globally. Regardless of the cost or the impact on spending on our platforms, we believe these changes will help promote confidence in digital political advertising and trust in electoral processes worldwide.

Our ads platforms today

Google’s ad platforms are distinctive in a number of important ways: 

  • The main formats we offer political advertisers are search ads (which appear on Google in response to a search for a particular topic or candidate), YouTube ads (which appear on YouTube videos and generate revenue for those creators), and display ads (which appear on websites and generate revenue for our publishing partners). 

  • We provide a publicly accessible, searchable, and downloadable transparency report of election ad content and spending on our platforms, going beyond what’s offered by most other advertising media.  

  • We’ve never allowed granular microtargeting of political ads on our platforms. In many countries, the targeting of political advertising is regulated and we comply with those laws. In the U.S., we have offered basic political targeting capabilities to verified advertisers, such as serving ads based on public voter records and general political affiliations (left-leaning, right-leaning, and independent). 

Taking a new approach to targeting election ads

While we've never offered granular microtargeting of election ads, we believe there’s more we can do to promote the visibility of election ads. That’s why we’re limiting election ads audience targeting to the following general categories: age, gender, and general location (postal code level). Political advertisers can, of course, continue to do contextual targeting, such as serving ads to people reading or watching a story about, say, the economy. This will align our approach to election ads with long-established practices in media such as TV, radio, and print, and result in election ads being more widely seen and available for public discussion. (Of course, some media, like direct mail, continues to be targeted more granularly.) It will take some time to implement these changes, and we will begin enforcing the new approach in the U.K. within a week (ahead of the General Election), in the EU by the end of the year, and in the rest of the world starting on January 6, 2020.

Clarifying our ads policies

Whether you’re running for office or selling office furniture, we apply the same ads policies to everyone; there are no carve-outs. It’s against our policies for any advertiser to make a false claim—whether it's a claim about the price of a chair or a claim that you can vote by text message, that election day is postponed, or that a candidate has died. To make this more explicit, we’re clarifying our ads policies and adding examples to show how our policies prohibit things like “deep fakes” (doctored and manipulated media), misleading claims about the census process, and ads or destinations making demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process. Of course, we recognize that robust political dialogue is an important part of democracy, and no one can sensibly adjudicate every political claim, counterclaim, and insinuation. So we expect that the number of political ads on which we take action will be very limited—but we will continue to do so for clear violations.

Providing increased transparency

We want the ads we serve to be transparent and widely available so that many voices can debate issues openly. We already offer election advertising transparency in India, in the EU, and for federal U.S. election ads. We provide both in-ad disclosures and a transparency report that shows the actual content of the ads themselves, who paid for them, how much they spent, how many people saw them, and how they were targeted. Starting on December 3, 2019, we’re expanding the coverage of our election advertising transparency to include U.S. state-level candidates and officeholders, ballot measures, and ads that mention federal or state political parties, so that all of those ads will now be searchable and viewable as well. 

We’re also looking at ways to bring additional transparency to the ads we serve and we’ll have additional details to share in the coming months. We look forward to continuing our work in this important area.

Simplifying our content policies for publishers

One of our top priorities is to sustain a healthy digital advertising ecosystem, one that works for everyone: users, advertisers and publishers. On a daily basis, teams of Google engineers, policy experts, and product managers combat and stop bad actors. Just last year, we removed 734,000 publishers and app developers from our ad network and ads from nearly 28 million pages that violated our publisher policies.

But we’re not just stopping bad actors. Just as critical to our mission is the work we do every day to help good publishers in our network succeed. One consistent piece of feedback we’ve heard from our publishers is that they want us to further simplify our policies, across products, so they are easier to understand and follow. That’s why we'll be simplifying the way our content policies are presented to publishers, and standardizing content policies across our publisher products.

A simplified publisher experience

In September, we’ll update the way our publisher content policies are presented with a clear outline of the types of content where advertising is not allowed or will be restricted.

Our Google Publisher Policies will outline the types of content that are not allowed to show ads through any of our publisher products. This includes policies against illegal content, dangerous or derogatory content, and sexually explicit content, among others.

Our Google Publisher Restrictions will detail the types of content, such as alcohol or tobacco, that don’t violate policy but may not be appealing to all advertisers. Publishers will not receive a policy violation for trying to monetize this content, but only the advertisers and advertising products that opt in to this kind of content will bid on it. As a result, Google Ads will not appear on this content, and it will receive less advertising than non-restricted content.

The Google Publisher Policies and Google Publisher Restrictions will apply to all publishers, regardless of the products they use—AdSense, AdMob or Ad Manager.

These changes are the next step in our ongoing efforts to make it easier for publishers to navigate our policies so their businesses can continue to thrive with the help of our publisher products.


Source: Inside AdSense

