Tag Archives: safety

A New Approach to Real-Money Games on Google Play

Posted by Karan Gambhir – Director, Global Trust and Safety Partnerships

As a platform, we strive to help developers responsibly build new businesses and reach wider audiences across a variety of content types and genres. In response to strong demand, in 2021 we began onboarding a wider range of real-money gaming (RMG) apps in markets with pre-existing licensing frameworks. Since then, this app category has continued to flourish with developers creating new RMG experiences for mobile.

To ensure Google Play keeps up with the pace of developer innovation, while promoting user safety, we’ve since conducted several pilot programs to determine how to support more RMG operators and game types. For example, many developers in India were eager to bring RMG apps to more Android users, so we launched a pilot program, starting with Rummy and Daily Fantasy Sports (DFS), to understand the best way to support their businesses.

Based on the learnings from the pilots and positive feedback from users and developers, Google Play will begin supporting more RMG apps this year, including game types and operators not covered by an existing licensing framework. We’ll launch this expanded RMG support in June to developers for their users in India, Mexico, and Brazil, and plan to expand to users in more countries in the future.

We’re pleased that this new approach will provide new business opportunities to developers globally while continuing to prioritize user safety. It also enables developers currently participating in RMG pilots in India and Mexico to continue offering their apps on Play.

    • India pilot: For developers in the Google Play Pilot Program for distributing DFS and Rummy apps to users in India, we are extending the grace period for pilot apps to remain on Google Play until June 30, 2024, when the new policy will take effect. After that time, developers can distribute RMG apps on Google Play to users in India, beyond DFS and Rummy, in compliance with local laws and our updated policy.
    • Mexico pilot: For developers in the Google Play Pilot Program for DFS in Mexico, the pilot will end as scheduled on June 30, 2024, at which point developers can distribute RMG apps on Google Play to users in Mexico, beyond DFS, in compliance with local laws and our updated policy.

Google Play’s existing developer policies supporting user safety, such as requiring age-gating to limit RMG experiences to adults and requiring developers to use geo-gating to offer RMG apps only where legal, remain unchanged, and we’ll continue to strengthen them. In addition, Google Play will continue other key user safety and transparency efforts, such as our expanded developer verification mechanisms.
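As an illustration of how these two gates combine — using placeholder values for the minimum age and supported markets, not actual policy parameters — an RMG eligibility check might look like this minimal sketch:

```python
from datetime import date

# Hypothetical illustration of age-gating plus geo-gating for an RMG app.
# The country list and minimum age below are placeholders, not official
# Google Play policy values; consult the actual policy and local law.
SUPPORTED_COUNTRIES = {"IN", "MX", "BR"}
MINIMUM_AGE = 18

def rmg_eligible(birthdate: date, country_code: str, today: date) -> bool:
    """Return True only if the user is an adult in a supported market."""
    # Compute age, accounting for whether the birthday has passed this year.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= MINIMUM_AGE and country_code in SUPPORTED_COUNTRIES
```

Both checks must pass: an adult in an unsupported market, or a minor in a supported one, is rejected.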

With this policy update, we will also be evolving our service fee model for RMG to reflect the value Google Play provides and to help sustain the Android and Play ecosystems. We are working closely with developers to ensure our new approach reflects the unique economics and various developer earning models of this industry. We will have more to share in the coming months on our new policy and future expansion plans.

For developers already involved in the real-money gaming space, or those looking to expand their involvement, we hope this helps you prepare for the upcoming policy change. As Google Play evolves our support of RMG around the world, we look forward to helping you continue to delight users, grow your businesses, and launch new game types in a safe way.

Giving Users More Transparency and Control Over Account Data

Posted by Bethel Otuteye, Senior Director, Product Management, Android App Safety

Google Play has launched a number of recent initiatives to help developers build consumer trust by showcasing their apps' privacy and security practices in a way that is simple and easy to understand. Today, we’re building on this work with a new data deletion policy that aims to empower users with greater clarity and control over their in-app data.

For apps that enable app account creation, developers will soon need to provide an option to initiate account and data deletion from within the app and online. This web requirement, which you will link in your Data safety form, is especially important so that a user can request account and data deletion without having to reinstall an app.

While Play’s Data safety section already lets developers highlight their data deletion options, we know that users want an easier and more consistent way to request them. By creating a more intuitive experience with this policy, we hope to better educate our shared users on the data controls available to them and create greater trust in your apps and in Google Play more broadly.

As the new policy states, when you fulfill a request to delete an account, you must also delete the data associated with that account. The feature also gives developers a way to provide more choice: users who may not want to delete their account entirely can choose to delete other data only where applicable (such as activity history, images, or videos). For developers that need to retain certain data for legitimate reasons such as security, fraud prevention, or regulatory compliance, you must clearly disclose those data retention practices.

Moving image of a user accessing account deletion from a mobile device.
Note: Images are examples and subject to change
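As a concrete sketch of the retention carve-out described above — assuming a hypothetical in-memory store and made-up data category names, not any real Play API — an account-deletion handler might delete everything except the disclosed retention categories:

```python
# Illustrative sketch only: a deletion handler that removes the account
# and its data, retaining only categories disclosed for legitimate
# purposes (e.g. fraud prevention). The store layout and the category
# name "fraud_signals" are hypothetical.
def delete_account(store: dict, user_id: str,
                   retained_categories=("fraud_signals",)) -> dict:
    """Delete the account and its data; return only retained records."""
    user_data = store.pop(user_id, {})  # removes the account entry itself
    # Everything outside the disclosed retention categories is discarded.
    return {k: v for k, v in user_data.items() if k in retained_categories}
```

The key point the policy makes is mirrored here: deleting the account entry alone is not enough; associated data must go with it, and anything retained must map to a disclosed purpose.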

While we’re excited about the greater control this will give people over their data, we understand it will take time for developers to prepare, especially those without an existing deletion functionality or web presence, which is why we’re sharing information and resources today.

As a first step, we’re asking developers to submit answers to the new data deletion questions in your app’s Data safety form by December 7. Early next year, Google Play users will begin to see these changes reflected in your app’s store listing, including the refreshed data deletion badge in the Data safety section and the new Data deletion area.

Developers who need more time to comply with the new policy can file for an extension in Play Console, which extends their deadline to May 31, 2024.

For more information on data deletion and other policy changes announced today:

As always, thank you for your continued partnership in making Google Play a safe and trustworthy platform for everyone.

Keeping Android and Google Play safe with our key 2023 initiatives

Posted by Bethel Otuteye, Senior Director, Product Management, Android App Safety

It’s our top priority to keep Android and Google Play safe for developers to build successful businesses and provide quality apps and games to billions of users around the world. Over the past years, we’ve continued to share more tools to help protect your apps, evolve our policies to help keep people and their families safe and secure, and collaborate with you to build more private advertising technology.

We know it can be difficult to keep up with how quickly the privacy and security landscape evolves. So, we’ve been sharing more product and policy support, frequent updates about our work, and advance notice about changes. As we did last year, we’re sharing a preview of some of our key priorities that we’re excited to collaborate with you on, on behalf of our shared users.

What we look forward to this year

Building a more privacy-friendly approach to advertising

Last year, we announced the Privacy Sandbox on Android, an industry-wide initiative to raise the bar for user privacy and ensure continued access to free content and services. Building on our web efforts, we’re developing solutions for digital advertising that limit user data sharing and don't rely on cross-app identifiers. We’re working closely with the industry to gather feedback and test these new technologies.

Now, we’re entering the next phase of this initiative: Rolling out the first Beta for the Privacy Sandbox on Android to a small percentage of Android devices. With the Beta, users and developers will be able to experience and evaluate these new solutions in the real world. See our developer guidance on how to participate in the Beta and follow our Privacy Sandbox blog for regular updates. We’ll continue to work in collaboration with developers, publishers, regulators and more as we navigate the transition to a more private mobile ecosystem.

Giving people more control over their data

Developers want to build consumer trust by showcasing responsible data practices in a way that’s simple and easy to understand. Over the past few years, we’ve helped developers provide more transparency around if and how they collect, share, and protect user data. This year, we’ll continue improving Google Play’s Data safety section with new features and policies that aim to give people more clarity and control around deletion practices.

You can also enhance your users’ safety by reducing the permissions you request for accessing users’ data. Your app can often leverage privacy-preserving methods for fulfilling its use case. For example, you can use the photo picker intent to allow users to select individual photos to share with your app rather than requesting access to all the photos on their device through runtime permissions. You can also start testing privacy, security, and transparency enhancements in our Android 14 Developer Preview 1. Stay tuned to our Android 14 and Google Play policy updates, as we’ll share more soon.

Protecting your apps from abuse and attacks

Developers have told us that they want more help protecting their business, users, and IP. So, we’ve continued enhancing Play Integrity API and automatic integrity protection to help you better detect and prevent risks, and strengthen your anti-abuse strategy. Developers who use these products have seen an average reduction of over 50% in unauthorized access to their apps and games. Get started today with the Play Integrity API. And, stay tuned for some highly-requested feature updates to integrity products and expanded access to automatic integrity protection.

Helping you navigate SDKs

Developers have shared that they want more help deciding which SDKs are reliable and safe to use. So, we’ve created ways for SDK providers to directly message you through Play Console and Android Studio about critical issues for specific SDK versions and how to fix SDK-related crashes. We’ve also launched Google Play SDK Index to give you insights and usage data for over 100 of the most popular commercial SDKs on Google Play. Soon, we’ll share even more information about sensitive permissions an SDK may use and whether specific SDK versions may violate Google Play policy. By partnering with SDK providers to build safer SDKs and giving you greater insight, we hope to help you and your users avoid disruptions and exposure to risks.

Enhancing protections for kids and families

We’re proud that together with developers, we have made Google Play a trusted destination for families to find educational and delightful experiences for kids. Over the past years, we’ve launched new features, expanded our programs, and evolved our policies to improve app experience and strengthen privacy and security protections. This year, you’ll continue to see improved ways for Google Play to help families discover great apps and more policy updates to protect kids’ safety. Stay updated through our policy email updates and PolicyBytes videos.

Boosting responsible data collection and use

We continue to emphasize that developers and apps should only collect and use data that’s required for their apps to function and provide quality user experiences. This year, you’ll continue to see new permission and policy requirements. Stay updated through our policy email updates and PolicyBytes videos.
 
Fostering developer innovation, while keeping users safe

As a platform, we’re always looking to understand the challenges developers face and help them bring innovative ideas to life. While Google Play already hosts a variety of blockchain related apps, we’ve increasingly heard from developers who want to introduce additional web3 components, including the tokenization of digital assets as NFTs, into their apps and games. With any new technology, we must balance innovation with our responsibility to protect users, which is why we’ve begun conversations with developer partners to assess how potential policy changes could responsibly support these opportunities. As always, engaging with developers is an essential part of how we evolve our platform and maintain a safe, transparent, and trusted experience for our shared users. We hope to have more to share in the coming months.
 
Giving you a better experience with our policies and Console

We’re continuously improving our policy communication, support, and experience. We’ve recently introduced a new Play Console feature to give you more flexibility and control over the app review process. This year, we’ll provide even more features and support.

Developers have shared that they want a place to ask questions and hear from others. So in February, we opened up the Google Play Developer Community to all developers in English so you can ask for advice and share best practices with fellow developer experts. Developers have shared positive feedback about this new forum, and we welcome you to sign up to be a Product Expert (select Play Console as your product and English as your language).

We’re also expanding our pilot programs like the Google Play Developer Helpline pilot, which provides direct policy phone support. Today, we’ve expanded the pilot to nearly 60,000 developers in 26 countries (16,000 more developers and 9 more countries than in November). We’ve completed nearly 5,000 policy support sessions with developers, with a satisfaction score of 90%.

And last, we’ve also been sending you more notices and reminders about upcoming requirements in your Developer Console Inbox, so we reach you when you’re thinking about updating your app. This year, we’re also building a new feature to help you plan ahead for upcoming declarations.

We’ll continue to share updates with you throughout the year. Thank you for your partnership in keeping Android and Google Play safe and trustworthy for everyone.

Helping Families Find High-Quality Apps for Kids

Posted by Mindy Brooks, General Manager, Kids and Family

Apps play an increasingly important role in all of our lives and we’re proud that Google Play helps families find educational and delightful experiences for kids. Of course, this wouldn’t be possible without the continued ingenuity and commitment of our developer partners. From kid-friendly entertainment apps to educational games, you’ve helped make our platform a fantastic destination for high-quality content for families. Today, we’re sharing a few updates on how we’re building on this work to create a safe and positive experience on Play.

Expanding Play’s Teacher Approved Program

In 2020, we introduced the Teacher Approved program to highlight high-quality apps that are reviewed and rated by teachers and child development specialists. Through this program, all apps in the Play Store’s Kids tab are Teacher Approved, and families can now more easily discover quality apps and games.

As part of our continued investments in Teacher Approved, we’re excited to expand the program so that all apps that meet Play’s Families Policy will be eligible to be reviewed and shown on the Kids tab. We’re also streamlining the process for developers. Moving forward, the requirements for the Designed for Families program, which previously were a separate prerequisite from Teacher Approved eligibility, will be merged into the broader Families Policy. By combining our requirements into one policy and expanding eligibility for the Teacher Approved program, we look forward to providing families with even more Teacher Approved apps and to help you, our developer partners, reach more users.

If you’re new to the Teacher Approved program, you might wonder what we’re looking for. Beyond strict privacy and security requirements, great content for kids can take many forms, whether that’s sparking curiosity, helping kids learn, or just plain fun. Our team of teachers and experts across the world review and rate apps on factors like age-appropriateness, quality of experience, enrichment, and delight. For added transparency, we include information in the app listing about why the app was rated highly to help parents determine if the app is right for their child. Please visit Google Play Academy for more information about how to design high-quality apps for kids.

Building on our Ads Policies to Protect Children

When you're creating a great app experience for kids and families, it’s also important that any ads served to children are appropriate and compliant with our Families Policy. This includes using Families Self-Certified Ads SDKs to serve ads to children. We recently made changes to the Families Self-Certified Ads SDK Program to help better protect users and make life easier for Families developers. SDKs that participate in the program are now required to identify which versions of their SDKs are appropriate for use in Families apps and you can view the list of self-certified versions in our Help Center.

Next year, all Families developers will be required to use only those versions of a Families Self-Certified Ads SDK that the SDK has identified as appropriate for use in Families apps. We encourage you to begin preparing now before the policy takes full effect.
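As a sketch of what this compliance check involves — the SDK names and version sets below are made up for illustration; the real list of self-certified versions lives in the Help Center — an app's declared ad SDK versions could be validated against the allowlist like this:

```python
# Hypothetical illustration of checking declared ad SDK versions against
# a self-certified allowlist. "example-ads-sdk" and its versions are
# invented; they do not refer to any real SDK or real Play data.
SELF_CERTIFIED = {
    "example-ads-sdk": {"3.2.0", "3.2.1", "4.0.0"},
}

def non_compliant_sdks(app_sdks: dict) -> list:
    """Return (name, version) pairs not self-certified for Families apps."""
    return [
        (name, version)
        for name, version in app_sdks.items()
        if version not in SELF_CERTIFIED.get(name, set())
    ]
```

An empty result means every declared ad SDK version is on the self-certified list; anything returned would need to be upgraded or replaced before the policy takes full effect.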


Building Transparency with New Data Safety Section Options

In the coming weeks, all apps which include children in their target audience will be able to showcase their compliance with Play’s Families Policy requirements with a special badge on the Data safety section. This is another great way that you can better help families find apps that meet their needs, while supporting Play’s commitment to provide users more transparency and control over their data. To display the badge, please visit the "Security practices" section of your Data safety form.

Screenshot of a cellphone screen showing the Data safety form in Google Play with the 'Security practices' section highlighted

As always, we’re grateful for your partnership in helping to make Play a fantastic platform for delightful, high-quality content for kids and families. For more developer resources:


Progress on initiatives to keep Google Play safe

Posted by Krish Vitaldevara, Director, Product Management, Play and Android Trust & Safety


We want to keep you updated on the privacy and security initiatives we shared earlier this year, so you can plan ahead and use new tools to safely build your business. In the past few months, we launched:

  • Google Play SDK Index to help you evaluate an SDK’s reliability and safety and make informed decisions about whether an SDK is right for your business and your users. See insights and usage data on over 100 of the most widely used commercial SDKs on Google Play.
  • The Data safety section on Google Play, helping users better understand your apps’ data safety practices. Developers have told us that this new feature helps them explain their privacy practices to users and build trust. If you haven't yet, complete your Data safety form by July 20th.
  • Enhancements to app integrity tools like Play App Signing to securely sign millions of apps on Google Play and help ensure that app updates can be trusted. Use Play App Signing to help protect your app signing key from loss or compromise with Google's secure key management service.
  • Play Integrity API to help protect your app, your IP, and your users from piracy and malicious activity. Use this API to help detect fraudulent and risky interactions, such as traffic from modified or pirated app versions and rooted or compromised devices.
  • And a new Target API Level policy to strengthen user security by protecting users from installing apps that may not have the expected privacy and security features.

What’s coming up

  • As part of our work with the industry to build more private advertising solutions, we’ve launched initial developer previews for Privacy Sandbox on Android. We have more developer previews coming soon and a beta later this year.
  • We continue to help developers update their apps before policy enforcement actions are taken. We’ve extended time to make changes, improved clarity of responses, and added new training materials. Recent tests of advanced Play Console warnings have also shown solid results. As we refine these features, we’ll expand them to more developers this year.

Thank you for your partnership in making Google Play a safe and trustworthy platform for everyone.

Making sign-in safer and more convenient

For most of us, passwords are the first line of defense for our digital lives. However, managing a set of strong passwords isn’t always convenient, which leads many people to look for shortcuts (e.g., a dog’s name + birthday) or to neglect password best practices altogether, opening them up to online risks. At Google, we protect our users with products that are secure by default – it’s how we keep more people safe online than anyone else in the world.


As we celebrate Cybersecurity Awareness Month, we’d like to share all the ways we are making your sign-in safer.


Making password sign-in seamless and safe


Every day, Google checks the security of 1 billion passwords to protect your accounts from being hacked. Google’s Password Manager, built directly into Chrome, Android and the Google App, uses the latest security technology to keep your passwords safe across all the sites and apps you use. It makes it easier to create and use strong and unique passwords on all your devices, without the need to remember or repeat each one.
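To illustrate what "strong and unique" means in practice, here is a generic sketch using Python's standard library — this is not Google Password Manager's actual algorithm, just the general technique of sampling uniformly from a large alphabet with a cryptographically secure source:

```python
import secrets
import string

# Generic strong-password sketch; not Google Password Manager's algorithm.
# 26 + 26 + 10 + 32 = 94 symbols, ~6.55 bits of entropy per character.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Generate a random password; 16 chars gives roughly 104 bits."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Because every password is drawn independently, reusing the generator across sites yields unique credentials, which is what limits the damage when any one site is breached.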

 

On iOS you can select Chrome to autofill saved passwords in other apps, too. That means your sign-in experience goes from remembering and typing in a password on each individual site to literally one tap. And soon, you will be able to take advantage of Chrome’s strong password generation feature for any iOS app, similar to how Autofill with Google works on Android today.


We're also rolling out a feature in the Google app that allows you to access all of the passwords you've saved in Google Password Manager right from the Google app menu. These enhancements are designed to make your password experience easier and safer—not just on Google, but across the web.


Getting people enrolled in 2SV  


In addition to passwords, we know that having a second form of authentication dramatically decreases an attacker’s chance of gaining access to an account. For years, Google has been at the forefront of innovation in two-step verification (2SV), one of the most reliable ways to prevent unauthorized access to accounts and networks. 2SV is strongest when it combines both "something you know" (like a password) and "something you have" (like your phone or a security key).
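A common "something you have" factor is a time-based one-time password (TOTP, RFC 6238) generated on the user's phone. The sketch below is a minimal stdlib implementation for illustration — it is not how the Google prompt works (that uses push-based approval), and the drift window shown is an assumption:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: float, step: int = 30,
         digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA-1)."""
    counter = struct.pack(">Q", int(unix_time // step))
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_code(secret: bytes, submitted: str, unix_time: float,
                step: int = 30) -> bool:
    """Accept the current code or its neighbors, tolerating clock drift."""
    return any(
        hmac.compare_digest(totp(secret, unix_time + drift * step), submitted)
        for drift in (-1, 0, 1)
    )
```

Knowing the password alone is not enough: without the shared secret held on the device, an attacker cannot produce the code for the current time step.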


2SV has been core to Google’s own security practices, and today we make it seamless for our users with a Google prompt, which requires a simple tap on your mobile device to prove it’s really you trying to sign in. And because we know the best way to keep our users safe is to turn on our security protections by default, we have started to automatically configure our users’ accounts into a more secure state. By the end of 2021, we plan to auto-enroll an additional 150 million Google users in 2SV and require 2 million YouTube creators to turn it on.

We also recognize that today’s 2SV options aren’t suitable for everyone, so we are working on technologies that provide a convenient, secure authentication experience and reduce reliance on passwords in the long term. Right now we are auto-enrolling Google accounts that have the proper backup mechanisms in place to make a seamless transition to 2SV. To make sure your account has the right settings in place, take our quick Security Checkup.


Building security keys into devices 


As part of our security work, we led the invention of security keys — another form of authentication that requires you to tap your key during suspicious sign-in attempts. We know security keys provide the highest degree of sign-in security possible, which is why we've partnered with organizations to provide free security keys to over 10,000 high-risk users this year.


To make security keys more accessible, we built the capability right into Android phones and our Google Smart Lock app on Apple devices. Today, over two billion devices around the world automatically support the strongest, most convenient 2SV technology available. 


Additional sign-in enhancements 


We recently launched One Tap and a new family of Identity APIs called Google Identity Services, which uses secure tokens, rather than passwords, to sign users into partner websites and apps, like Reddit and Pinterest. With the new Google Identity Services, we've combined Google's advanced security with easy sign in to deliver a convenient experience that also keeps users safe. These new services represent the future of authentication and protect against vulnerabilities like click-jacking, pixel tracking, and other web and app-based threats.


Ultimately, we want all of our users to have an easy, seamless sign-in experience that includes the best security protections across all of their devices and accounts. To learn more about all the ways we’re making every day safer with Google, visit our Safety Center.


Posted by Guemmy Kim, Director, Account Security and Safety and AbdelKarim Mardini, Group Product Manager, Chrome


Giving kids and teens a safer experience online

We're committed to building products that are secure by default, private by design, and that put people in control. And while our policies don’t allow kids under 13 to create a standard Google account, we’ve worked hard to design enriching product experiences specifically for them, teens, and families. Through Family Link, we allow parents to set up supervised accounts for their children, set screen time limits, and more. Our Be Internet Awesome digital literacy program helps kids learn how to be safe and engaged digital citizens; and our dedicated YouTube Kids app, Kids Space and teacher approved apps in Play offer experiences that are customized for younger audiences. 


Technology has helped kids and teens stay in school through lockdowns during the pandemic and maintain connections with family and friends. As kids and teens spend more time online, parents, educators, child safety and privacy experts, and policy makers are rightly concerned about how to keep them safe. We engage with these groups regularly, and share these concerns.


Some countries are implementing regulations in this area, and as we comply with these regulations, we’re looking at ways to develop consistent product experiences and user controls for kids and teens globally. Today, we’re announcing a variety of new policies and updates:


Giving minors more control over their digital footprint

While we already provide a range of removal options for people using Google Search, children are at particular risk when it comes to controlling their imagery on the internet. In the coming weeks, we’ll introduce a new policy that enables anyone under the age of 18, or their parent or guardian, to request the removal of their images from Google Image results. Of course, removing an image from Search doesn’t remove it from the web, but we believe this change will help give young people more control of their images online. 


Tailoring product experiences for kids and teens 

Some of our most popular products help kids and teens explore their interests, learn more about the world, and connect with friends. We’re committed to constantly making these experiences safer for them. That’s why in the coming weeks and months we're going to make a number of changes to Google Accounts for people under 18:


  • YouTube: We’re going to change the default upload setting to the most private option available for teens ages 13-17. In addition, we’ll more prominently surface digital wellbeing features, and provide safeguards and education about commercial content.

  • Search: We have a range of systems, tools and policies that are designed to help people discover content from across the web while not surprising them with mature content they haven’t searched for. One of the protections we offer is SafeSearch, which helps filter out explicit results when enabled and is already on by default for all signed-in users under 13 who have accounts managed by Family Link. In the coming months, we’ll turn SafeSearch on for existing signed-in users under 18 and make this the default setting for teens setting up new accounts. 

  • Assistant: We are always working to prevent mature content from surfacing during a child’s experience with Google Assistant on shared devices, and in the coming months we’ll be introducing new default protections. For example, we will apply our SafeSearch technology to the web browser on smart displays.

  • Location History: Location History is a Google account setting that helps make our products more useful. It's already off by default for all accounts, and children with supervised accounts don’t have the option of turning Location History on. Taking this a step further, we’ll soon extend this to users under the age of 18 globally, meaning that Location History will remain off (without the option to turn it on).

  • Play: Building on efforts like content ratings, and our "Teacher-approved apps" for quality kids content, we're launching a new safety section that will let parents know which apps follow our Families policies. Apps will be required to disclose how they use the data they collect in greater detail, making it easier for parents to decide if the app is right for their child before they download it. 

  • Google Workspace for Education: As we recently announced, we’re making it much easier for administrators to tailor experiences for their users based on age (such as restricting student activity on YouTube). And to make web browsing safer, K-12 institutions will have SafeSearch technology enabled by default, while switching to Guest Mode and Incognito Mode for web browsing will be turned off by default.


New advertising changes

We’ll be expanding safeguards to prevent age-sensitive ad categories from being shown to teens, and we will block ad targeting based on the age, gender or interests of people under 18. We’ll start rolling out these updates across our products globally over the coming months. Our goal is to ensure we’re providing additional protections and delivering age-appropriate experiences for ads on Google.


New digital wellbeing tools 

In Family Link, parents can set screen time limits and reminders for their kids’ supervised devices. And, on Assistant-enabled smart devices, we give parents control through Digital Wellbeing tools available in the Google Home app. In the coming months, we’ll roll out new Digital Wellbeing filters that allow people to block news, podcasts, and access to webpages on Assistant-enabled smart devices.


On YouTube, we’ll turn on “take a break” and bedtime reminders and turn off autoplay for users under 18. And on YouTube Kids, we’ll add an autoplay option and turn it off by default, to empower parents to make the right choice for their families.

Improving how we communicate our data practices to kids and teens

Data plays an important role in making our products functional and helpful. It’s our job to make it easy for kids and teens to understand what data is being collected, why, and how it is used. Based on research, we’re developing engaging, easy-to-understand materials for young people and their parents to help them better understand our data practices. These resources will begin to roll out globally in the coming months. Transparency resources: the Family Link Privacy Guide for Children and Teens and the Teen Privacy Guide.


Ongoing work to develop age-assured product experiences

We regularly engage with kids and teens, parents, governments, industry leaders, and experts in the fields of privacy, child safety, wellbeing and education to design better, safer products for kids and teens. Having an accurate age for a user can be an important element in providing experiences tailored to their needs. Yet, knowing the accurate age of our users across multiple products and surfaces, while at the same time respecting their privacy and ensuring that our services remain accessible, is a complex challenge. It will require input from regulators, lawmakers, industry bodies, technology providers, and others to address it – and to ensure that we all build a safer internet for kids. 


Posted by Mindy Brooks, General Manager, Kids and Families


Our annual Ads Safety Report

At Google, we actively look for ways to ensure a safe user experience when making decisions about the ads people see and the content that can be monetized on our platforms. Developing policies in these areas and consistently enforcing them is one of the primary ways we keep people safe and preserve trust in the ads ecosystem. 


2021 marks one decade of releasing our annual Ads Safety Report, which highlights the work we do to prevent malicious use of our ads platforms. Providing visibility on the ways we’re preventing policy violations in the ads ecosystem has long been a priority and this year we’re sharing more data than ever before. 


Our Ads Safety Report is just one way we provide transparency to people about how advertising works on our platforms. Last spring, we also introduced our advertiser identity verification program. We are currently verifying advertisers in more than 20 countries and have started to share the advertiser name and location in our About this ad feature, so that people know who is behind a specific ad and can make more informed decisions.


Enforcement at scale

In 2020, our policies and enforcement were put to the test as we collectively navigated a global pandemic, multiple elections around the world and the continued fight against bad actors looking for new ways to take advantage of people online. Thousands of Googlers worked around the clock to deliver a safe experience for users, creators, publishers and advertisers. We added or updated more than 40 policies for advertisers and publishers. We also blocked or removed approximately 3.1 billion ads for violating our policies and restricted an additional 6.4 billion ads. 


Our enforcement is not one-size-fits-all, and this is the first year we’re sharing information on ad restrictions, a core part of our overall strategy. Restricting ads allows us to tailor our approach based on geography, local laws and our certification programs, so that approved ads only show where appropriate, regulated and legal. For example, we require online pharmacies to complete a certification program, and once certified, we only show their ads in specific countries where the online sale of prescription drugs is allowed. Over the past several years, we’ve seen an increase in country-specific ad regulations, and restricting ads allows us to help advertisers follow these requirements regionally with minimal impact on their broader campaigns. 


We also continued to invest in our automated detection technology to effectively scan the web for publisher policy compliance at scale. Due to this investment, along with several new policies, we vastly increased our enforcement and removed ads from 1.3 billion publisher pages in 2020, up from 21 million in 2019. We also stopped ads from serving on over 1.6 million publisher sites with pervasive or egregious violations.


Remaining nimble when faced with new threats

As the number of COVID-19 cases rose around the world last January, we enforced our sensitive events policy to prevent behavior like price-gouging on in-demand products like hand sanitizer, masks and paper goods, or ads promoting false cures. As we learned more about the virus and health organizations issued new guidance, we evolved our enforcement strategy to start allowing medical providers, health organizations, local governments and trusted businesses to surface critical updates and authoritative content, while still preventing opportunistic abuse. Additionally, as claims and conspiracies about the coronavirus’s origin and spread were circulated online, we launched a new policy to prohibit both ads and monetized content about COVID-19 or other global health emergencies that contradict scientific consensus. 


In total, we blocked over 99 million COVID-related ads from serving throughout the year, including ads for miracle cures, N95 masks during supply shortages, and most recently, fake vaccine doses. We continue to be nimble, tracking bad actors’ behavior and learning from it. In doing so, we’re better prepared for future scams and claims that may arise.


Fighting the newest forms of fraud and scams

Often when we experience a major event like the pandemic, bad actors look for ways to take advantage of people online. Last year we saw an uptick in opportunistic advertising and fraudulent behavior from actors looking to mislead users. Increasingly, we’ve seen them use cloaking to evade our detection, promote non-existent virtual businesses, or run ads for phone-based scams that lure unsuspecting consumers off our platforms with the aim of defrauding them.

In 2020 we tackled this adversarial behavior in a few key ways: 

  • Introduced multiple new policies and programs, including our advertiser identity verification program and business operations verification program.

  • Invested in technology to better detect coordinated adversarial behavior, allowing us to connect the dots across accounts and suspend multiple bad actors at once.

  • Improved our automated detection technology and human review processes based on network signals, previous account activity, behavior patterns and user feedback.


The number of ad accounts we disabled for policy violations increased by 70%, from 1 million to over 1.7 million. We also blocked or removed over 867 million ads for attempting to evade our detection systems, including through cloaking, and an additional 101 million ads for violating our misrepresentation policies. That’s a total of over 968 million ads.


Protecting elections around the world 

When it comes to elections around the world, ads help voters access authoritative information about the candidates and voting processes. Over the past few years, we introduced strict policies and restrictions around who can run election-related advertising on our platform and the ways they can target ads; we launched comprehensive political ad libraries in the U.S., the U.K., the European Union, India, Israel, Taiwan, Australia and New Zealand; and we worked diligently with our enforcement teams around the world to protect our platforms from abuse. Globally, we continue to expand our verification program and verified more than 5,400 additional election advertisers in 2020. In the U.S., as it became clear the outcome of the presidential election would not be determined immediately, we determined that the U.S. election fell under our sensitive events policy, and enforced a U.S. political ads pause starting after the polls closed and continuing through early December. During that time, we temporarily paused more than five million ads and blocked ads on over three billion Search queries referencing the election, the candidates or its outcome. We made this decision to limit the potential for ads to amplify confusion in the post-election period.


Demonetizing hate and violence

Last year, news publishers played a critical role in keeping people informed, prepared and safe. We’re proud that digital advertising, including the tools we offer to connect advertisers and publishers, supports this content. We have policies in place to protect both brands and users.


In 2017, we developed more granular means of reviewing sites at the page level, including user-generated comments. This allows publishers to continue operating their broader sites while we protect advertisers from negative placements by stopping persistent violations. In the years since introducing page-level action, we’ve continued to invest in our automated technology, and it was crucial in a year in which we saw an increase in hate speech and calls to violence online. This investment helped us prevent harmful web content from being monetized. We took action on nearly 168 million pages under our dangerous and derogatory content policy.


Continuing this work in 2021 

We know that when we make decisions through the lens of user safety, it will benefit the broader ecosystem. Preserving trust for advertisers and publishers helps their businesses succeed in the long term. In the upcoming year, we will continue to invest in policies, our team of experts and enforcement technology to stay ahead of potential threats. We also remain steadfast on our path to scale our verification programs around the world in order to increase transparency and make more information about the ad experience universally available.


Posted by Scott Spencer, Vice President, Ads Privacy & Safety

