Tag Archives: safety

Progress on initiatives to keep Google Play safe

Posted by Krish Vitaldevara, Director, Product Management, Play and Android Trust & Safety

Google Play Privacy 

We want to keep you updated on the privacy and security initiatives we shared earlier this year, so you can plan ahead and use new tools to safely build your business. In the past few months, we launched:

  • Google Play SDK Index to help you evaluate an SDK’s reliability and safety and make informed decisions about whether an SDK is right for your business and your users. See insights and usage data on over 100 of the most widely used commercial SDKs on Google Play.
  • The Data safety section on Google Play, helping users better understand your apps’ data safety practices. Developers have told us that this new feature helps them explain their privacy practices to users and build trust. If you haven't yet, complete your Data safety form by July 20th.
  • Enhancements to app integrity tools like Play App Signing to securely sign millions of apps on Google Play and help ensure that app updates can be trusted. Use Play App Signing to help protect your app signing key from loss or compromise with Google's secure key management service.
  • Play Integrity API to help protect your app, your IP, and your users from piracy and malicious activity. Use this API to help detect fraudulent and risky interactions, such as traffic from modified or pirated app versions and rooted or compromised devices.
  • And a new Target API Level policy to strengthen user security by protecting users from installing apps that may not have the expected privacy and security features.
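On the Play Integrity API above: after your backend asks Google Play to decode an integrity token, it receives a JSON verdict. The sketch below is a hypothetical helper (the field names such as `appRecognitionVerdict` and `deviceRecognitionVerdict` follow the public Play Integrity API documentation) showing how a server might check the key fields:

```python
def check_verdict(verdict: dict) -> bool:
    """Return True only if the decoded Play Integrity verdict looks trustworthy.

    Illustrative sketch, not production policy; field names follow the public
    Play Integrity API docs.
    """
    # The app binary matches a version Google Play knows about.
    app_ok = (
        verdict.get("appIntegrity", {}).get("appRecognitionVerdict")
        == "PLAY_RECOGNIZED"
    )
    # The device passes basic integrity checks (not rooted/compromised).
    device_ok = "MEETS_DEVICE_INTEGRITY" in verdict.get(
        "deviceIntegrity", {}
    ).get("deviceRecognitionVerdict", [])
    # The user holds a legitimate license for the app.
    licensed = (
        verdict.get("accountDetails", {}).get("appLicensingVerdict") == "LICENSED"
    )
    return app_ok and device_ok and licensed
```

A real backend would also verify the request nonce and token freshness before trusting any verdict.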

What’s coming up

  • As part of our work with the industry to build more private advertising solutions, we’ve launched initial developer previews for Privacy Sandbox on Android. We have more developer previews coming soon and a beta later this year.
  • We continue to help developers update their apps before policy enforcement actions are taken. We’ve extended time to make changes, improved clarity of responses, and added new training materials. Recent tests of advanced Play Console warnings have also shown solid results. As we refine these features, we’ll expand them to more developers this year.

Thank you for your partnership in making Google Play a safe and trustworthy platform for everyone.

Making sign-in safer and more convenient

For most of us, passwords are the first line of defense for our digital lives. However, managing a set of strong passwords isn’t always convenient, which leads many people to look for shortcuts (e.g., a dog’s name plus a birthday) or to neglect password best practices altogether, opening them up to online risks. At Google, we protect our users with products that are secure by default – it’s how we keep more people safe online than anyone else in the world.


As we celebrate Cybersecurity Awareness Month, we’d like to share all the ways we are making your sign-in safer.


Making password sign-in seamless and safe


Every day, Google checks the security of 1 billion passwords to protect your accounts from being hacked. Google’s Password Manager, built directly into Chrome, Android and the Google App, uses the latest security technology to keep your passwords safe across all the sites and apps you use. It makes it easier to create and use strong and unique passwords on all your devices, without the need to remember or repeat each one.
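Google’s actual password-checking protocol uses private set intersection, but the general idea behind privacy-conscious breached-password lookups can be sketched with the simpler hash-prefix (k-anonymity) technique: the client sends only a short hash prefix, the server returns every matching suffix, and the comparison happens locally. The helper names below are hypothetical:

```python
import hashlib

def sha1_prefix(password: str, prefix_len: int = 5) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a short prefix and the remainder."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:prefix_len], digest[prefix_len:]

def is_breached(password: str, suffixes_for_prefix: set[str]) -> bool:
    """Compare locally against the suffixes a server returned for our prefix.

    The server only ever sees the 5-character prefix, never the password.
    """
    _, suffix = sha1_prefix(password)
    return suffix in suffixes_for_prefix
```

This is a sketch of the general technique, not Google’s exact protocol.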

 

On iOS you can select Chrome to autofill saved passwords in other apps, too. That means your sign-in experience goes from remembering and typing in a password on each individual site to literally one tap. And soon, you will be able to take advantage of Chrome’s strong password generation feature for any iOS app, similar to how Autofill with Google works on Android today.


We're also rolling out a feature in the Google app that allows you to access all of the passwords you've saved in Google Password Manager right from the Google app menu. These enhancements are designed to make your password experience easier and safer—not just on Google, but across the web.


Getting people enrolled in 2SV  


In addition to passwords, we know that having a second form of authentication dramatically decreases an attacker’s chance of gaining access to an account. For years, Google has been at the forefront of innovation in two-step verification (2SV), one of the most reliable ways to prevent unauthorized access to accounts and networks. 2SV is strongest when it combines both "something you know" (like a password) and "something you have" (like your phone or a security key).
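The “something you have” factor is often a one-time code generated on your phone. As a general illustration (this is the standard TOTP algorithm from RFC 6238, not a description of Google’s internal systems), such codes are derived from a shared secret and the current time:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # low nibble picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP where the counter is the current 30-second interval."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)
```

Because both sides derive the code independently, nothing secret crosses the network at sign-in time; an attacker with only the password still fails the check.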


2SV has been core to Google’s own security practices and today we make it seamless for our users with a Google prompt, which requires a simple tap on your mobile device to prove it’s really you trying to sign in. And because we know the best way to keep our users safe is to turn on our security protections by default, we have started to automatically configure our users’ accounts into a more secure state. By the end of 2021, we plan to auto-enroll an additional 150 million Google users in 2SV and require 2 million YouTube creators to turn it on.

We also recognize that today’s 2SV options aren’t suitable for everyone, so we are working on technologies that provide a convenient, secure authentication experience and reduce the reliance on passwords in the long term. Right now we are auto-enrolling Google accounts that have the proper backup mechanisms in place to make a seamless transition to 2SV. To make sure your account has the right settings in place, take our quick Security Checkup.


Building security keys into devices 


As part of our security work, we led the invention of security keys — another form of authentication that requires you to tap your key during suspicious sign-in attempts. Security keys provide the highest degree of sign-in security possible, which is why we've partnered with organizations to provide free security keys to over 10,000 high-risk users this year.


To make security keys more accessible, we built the capability right into Android phones and our Google Smart Lock app on Apple devices. Today, over two billion devices around the world automatically support the strongest, most convenient 2SV technology available. 


Additional sign-in enhancements 


We recently launched One Tap and a new family of Identity APIs called Google Identity Services, which uses secure tokens, rather than passwords, to sign users into partner websites and apps, like Reddit and Pinterest. With the new Google Identity Services, we've combined Google's advanced security with easy sign in to deliver a convenient experience that also keeps users safe. These new services represent the future of authentication and protect against vulnerabilities like click-jacking, pixel tracking, and other web and app-based threats.
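Under the hood, token-based sign-in rests on the relying party verifying the ID token’s claims. In production you would use a library such as google-auth to check the token’s signature; the hypothetical helper below only illustrates the basic claim checks (standard OIDC claim names) on an already-decoded payload:

```python
import time

# Google issues ID tokens under these two issuer strings.
GOOGLE_ISSUERS = ("https://accounts.google.com", "accounts.google.com")

def claims_acceptable(claims: dict, audience: str, now=None) -> bool:
    """Check issuer, audience, and expiry on a decoded ID-token payload.

    Sketch only: real verification must also validate the signature,
    e.g. via google.oauth2.id_token.verify_oauth2_token.
    """
    now = time.time() if now is None else now
    if claims.get("iss") not in GOOGLE_ISSUERS:
        return False                       # not issued by Google
    if claims.get("aud") != audience:
        return False                       # minted for a different app
    return claims.get("exp", 0) > now      # token has not expired
```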


Ultimately, we want all of our users to have an easy, seamless sign-in experience that includes the best security protections across all of their devices and accounts. To learn more about all the ways we’re making every day safer with Google, visit our Safety Center.


Posted by Guemmy Kim, Director, Account Security and Safety and AbdelKarim Mardini, Group Product Manager, Chrome


Giving kids and teens a safer experience online

We're committed to building products that are secure by default, private by design, and that put people in control. And while our policies don’t allow kids under 13 to create a standard Google account, we’ve worked hard to design enriching product experiences specifically for them, teens, and families. Through Family Link, we allow parents to set up supervised accounts for their children, set screen time limits, and more. Our Be Internet Awesome digital literacy program helps kids learn how to be safe and engaged digital citizens; and our dedicated YouTube Kids app, Kids Space and teacher approved apps in Play offer experiences that are customized for younger audiences. 


Technology helped kids and teens stay in school through lockdowns and stay connected with family and friends during the pandemic. As kids and teens spend more time online, parents, educators, child safety and privacy experts, and policy makers are rightly concerned about how to keep them safe. We engage with these groups regularly, and share these concerns.


Some countries are implementing regulations in this area, and as we comply with these regulations, we’re looking at ways to develop consistent product experiences and user controls for kids and teens globally. Today, we’re announcing a variety of new policies and updates:


Giving minors more control over their digital footprint

While we already provide a range of removal options for people using Google Search, children are at particular risk when it comes to controlling their imagery on the internet. In the coming weeks, we’ll introduce a new policy that enables anyone under the age of 18, or their parent or guardian, to request the removal of their images from Google Image results. Of course, removing an image from Search doesn’t remove it from the web, but we believe this change will help give young people more control of their images online. 


Tailoring product experiences for kids and teens 

Some of our most popular products help kids and teens explore their interests, learn more about the world, and connect with friends. We’re committed to constantly making these experiences safer for them. That’s why in the coming weeks and months we're going to make a number of changes to Google Accounts for people under 18:


  • YouTube: We’re going to change the default upload setting to the most private option available for teens ages 13-17. In addition, we’ll more prominently surface digital wellbeing features, and provide safeguards and education about commercial content. Learn more about these changes here.

  • Search: We have a range of systems, tools and policies that are designed to help people discover content from across the web while not surprising them with mature content they haven’t searched for. One of the protections we offer is SafeSearch, which helps filter out explicit results when enabled and is already on by default for all signed-in users under 13 who have accounts managed by Family Link. In the coming months, we’ll turn SafeSearch on for existing signed-in users under 18 and make this the default setting for teens setting up new accounts. 

  • Assistant: We are always working to prevent mature content from surfacing during a child’s experience with Google Assistant on shared devices, and in the coming months we’ll be introducing new default protections. For example, we will apply our SafeSearch technology to the web browser on smart displays.

  • Location History: Location History is a Google account setting that helps make our products more useful. It's already off by default for all accounts, and children with supervised accounts don’t have the option of turning Location History on. Taking this a step further, we’ll soon extend this to users under the age of 18 globally, meaning that Location History will remain off (without the option to turn it on).

  • Play: Building on efforts like content ratings, and our "Teacher-approved apps" for quality kids content, we're launching a new safety section that will let parents know which apps follow our Families policies. Apps will be required to disclose how they use the data they collect in greater detail, making it easier for parents to decide if the app is right for their child before they download it. 

  • Google Workspace for Education: As we recently announced, we’re making it much easier for administrators to tailor experiences for their users based on age (such as restricting student activity on YouTube). And to make web browsing safer, K-12 institutions will have SafeSearch technology enabled by default, while switching to Guest Mode and Incognito Mode for web browsing will be turned off by default.


New advertising changes

We’ll be expanding safeguards to prevent age-sensitive ad categories from being shown to teens, and we will block ad targeting based on the age, gender or interests of people under 18. We’ll start rolling out these updates across our products globally over the coming months. Our goal is to ensure we’re providing additional protections and delivering age-appropriate experiences for ads on Google.


New digital wellbeing tools 

In Family Link, parents can set screen time limits and reminders for their kids’ supervised devices. And, on Assistant-enabled smart devices, we give parents control through Digital Wellbeing tools available in the Google Home app. In the coming months, we’ll roll out new Digital Wellbeing filters that allow people to block news, podcasts, and access to webpages on Assistant-enabled smart devices.


On YouTube, we’ll turn on take a break and bedtime reminders and turn off autoplay for users under 18. And, on YouTube Kids we’ll add an autoplay option and turn it off by default to empower parents to make the right choice for their families. 

Transparency Resources: The Family Link Privacy Guide for Children and Teens and the Teen Privacy Guide

Improving how we communicate our data practices to kids and teens

Data plays an important role in making our products functional and helpful. It’s our job to make it easy for kids and teens to understand what data is being collected, why, and how it is used. Based on research, we’re developing engaging, easy-to-understand materials for young people and their parents to help them better understand our data practices. These resources will begin to roll out globally in the coming months. 


Ongoing work to develop age-assured product experiences

We regularly engage with kids and teens, parents, governments, industry leaders, and experts in the fields of privacy, child safety, wellbeing and education to design better, safer products for kids and teens. Having an accurate age for a user can be an important element in providing experiences tailored to their needs. Yet, knowing the accurate age of our users across multiple products and surfaces, while at the same time respecting their privacy and ensuring that our services remain accessible, is a complex challenge. It will require input from regulators, lawmakers, industry bodies, technology providers, and others to address it – and to ensure that we all build a safer internet for kids. 


Posted by Mindy Brooks, General Manager, Kids and Families


Our annual Ads Safety Report

At Google, we actively look for ways to ensure a safe user experience when making decisions about the ads people see and the content that can be monetized on our platforms. Developing policies in these areas and consistently enforcing them is one of the primary ways we keep people safe and preserve trust in the ads ecosystem. 


2021 marks one decade of releasing our annual Ads Safety Report, which highlights the work we do to prevent malicious use of our ads platforms. Providing visibility on the ways we’re preventing policy violations in the ads ecosystem has long been a priority and this year we’re sharing more data than ever before. 


Our Ads Safety Report is just one way we provide transparency to people about how advertising works on our platforms. Last spring, we also introduced our advertiser identity verification program. We are currently verifying advertisers in more than 20 countries and have started to share the advertiser name and location in our About this ad feature, so that people know who is behind a specific ad and can make more informed decisions.


Enforcement at scale

In 2020, our policies and enforcement were put to the test as we collectively navigated a global pandemic, multiple elections around the world and the continued fight against bad actors looking for new ways to take advantage of people online. Thousands of Googlers worked around the clock to deliver a safe experience for users, creators, publishers and advertisers. We added or updated more than 40 policies for advertisers and publishers. We also blocked or removed approximately 3.1 billion ads for violating our policies and restricted an additional 6.4 billion ads. 


Our enforcement is not one-size-fits-all, and this is the first year we’re sharing information on ad restrictions, a core part of our overall strategy. Restricting ads allows us to tailor our approach based on geography, local laws and our certification programs, so that approved ads only show where appropriate, regulated and legal. For example, we require online pharmacies to complete a certification program, and once certified, we only show their ads in specific countries where the online sale of prescription drugs is allowed. Over the past several years, we’ve seen an increase in country-specific ad regulations, and restricting ads allows us to help advertisers follow these requirements regionally with minimal impact on their broader campaigns. 
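Conceptually, a restriction like the pharmacy example reduces to a per-country allowlist check at serving time. The data schema below is entirely hypothetical, purely to make the logic concrete:

```python
def ad_may_serve(ad: dict, country: str) -> bool:
    """Decide whether a restricted-category ad may serve in a given country.

    Hypothetical schema: restricted ads must be certified, and certification
    lists the countries where serving is legal and allowed.
    """
    if not ad.get("requires_certification"):
        return True                              # unrestricted category
    if not ad.get("certified"):
        return False                             # restricted but uncertified
    return country in ad.get("allowed_countries", ())
```

The key property is that a violation is scoped: the ad is withheld only where it would be non-compliant, instead of being removed everywhere.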


We also continued to invest in our automated detection technology to effectively scan the web for publisher policy compliance at scale. Due to this investment, along with several new policies, we vastly increased our enforcement and removed ads from 1.3 billion publisher pages in 2020, up from 21 million in 2019. We also stopped ads from serving on over 1.6 million publisher sites with pervasive or egregious violations.


Remaining nimble when faced with new threats

As the number of COVID-19 cases rose around the world last January, we enforced our sensitive events policy to prevent behavior like price-gouging on in-demand products like hand sanitizer, masks and paper goods, or ads promoting false cures. As we learned more about the virus and health organizations issued new guidance, we evolved our enforcement strategy to start allowing medical providers, health organizations, local governments and trusted businesses to surface critical updates and authoritative content, while still preventing opportunistic abuse. Additionally, as claims and conspiracies about the coronavirus’s origin and spread were circulated online, we launched a new policy to prohibit both ads and monetized content about COVID-19 or other global health emergencies that contradict scientific consensus. 


In total, we blocked over 99 million COVID-related ads from serving throughout the year, including those for miracle cures, N95 masks due to supply shortages, and most recently, fake vaccine doses. We continue to be nimble, tracking bad actors’ behavior and learning from it. In doing so, we’re able to better prepare for future scams and claims that may arise.


Fighting the newest forms of fraud and scams

Often when we experience a major event like the pandemic, bad actors look for ways to take advantage of people online. We saw an uptick in opportunistic advertising and fraudulent behavior from actors looking to mislead users last year. Increasingly, we’ve seen them use cloaking to hide from our detection, promote non-existent virtual businesses or run ads for phone-based scams to either hide from detection or lure unsuspecting consumers off our platforms with an aim to defraud them.

In 2020 we tackled this adversarial behavior in a few key ways: 

  • Introduced multiple new policies and programs, including our advertiser identity verification program and business operations verification program.

  • Invested in technology to better detect coordinated adversarial behavior, allowing us to connect the dots across accounts and suspend multiple bad actors at once.

  • Improved our automated detection technology and human review processes based on network signals, previous account activity, behavior patterns and user feedback.


The number of ad accounts we disabled for policy violations increased by 70% from 1 million to over 1.7 million. We also blocked or removed over 867 million ads for attempting to evade our detection systems, including cloaking, and an additional 101 million ads for violating our misrepresentation policies. That’s a total of over 968 million ads.   


Protecting elections around the world 

When it comes to elections around the world, ads help voters access authoritative information about the candidates and voting processes. Over the past few years, we introduced strict policies and restrictions around who can run election-related advertising on our platform and the ways they can target ads; we launched comprehensive political ad libraries in the U.S., the U.K., the European Union, India, Israel, Taiwan, Australia and New Zealand; and we worked diligently with our enforcement teams around the world to protect our platforms from abuse. Globally, we continue to expand our verification program and verified more than 5,400 additional election advertisers in 2020. In the U.S., as it became clear the outcome of the presidential election would not be determined immediately, we determined that the U.S. election fell under our sensitive events policy, and enforced a U.S. political ads pause starting after the polls closed and continuing through early December. During that time, we temporarily paused more than five million ads and blocked ads on over three billion Search queries referencing the election, the candidates, or the outcome. We made this decision to limit the potential for ads to amplify confusion in the post-election period.


Demonetizing hate and violence

Last year, news publishers played a critical role in keeping people informed, prepared and safe. We’re proud that digital advertising, including the tools we offer to connect advertisers and publishers, supports this content. We have policies in place to protect both brands and users.


In 2017, we developed more granular means of reviewing sites at the page level, including user-generated comments, to allow publishers to continue to operate their broader sites while protecting advertisers from negative placements by stopping persistent violations. In the years since introducing page-level action, we’ve continued to invest in our automated technology, and it was crucial in a year in which we saw an increase in hate speech and calls to violence online. This investment helped us to prevent harmful web content from monetizing. We took action on nearly 168 million pages under our dangerous and derogatory policy.


Continuing this work in 2021 

We know that when we make decisions through the lens of user safety, it will benefit the broader ecosystem. Preserving trust for advertisers and publishers helps their businesses succeed in the long term. In the upcoming year, we will continue to invest in policies, our team of experts and enforcement technology to stay ahead of potential threats. We also remain steadfast on our path to scale our verification programs around the world in order to increase transparency and make more information about the ad experience universally available.


Posted by Scott Spencer, Vice President, Ads Privacy & Safety




Developer tips and guides: Common policy violations and how you can avoid them

By Andrew Ahn, Product Manager, Google Play App Safety

At Google Play, we want to foster an ecosystem of safe, engaging, useful, and entertaining apps used and loved by billions of Android users worldwide. That’s why we regularly update and revise our Google Play Developer Policies and Developer Distribution Agreement, detailing the boundaries of app content and functionality allowed on the platform, as well as providing the latest guidance on how developers can promote and monetize apps.

In recent efforts to analyze apps for policy compliance on Google Play, we identified some common mistakes and violations that developers make. We’re sharing these with the developer community, along with tips and guides on how to avoid them, to reduce the risk of apps and developer accounts being suspended for violating our policies.

Links that take users back to other apps on the Play Store

One of the most common mistakes we see is apps with buttons and menus that link out to the Play Store -- either to apps by the same developer or to affiliated apps -- without making clear that these are ads or promotional links. Without this clarity, apps may face enforcement for deceptive or disguised ads. One way to avoid this mistake is to label the buttons and links explicitly, for example ‘More Apps’, ‘More Games’, ‘Explore’, or ‘Check out our other apps’.

Example of app content that links out to an app listing on Play

Spammy app descriptions

Another mistake we frequently observe is developers ‘stuffing’ keywords into the app description in the hope of better discoverability and ranking for certain keywords and phrases. Text blocks or lists that contain repetitive or unrelated keywords or references violate our Store Listing and Promotion policy. Writing a clear app description, intended and optimized for readers’ comprehension, is one of the best ways to avoid this violation.
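As a rough self-check before submitting a listing, you could scan your own description for heavy repetition. The sketch below is purely illustrative and is not how Play review works; the function name and the idea of a single-word repetition score are our own.

```python
import re
from collections import Counter

def repetition_score(description: str) -> float:
    """Fraction of words accounted for by the single most repeated word.

    A high score suggests keyword stuffing; any threshold you apply
    is your own judgment call, not a Play policy number.
    """
    words = re.findall(r"[a-z']+", description.lower())
    if not words:
        return 0.0
    most_common_count = Counter(words).most_common(1)[0][1]
    return most_common_count / len(words)

# A stuffed description scores far higher than a plainly written one:
stuffed = "free game free coins free casino free slots free"
clean = "A puzzle game with daily challenges and offline play."
repetition_score(stuffed) > repetition_score(clean)  # → True
```

A real check would also look at repeated phrases and unrelated terms, but even this crude ratio makes the difference between the two descriptions obvious.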

Watch this video to learn how to avoid spammy store listings and efforts to artificially boost app visibility.

Abandoned and broken apps

Some apps were published long ago and are no longer maintained. Abandoned and unmaintained apps often create user experience issues -- broken app functionality, for example. Not only are such apps at risk of low star ratings and negative user reviews, they will also be flagged as violating the minimum functionality policy. To avoid damage to your developer reputation and app enforcement, consider unpublishing such apps from the Play Store. Note that unpublishing won’t affect existing users who have already installed the app, and you can always re-publish after addressing the broken experiences.

Example of an abandoned app that provides a broken app experience


Take the ‘Minimum and Broken Functionality Spam’ course on Play Academy



Apps vs. Webview

Lastly, we observe a large volume of app submissions that are just webviews of existing websites. Most of these apps are submitted primarily to drive traffic rather than to provide an engaging app experience to Android users. Such apps are considered webview spam and are removed from Play. Instead, think through what users can do, or do better, in the app compared with the web experience, and implement relevant features and functionality that enrich it.

Example of a webview without any app functionality


Take the ‘Webview Spam’ course on Play Academy



While the above are some of the most frequent mistakes, make sure to stay up to date with the latest policies by visiting the Play Developer Policy Center. Check out Google Play Academy’s Policy training, including our new Spam courses, and watch our Play PolicyBytes videos to learn more about recent policy updates.


Google Supports Scams Awareness Week

This year, #scamsweek2020 comes at a time when many of us are spending more time at home and using a plethora of new apps and communication tools to work, learn, access information, and stay connected with loved ones. We are joining the ACCC Scamwatch team this week to promote the importance of identifying and managing online security risks - some of which we manage on your behalf without you even realising, and some of which we ask you to make an informed decision about.


When people first started staying home due to COVID-19 earlier this year, our advanced, machine-learning classifiers saw 18 million daily malware and phishing attempts related to COVID-19, in addition to more than 240 million COVID-related spam messages globally. Our security systems have detected a range of new scams circulating, such as phishing emails posing as messages from charities and NGOs, directions from “administrators” to employees working from home, and even notices spoofing healthcare providers. Our systems have also spotted malware-laden sites that pose as sign-in pages for popular social media accounts, health organisations, or even official coronavirus maps. 


To protect you from these risks, we've built advanced security protections into many Google products to automatically identify and stop threats before they ever reach you. Our machine learning models in Gmail already detect and block more than 99.9 percent of spam, phishing and malware. Our built-in security also protects you by alerting you before you enter fraudulent websites, by scanning apps in Google Play before you download, and more. But we want to help you stay secure everywhere online, not just on our products, so we’re providing these simple tips, tools and resources.



Know how to spot and avoid COVID-19 scams
With many of the COVID-19 related scams coming in the form of phishing emails, it’s important to pause and evaluate any COVID-19 email before clicking any links or taking other action. Be wary of requests for personal information such as your home address or bank details. Fake links often imitate established websites by adding extra words or letters to them—check the URL’s validity by hovering over it (on desktop) or with a long press (on mobile).
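One mechanical way to reason about the "extra words or letters" trick is to check whether a link’s hostname really is the trusted domain or one of its subdomains. The helper below is a minimal sketch with domains of our own choosing; real phishing detection is far more involved, so treat this only as an illustration of why hovering over a URL matters.

```python
from urllib.parse import urlparse

def looks_like_official(url: str, trusted_domain: str) -> bool:
    """Return True only if the URL's hostname is trusted_domain or a
    subdomain of it. Lookalikes that merely *contain* the trusted name,
    such as 'google.com.evil.example', fail this check."""
    host = urlparse(url).hostname or ""
    return host == trusted_domain or host.endswith("." + trusted_domain)

looks_like_official("https://accounts.google.com/signin", "google.com")        # → True
looks_like_official("https://google.com.phish.example/login", "google.com")   # → False
looks_like_official("https://goog1e.com/login", "google.com")                 # → False
```

The key point is that only the end of the hostname matters: anything appended after the real domain moves it into an attacker-controlled registrable domain.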

Tips to Avoid Common Scams

Use your company’s enterprise email account for anything work-related
Working with our enterprise customers, we see how employees can put their company’s business at risk when using their personal accounts or devices. Even when working from home, it’s important to keep your work and personal email separate. Enterprise accounts offer additional security features that keep your company’s private information private. If you’re unsure about your company’s online security safeguards, check with your IT professionals to ensure the right security features are enabled, like two-factor authentication.



Secure your video calls on video conferencing apps
The security controls built into Google Meet are turned on by default, so that in most cases, organisations and users are automatically protected. But there are steps you can take on any video conferencing app to make your call more secure:
  • Consider adding an extra layer of verification to help ensure only invited attendees gain access to the meeting.
  • When sharing a meeting invite publicly, be sure to enable the “knocking” feature so that the meeting organiser can personally vet and accept new attendees before they enter the meeting.
  • If you receive a meeting invite that requires installing a new video-conferencing app, always be sure to verify the invitation—paying special attention to potential imposters—before installing.



Install security updates when notified
When working from home, your work computer may not update its security software automatically as it would in the office, connected to your corporate network. It’s important to act immediately on any security update prompts: these updates fix known security vulnerabilities that attackers are actively seeking out and exploiting.



Use a password manager to create and store strong passwords
With all the new applications and services you might be using for work and school, it can be tempting to use just one password for all of them. In fact, 69% of Aussies admit to using the same password across multiple accounts, despite 90% knowing that this presents a security risk. To keep your private information private, always use unique, hard-to-guess passwords. A password manager, like the one built into Android, Chrome, and your Google Account, can help make this easier.



Protect your Google Account
If you use a Google Account, you can easily review any recent security issues and get personalised recommendations to help protect your data and devices with the Security Checkup. Within this tool, you can also run a Password Checkup to learn if any of your saved passwords for third party sites or accounts have been compromised and then easily change them if needed.


You should also consider adding two-step verification (also known as two-factor authentication), which you likely already use for online banking and similar services, to provide an extra layer of security. This helps keep out anyone who shouldn't have access to your accounts by requiring a secondary factor on top of your username and password to sign in. To set this up for your Google Account, go to g.co/2SV.


Protecting your Google Play Console account with 2-Step Verification

Posted by Tom Grinsted, Product Manager, Google Play Console

Google Play Console has something for everyone, from QAs and PMs to engineers and marketing managers. The new Google Play Console beta, available now at play.google.com/console, offers customized, secure access to everyone on your team. For a closer look at some of its new features and workflows, tune in to this week’s series of live webinars, which will also be available on demand.

Granting your team members safe access to specific features in your developer account is one of the best ways to increase the value of our tools for your organization. We want to make sure that your developer account is as safe as possible so you feel confident when granting access. A key way to do that is to make sure that every person who has access to your account signs in using secure methods that follow best practices. That’s why, towards the end of this year, we’re going to start requiring users of Google Play Console to sign in using Google's 2-Step Verification.


2-Step Verification uses both your password and a second way to identify you for added security. This could be a text message to a registered phone, an authenticator app, alerts on supported devices, or a hardware security key. Normally, you only have to do this when you sign in for the first time on a new computer. It’s one of the easiest ways to increase the level of security for you and your team members’ accounts.
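The authenticator-app option mentioned above typically implements the TOTP algorithm from RFC 6238: an HMAC over a time-step counter, dynamically truncated to a short numeric code. A minimal sketch in Python (SHA-1 variant; for illustration only, use a vetted library and app in practice):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = unix_time // step                       # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238's published test secret; at unix time 59 the 6-digit code is 287082
totp(b"12345678901234567890", 59)  # → "287082"
```

Because the code depends on the current time step as well as the shared secret, an intercepted code expires within seconds, which is what makes it a useful second factor.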

Learn more about 2-Step Verification here, and how to set it up for your own account.

If you have any comments or concerns about using 2-Step Verification to sign in to Google Play Console, or if you think it will impact you or your teams’ use of Google Play Console, use this form to let us know. All responses will be read by our product team and will help us shape our future plans.

Your team won’t be required to use 2-Step Verification immediately, although we recommend that you set it up now. We will start mandating 2-Step Verification for new users of Google Play Console towards the end of Q3, followed later in the year by existing users with high-risk permissions, like app publishing or changing the prices of in-app products. We’ll also remind every impacted user in Google Play Console at least 30 days before the change takes effect. We may also start to re-verify when you’re undertaking a sensitive action like changing your developer name or transferring ownership of an app.

Hundreds of thousands of Google Play Console users already use 2-Step Verification to keep their accounts safe, and it's been the default for G Suite customers for years. But we understand that requiring this may impact some of your existing workflows, which is why we’re giving advance notice of this change and asking for your feedback.

We can all take steps to keep our accounts and the developer community safe. Thanks for publishing your apps on Google Play.

