
In reviews we trust — Making Google Play ratings and reviews more trustworthy

Posted by Fei Ye, Software Engineer and Kazushi Nagayama, Ninja Spamologist

Google Play ratings and reviews are extremely important in helping users decide which apps to install. Unfortunately, fake and misleading reviews can undermine users' trust in those ratings. User trust is a top priority for us at Google Play, and we are continuously working to make sure that the ratings and reviews shown in our store are not being manipulated.

There are various ways in which ratings and reviews may violate our developer guidelines:

  • Bad content: Reviews that are profane, hateful, or off-topic.
  • Fake ratings: Ratings and reviews meant to manipulate an app's average rating or top reviews. We've seen different approaches to manipulating the average rating, from 5-star attacks that boost an app's rating to 1-star attacks that drag it down.
  • Incentivized ratings: Ratings and reviews given by real humans in exchange for money or valuable items.

When we see these, we take action on the app itself, as well as the review or rating in question.

In 2018, the Google Play Trust & Safety teams deployed a system that combines human intelligence with machine learning to detect policy violations in ratings and reviews and take enforcement action. A team of engineers and analysts closely monitors suspicious activity in Play's ratings and reviews and regularly improves the model's precision and recall. We also ask skilled reviewers to audit the decisions made by our models for quality assurance.

It's a big job. To give you a sense of the volume we manage, here are some numbers from a recent week:

  • Millions of reviews and ratings detected and removed from the Play Store.
  • Thousands of bad apps identified as a result of suspicious review and rating activity.

Our team can do a lot, but we need your help to keep Google Play a safe and trusted place for apps and games.

If you're a developer, you can help us by doing the following:

  • Don't buy fake or incentivized ratings.
  • Don't run campaigns, in-app or otherwise, like "Give us 5 stars and we'll give you this in-app item!" That counts as incentivized ratings, and it's prohibited by policy.
  • Do read the Google Play Developer Policy to make sure you aren't inadvertently violating it.

Example of a violation: incentivized ratings are not allowed

If you're a user, you can follow these simple guidelines as well:

  • Don't accept or receive money or goods (even virtual ones) in exchange for reviews and ratings.
  • Don't use profanity to criticize an app or game; keep your feedback constructive.
  • Don't post gibberish, hateful, sexual, profane or off-topic reviews; they simply aren't allowed.
  • Do read the comment posting policy. It's pretty concise and talks about all the things you should consider when posting a review to the public.

Finally, if you find bad ratings and reviews on Google Play, help us improve by sending your feedback! Users can mark the review as "Spam" and developers can submit feedback through the Play Console.

Tooltip to flag the review as Spam.

Thanks for helping us keep Google Play a safe and trusted place to discover some of the world's best apps and games.


How to test your rewarded ads

Have you started using AdMob rewarded video, but feel like you could be getting more out of it?

We know it can be hard to get it right the first time, so we recommend A/B testing when implementing rewarded video in your app. Why? Because rewarded video gives you so much flexibility that even the smallest tweaks can make a big difference in your app revenue, or give you peace of mind that you're improving your user experience. With that in mind, here are four steps to help you run an effective A/B test.

1. Start with a defined goal and a hypothesis: Step back and decide on the single hypothesis with the most potential to improve your business, and start there. So where should you start testing? One good place is the design elements in your ad template and how they impact user ad engagement. For example, if your hypothesis is that font size affects clarity and user engagement, you could create two variations with different font sizes (10pt and 13pt). Key metrics to watch would be click-through rate, ad revenue, and, of course, app exit rate. (One way to represent these variants in code is sketched after the list below.)

Example variables that you could test are:

  • Font size
  • Ad size
  • Reward settings
  • Ad placement within app
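
To make step 1 concrete, here's a minimal Kotlin sketch (a natural fit for an AdMob app on Android) of one way to represent the two font-size variants and the metrics you plan to compare. All of the names here are hypothetical and for illustration only; they are not part of the AdMob SDK.

```kotlin
// Hypothetical experiment definitions; illustrative names, not AdMob APIs.
data class AdTemplateVariant(
    val name: String,
    val ctaFontSizeSp: Float // call-to-action font size (sp is Android's scalable unit)
)

// The two variants under test: the current design and the proposed change.
val control = AdTemplateVariant(name = "control_10sp", ctaFontSizeSp = 10f)
val experiment = AdTemplateVariant(name = "experiment_13sp", ctaFontSizeSp = 13f)

// Per-variant metrics to compare once the test has run.
data class VariantMetrics(
    val impressions: Long,
    val clicks: Long,
    val adRevenueUsd: Double,
    val appExits: Long
) {
    val clickThroughRate: Double
        get() = if (impressions > 0) clicks.toDouble() / impressions else 0.0
}
```

Writing the variants down as data like this keeps the test honest: the two variants differ in exactly one field, which is what step 2 below asks for.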

2. Remember to change only one variable at a time for it to be a true A/B test: The testing stage requires two variations of your app screen: the current version and your redesigned version. When creating these variations, an A/B testing platform will make it easy to design, run, and monitor your tests.

3. Run the experiment: Time to run the test. Set up your app to randomly show your original set-up to half of your users (the “control group”) and the new variation to the other 50% (the “experimental group”). The control group gives you baseline data to compare against; without it, you can't tell whether a change in your metrics comes from the new design or from other factors, like seasonal change.
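
Here's a minimal sketch of that 50/50 split in Kotlin. It buckets users deterministically by hashing a stable user or installation ID, so each user always lands in the same group across sessions; the function name, ID source, and experiment name are assumptions for illustration, not an AdMob API.

```kotlin
// Deterministic 50/50 assignment: hashing a stable ID keeps assignment
// "sticky", so a user doesn't bounce between designs on every launch.
// (Illustrative sketch; the experiment name and ID source are assumptions.)
fun assignGroup(userId: String, experimentName: String = "cta_font_size"): String {
    val bucket = Math.floorMod((userId + experimentName).hashCode(), 2)
    return if (bucket == 0) "control" else "experiment"
}
```

Salting the hash with the experiment name means a user's group in this test is independent of their group in any other test you run.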

4. Make a decision: Once the experiment is done, it's time to crunch the data. The first thing to do is revisit your initial goal and hypothesis, then make the all-important final call on whether the new variation is worth the change. Don't be too hasty to lock in a new look: even if the changes look significant, it's smart to repeat the experiment over several time periods to ensure the results aren't due to seasonality or other variables.
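
As a sketch of what “crunching the data” can look like for one metric, here's a standard two-proportion z-test on click-through rate in Kotlin. This is a generic statistical check, not an AdMob API; the 1.96 threshold corresponds to roughly 95% confidence that the difference isn't due to chance.

```kotlin
import kotlin.math.abs
import kotlin.math.sqrt

// Two-proportion z-test: is the CTR difference between control and
// experiment larger than random noise would explain?
fun ctrDifferenceIsSignificant(
    controlClicks: Long, controlImpressions: Long,
    expClicks: Long, expImpressions: Long
): Boolean {
    val p1 = controlClicks.toDouble() / controlImpressions
    val p2 = expClicks.toDouble() / expImpressions
    // Pooled CTR under the null hypothesis that the variants perform the same.
    val pooled = (controlClicks + expClicks).toDouble() /
        (controlImpressions + expImpressions)
    val stdErr = sqrt(pooled * (1 - pooled) *
        (1.0 / controlImpressions + 1.0 / expImpressions))
    val z = (p2 - p1) / stdErr
    return abs(z) > 1.96 // ~95% confidence
}
```

For example, 480 clicks on 50,000 control impressions versus 560 clicks on 50,000 experiment impressions gives z ≈ 2.5, which clears the 95% bar; a much smaller sample with the same rates would not.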

As you continue to run more tests, remember that even with helpful tools, testing takes time and resources. Don't waste time testing elements that won't meaningfully move your goal. Use app analytics data to uncover the spots in your app with the most opportunity and potential (think: screens with high traffic, high engagement, or large user drop-off). A good approach might be to have a dedicated team member spend 25% of their time monitoring analytics, identifying ad optimization ideas, and testing them.

Until next time, be sure to stay connected on all things AdMob by following our Twitter, LinkedIn and Google+ pages.

The AdMob Team

Source: Inside AdMob