Tag Archives: Testing

Open sourcing GTXiLib, an accessibility test automation framework for iOS

Google believes everyone should be able to access and enjoy the web. We share guidance on building accessible tech over at Google Accessibility and we recently launched a dedicated disability support team. Today, we’re excited to announce that we’ve open sourced GTXiLib, an accessibility test automation framework for iOS, under the Apache license.

We want our products to be accessible, and automation with frameworks like GTXiLib is one of the ways we scale our accessibility testing. GTXiLib can automate the process of checking for some kinds of issues, such as missing labels, hints, or low-contrast text.

GTXiLib is written in Objective-C and integrates with your existing XCTests to perform all the registered accessibility checks before the test tearDown. When the checks fail, the existing test fails as well. Fixing those failures thus leads to better accessibility, and your tests will continue to catch new accessibility issues as they are introduced.
  • Reuse your tests: GTXiLib integrates into your existing functional tests, enhancing the value of any tests that you have or any that you write.
  • Incremental accessibility testing: GTXiLib can be installed onto a single test case, test class, or a specific subset of tests, giving you the freedom to add accessibility testing incrementally. This helped drive GTXiLib adoption in large projects at Google.
  • Author your own checks: GTXiLib has a simple API to create custom checks based on the specific needs of your app. For example, you can ensure every button in your app has an accessibilityHint using a custom check.
Do you also care about accessibility? Help us sharpen GTXiLib by suggesting a check or, better yet, writing one. You can add GTXiLib to your project using CocoaPods or by using its Xcode project file.

We hope you find this useful and look forward to feedback and contributions from the community! Please check out the README for more information.

By Siddartha Janga, Google Central Accessibility Team 

Container Structure Tests: Unit Tests for Docker Images

Usage of containers in software applications is on the rise, and with their increasing usage in production comes a need for robust testing and validation. Containers provide great testing environments, but actually validating the structure of the containers themselves can be tricky. The Docker toolchain provides us with easy ways to interact with the container images themselves, but no real way of verifying their contents. What if we want to ensure a set of commands runs successfully inside of our container, or check that certain files are in the correct place with the correct contents, before shipping?

The Container Tools team at Google is happy to announce the release of the Container Structure Test framework. This framework provides a convenient and powerful way to verify the contents and structure of your containers. We’ve been using this framework at Google to test all of our team’s released containers for over a year now, and we’re excited to finally share it with the public.

The framework supports four types of tests:
  • Command Tests - to run a command inside your container image and verify the output or error it produces
  • File Existence Tests - to check the existence of a file in a specific location in the image’s filesystem
  • File Content Tests - to check the contents and metadata of a file in the filesystem
  • A unique Metadata Test - to verify configuration and metadata of the container itself
Users can specify test configurations through YAML or JSON. This provides a way to abstract away the test configuration from the implementation of the tests, eliminating the need for hacky bash scripting or other solutions to test container images.

Command Tests

The Command Tests give the user a way to specify a set of commands to run inside of a container, and verify that the output, error, and exit code were as expected. An example configuration looks like this:
globalEnvVars:
- key: "VIRTUAL_ENV"
  value: "/env"
- key: "PATH"
  value: "/env/bin:$PATH"

commandTests:

# check that the python binary is in the correct location
- name: "python installation"
  command: "which"
  args: ["python"]
  expectedOutput: ["/usr/bin/python\n"]

# setup a virtualenv, and verify the correct python binary is run
- name: "python in virtualenv"
  setup: [["virtualenv", "/env"]]
  command: "which"
  args: ["python"]
  expectedOutput: ["/env/bin/python\n"]

# setup a virtualenv, install gunicorn, and verify the installation
- name: "gunicorn flask"
  setup: [["virtualenv", "/env"],
          ["pip", "install", "gunicorn", "flask"]]
  command: "which"
  args: ["gunicorn"]
  expectedOutput: ["/env/bin/gunicorn"]
Regexes are used to match the expected output and error of each command (or the excluded output/error, if you want to make sure something didn’t happen). Additionally, setup and teardown commands can be run with each individual test, and environment variables can be set for each individual test or globally for the entire test run (as shown in the example).

File Tests

File Tests allow users to verify the contents of an image’s filesystem. We can check for existence of files, as well as examine the contents of individual files or directories. This can be particularly useful for ensuring that scripts, config files, or other runtime artifacts are in the correct places before shipping and running a container.
fileExistenceTests:

# check that the apt-packages text file exists and has the correct permissions
- name: 'apt-packages'
  path: '/resources/apt-packages.txt'
  shouldExist: true
  permissions: '-rw-rw-r--'
Expected permissions and file mode can be specified for each file path in the form of a standard Unix permission string. As with the Command Tests’ excluded output/error, a boolean can be provided to tell the framework to verify that a file is not present in the filesystem.

Additionally, the File Content Tests verify the contents of files and directories in the filesystem. This can be useful for checking package or repository versions, or config file contents among other things. Following the pattern of the previous tests, regexes are used to specify the expected or excluded contents.
fileContentTests:

# check that the default apt repository is set correctly
- name: 'apt sources'
  path: '/etc/apt/sources.list'
  expectedContents: ['.*httpredir\.debian\.org/debian jessie main.*']

# check that the retry policy is correctly specified
- name: 'retry policy'
  path: '/etc/apt/apt.conf.d/apt-retry'
  expectedContents: ['Acquire::Retries 3;']

Metadata Test

Unlike the previous tests, which can be specified any number of times, the Metadata Test is a singleton test that verifies a container’s configuration. This is useful for making sure things specified in the Dockerfile (e.g. entrypoint, exposed ports, mounted volumes, etc.) are manifested correctly in a built container.
metadataTest:
  env:
    - key: "VIRTUAL_ENV"
      value: "/env"
  exposedPorts: ["8080", "2345"]
  volumes: ["/test"]
  entrypoint: []
  cmd: ["/bin/bash"]
  workdir: ["/app"]

Tiny Images

One interesting case that we’ve put focus on supporting is “tiny images.” We think keeping container sizes small is important, and sometimes the bare minimum in a container image might even exclude a shell. Users might be used to running something like:
`docker run --rm my-image /bin/sh -c "cat /etc/apt/sources.list && grep -rn 'httpredir.debian.org' /etc/apt/sources.list"`
… but this breaks without a working shell in a container. With the structure test framework, we convert images to in-memory filesystem representations, so no shell is needed to examine the contents of an image!

Dockerless Test Runs

At their core, Docker images are just bundles of tarballs. One of the major use cases for these tests is running in CI systems, and often we can't guarantee that we'll have access to a working Docker daemon in these environments. To address this, we created a tar-based test driver, which can handle the execution of all file-related tests through simple tar manipulation. Command tests are currently not supported in this mode, since running commands in a container requires a container runtime.
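Because images really are just tarballs, the core of a file check needs nothing more than archive inspection. As a rough illustration of the idea (a simplified sketch, not the framework’s actual implementation, and ignoring details such as whiteout files and layer ordering), here is how a file existence check could be run against the output of docker save using only the Python standard library:

import tarfile

def file_exists_in_image(image_tar_path, wanted_path):
    """Return True if wanted_path exists in any layer of a `docker save` tarball."""
    wanted = wanted_path.lstrip("/")
    with tarfile.open(image_tar_path) as image:
        for member in image.getmembers():
            # each layer of the saved image is itself a tarball named <layer-id>/layer.tar
            if not member.name.endswith("layer.tar"):
                continue
            with tarfile.open(fileobj=image.extractfile(member)) as layer:
                if any(m.name.lstrip("./") == wanted for m in layer.getmembers()):
                    return True
    return False

# e.g. after `docker save -o python.tar gcr.io/google-appengine/python:latest`
print(file_exists_in_image("python.tar", "/etc/apt/sources.list"))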

This means that using the tar driver, we can retrieve images from a remote registry, convert them into filesystems on disk, and verify file contents and metadata all without a working Docker daemon on the host! Our container-diff library is leveraged here to do all the image processing; see our previous blog post for more information.
structure-test -test.v -driver tar -image gcr.io/google-appengine/python:latest structure-test-examples/python/python_file_tests.yaml

Running in Bazel

Structure tests can also be run natively through Bazel, using the “container_test” rule. Bazel provides convenient build rules for building Docker images, so the structure tests can be run as part of a build to ensure any new built images are up to snuff before being released. Check out this example repo for a quick demonstration of how to incorporate these tests into a Bazel build.
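As a sketch of what that wiring can look like, here is a minimal BUILD snippet (Starlark); the load path follows rules_docker’s documented container_test usage, and the app_image target and structure_tests.yaml file are placeholders for your own image and test specs:

# BUILD file sketch -- assumes rules_docker is already set up in the WORKSPACE
load("@io_bazel_rules_docker//contrib:test.bzl", "container_test")

container_test(
    name = "app_image_structure_test",
    image = ":app_image",                # a container_image target defined in this package
    configs = ["structure_tests.yaml"],  # e.g. the command/file tests shown above
)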

We think this framework can be useful for anyone building and deploying their own containers in the wild, and hope that it can promote their usage everywhere through increasing the robustness of containers. For more detailed information on the test specifications, check out the documentation in our GitHub repository.

By Nick Kubala, Container Tools team

OSS-Fuzz: Five months later, and rewarding projects

Five months ago, we announced OSS-Fuzz, Google’s effort to help make open source software more secure and stable. Since then, our robot army has been working hard at fuzzing, processing 10 trillion test inputs a day. Thanks to the efforts of the open source community who have integrated a total of 47 projects, we’ve found over 1,000 bugs (264 of which are potential security vulnerabilities).

Breakdown of the types of bugs we're finding.

Notable results

OSS-Fuzz has found numerous security vulnerabilities in several critical open source projects: 10 in FreeType2, 17 in FFmpeg, 33 in LibreOffice, 8 in SQLite 3, 10 in GnuTLS, 25 in PCRE2, 9 in gRPC, and 7 in Wireshark, among others. We’ve also had at least one bug collision with another independent security researcher (CVE-2017-2801). (Some of the bugs are still view restricted, so links may show smaller numbers.)

Once a project is integrated into OSS-Fuzz, the continuous and automated nature of OSS-Fuzz means that we often catch these issues just hours after the regression is introduced into the upstream repository, before any users are affected.

Fuzzing finds not only memory safety bugs, but also correctness and logic bugs. One example is a carry propagating bug in OpenSSL (CVE-2017-3732).

Finally, OSS-Fuzz has reported over 300 timeout and out-of-memory failures (~75% of which got fixed). Not every project treats these as bugs, but fixing them enables OSS-Fuzz to find more interesting bugs.

Announcing rewards for open source projects

We believe that user and internet security as a whole can benefit greatly if more open source projects include fuzzing in their development process. To this end, we’d like to encourage more projects to participate and adopt the ideal integration guidelines that we’ve established.

Combined with fixing all the issues that are found, this is often a significant amount of work for developers who may be working on an open source project in their spare time. To support these projects, we are expanding our existing Patch Rewards program to include rewards for the integration of fuzz targets into OSS-Fuzz.

To qualify for these rewards, a project needs to have a large user base and/or be critical to global IT infrastructure. Eligible projects will receive $1,000 for initial integration, and up to $20,000 for ideal integration (the final amount is at our discretion). You have the option of donating these rewards to charity instead, and Google will double the amount.

To qualify for the ideal integration reward, projects must show that:
  • Fuzz targets are checked into their upstream repository and integrated in the build system with sanitizer support (up to $5,000).
  • Fuzz targets are efficient and provide good code coverage (>80%) (up to $5,000). 
  • Fuzz targets are part of the official upstream development and regression testing process, i.e. they are maintained, run against old known crashers and the periodically updated corpora (up to $5,000).
  • The last $5,000 is a “l33t” bonus that we may reward at our discretion for projects that we feel have gone the extra mile or done something really awesome.
We’ve already started to contact the first round of projects that are eligible for the initial reward. If you are the maintainer or point of contact for one of these projects, you may also reach out to us in order to apply for our ideal integration rewards.

The future

We’d like to thank the existing contributors who integrated their projects and fixed countless bugs. We hope to see more projects integrated into OSS-Fuzz, and greater adoption of fuzzing as standard practice when developing software.

By Oliver Chang, Abhishek Arya (Security Engineers, Chrome Security), Kostya Serebryany (Software Engineer, Dynamic Tools), and Josh Armour (Security Program Manager)

How to test your rewarded ads

Have you started using AdMob rewarded video, but feel like you could be getting more out of it?

We know it can be hard to get it right the first time, so we recommend A/B testing when implementing rewarded video in your app. Why? Because rewarded video gives you so much more flexibility and even the smallest tweaks can make a huge difference in your app revenue or give you peace of mind that you’re improving your user experience. With that in mind, here are four steps to help you run an effective A/B test.

1. Start with a defined goal and a hypothesis: Step back and decide on the single hypothesis with the most potential to improve your business, and start there. So where should you start testing? One good place is the design elements of your ad template and how they impact user ad engagement. For example, if your hypothesis is that font size affects clarity and engagement, you could create two variations with different font sizes (10pt and 13pt). Key metrics to watch would be click-through rate, ad revenue, and app exit rate.

Example variables that you could test are:

  • Font size
  • Ad size
  • Reward settings
  • Ad placement within app

2. Remember to change only one variable at a time for it to be a true A/B test: The testing stage will require two variations of your app screen – the current version and your redesigned version. When creating these variations, using an A/B testing platform will make it easy to design, run, and monitor your tests.

3. Run the experiment: Time to put your hypothesis to the test. Set up your app to randomly show your original set-up to half of your users (the “control group”) and the new variation to the other half (the “experimental group”). The control group gives you baseline data to compare against. Without it, you can’t distinguish the response to your new design from other variables, like seasonal changes.

4. Make a decision: Once the experiment is done, it’s time to crunch the data. The first thing to do is revisit your initial goal and hypothesis, and make that all-important final call on whether the new variation is worth adopting. Don’t be too hasty to lock in a new look. If the changes are significant, it’s smart to run the experiment over several time periods to ensure the results aren’t due to seasonality or other variables.
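To make the “crunch the data” step concrete, one simple way to check whether a difference between your control and experimental groups is statistically meaningful is a two-proportion z-test. Here is a minimal sketch with made-up numbers, using only the Python standard library:

import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Made-up example: control shows a 2.0% click-through rate, the new variation 2.4%
z, p = two_proportion_z_test(conv_a=200, n_a=10000, conv_b=240, n_b=10000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 would suggest a real effect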

As you continue to run more tests, remember that even with helpful tools, testing takes time and resources. Don’t waste time testing elements that won’t significantly impact your goal. Use app analytics data to help uncover spots in your app with a lot of opportunity and potential (think: screens with high traffic, high engagement, or large user drop-off). A good idea might be to have a dedicated team member spend 25% of their time monitoring analytics, identifying ad optimization ideas, and testing them.

Until next time, be sure to stay connected on all things AdMob by following our Twitter, LinkedIn and Google+ pages.

The AdMob Team

Source: Inside AdMob


Announcing Google Radio PHY Test, aka “Graphyte”, as part of the Chromium Project

With many different Chromebook models for sale from several different OEMs, the Chrome OS Factory team interfaces with many different Contract Manufacturers (CMs), Original Device Manufacturers (ODMs), and factory teams. The Google Chromium OS Factory Software Platform, a suite of factory tools provided to Chrome OS partners, allows any factory team to quickly bring up a Chrome OS manufacturing test line.

Today, we are announcing that this platform has been extended to remove the friction of bringing up wireless verification test systems with a component called Google Radio PHY Test or “Graphyte.” Graphyte is an open source software framework that can be used and extended by the wireless ecosystem of chipset companies, test solution providers, and wireless device manufacturers, as opposed to the traditional approach of vendor-specific solutions. It is developed in Python and capable of running on Linux and Chrome OS test stations with an initial focus on Wi-Fi, Bluetooth, and 802.15.4 physical layer verification.

Verifying that a wireless device is working properly requires chipset- and instrument-specific software that coordinates transmitting and measuring power and signal quality across channels, bandwidths, and data rates. Graphyte provides high-level API abstractions for controlling wireless chipsets and test instruments, allowing anyone to develop a “plugin” for a given chipset or instrument.

Graphyte architecture.

We’ve worked closely with industry leaders like Intel and LitePoint to ensure the Graphyte APIs have the right level of abstraction, and it is already being used in production on multiple manufacturing lines and several different products.
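To give a feel for what the plugin abstraction enables, here is a purely hypothetical sketch of a device-under-test plugin in Python; the class and method names are invented for illustration and are not Graphyte’s actual API (see the Graphyte User Manual for the real interface):

class ExampleWifiDutPlugin:
    """Hypothetical plugin for a Wi-Fi chipset; all names here are illustrative only."""

    def initialize(self, config):
        # open a control channel to the chipset, e.g. a serial console or SSH session
        self.port = config.get("port", "/dev/ttyUSB0")

    def tx(self, channel, bandwidth_mhz, data_rate, power_dbm):
        # command the chipset to transmit a test signal with the given PHY parameters,
        # while an instrument plugin measures power and signal quality
        raise NotImplementedError("send the vendor-specific TX command here")

    def stop(self):
        # stop transmitting and release the control channel
        raise NotImplementedError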

To get started, use Git to clone our repository. You can learn more by reading the Graphyte User Manual and checking out the example of how to use Graphyte in a real test. You can get involved by joining our mailing list. If you’d like to contribute, please follow the Chromium OS Developer Guide.

To get started with the LitePoint Graphyte plugin, please contact LitePoint directly. To get started with the Intel Graphyte plugin, please contact Intel directly.

Happy testing!

By Kurt Williams, Technical Program Manager

Tips from developers Peak and SoundCloud on how to grow your startup on Google Play


Posted by Francesca Di Felice, Developer Marketing at Google Play

At Playtime 2016, Google Play's series of developer events, we met with top app and game developers from around the world to share learnings on how to build successful businesses on Google Play. Several startups, including game developer Peaklabs and audio platform SoundCloud, presented on stage their own best practices for growth, which you might find helpful.

Testing for growth, by Peak

Hear from Kevin Shanahan, Product Manager from Peak, a brain training app, on how to grow sustainably.



  • Test lots of ideas: You can't be sure of what will work and what won't, so you need to test lots of ideas. Peak ran four different tests to try to increase conversions to Pro (their subscriber offering):
  1. Made the ability to replay games a Pro feature
  2. Reduced price of Pro by 25% in top 2 markets
  3. Bundled add-on modules from partners into Pro
  4. Showed a preview of Pro-only content
          One of these tests resulted in a 50% increase in conversions.

  • Get the basics right: Start with a great product and have a data-informed culture. Don't only test app features; experimenting with your store listing using store listing experiments is also important.
  • Build a robust A/B testing process: Having a well-defined A/B testing process and a system for tracking your experiments is key to testing quickly and effectively.

Improving user retention, by SoundCloud

Andy Carvell, former Product Manager at SoundCloud, an online audio distribution platform that enables its users to upload, record, promote, and share their originally-created sounds, explains how they focus on retention to improve growth.

 

  • Design your retention strategy: Apps with poor retention grow slowly. To increase your retention you should:
    • Convert new users to repeat visitors by providing a strong onboarding experience for new users and taking a high-touch approach during the first days and weeks.
    • Increase visit frequency within this group by providing frequent, timely, and relevant messaging about content or activity on the platform.
    • Win back users who haven't been active recently and are at risk of churning by giving them reasons to come back for another session before you lose them.
    • Re-activate lapsed (long-term churned) users with campaigns to remind them about your app and offer an incentive to return.
  • Build 'growth machines': Create repeatable processes that testing has proven to have a positive impact on retention and churn prevention.
  • Use activity notifications in a personalised and effective way: At SoundCloud, plenty of things happen while users are away from the app that might be relevant to them, such as new content releases or social interactions. They tested 5 new notification types, always keeping a control group to measure the impact, and increased retention by 5%. Watch the video above for more of Andy's tips on making better use of notifications.

Other speakers, such as Silicon Valley VC Greylock, have also shared their tips for startup growth. Watch more sessions from this year's Playtime events to learn best practices from other apps and game partners, and the Google Play team. Get the Playbook for Developers app to stay up to date with news and tips to help you grow a successful business on Google Play.


Announcing OSS-Fuzz: Continuous fuzzing for open source software

We are happy to announce OSS-Fuzz, a new Beta program developed over the past years with the Core Infrastructure Initiative community. This program will provide continuous fuzzing for select core open source software.

Open source software is the backbone of the many apps, sites, services, and networked things that make up “the internet.” It is important that the open source foundation be stable, secure, and reliable, as cracks and weaknesses impact all who build on it.

Recent security stories confirm that errors like buffer overflow and use-after-free can have serious, widespread consequences when they occur in critical open source software. These errors are not only serious, but notoriously difficult to find via routine code audits, even for experienced developers. That's where fuzz testing comes in. By generating random inputs to a given program, fuzzing triggers and helps uncover errors quickly and thoroughly.
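Conceptually, a fuzzer is just a loop that mutates inputs and watches the target for failures. A toy sketch in Python (real engines such as AFL and libFuzzer, mentioned below, are coverage-guided and vastly more effective):

import random

def toy_fuzz(target, seed_corpus, iterations=10000):
    """Feed randomly mutated inputs to `target` and collect the ones that make it fail."""
    crashes = []
    for _ in range(iterations):
        data = bytearray(random.choice(seed_corpus))
        for _ in range(random.randint(1, 4)):   # corrupt a few random bytes
            data[random.randrange(len(data))] = random.randrange(256)
        try:
            target(bytes(data))
        except Exception as exc:                # a real fuzzer also detects crashes, hangs, and OOMs
            crashes.append((bytes(data), repr(exc)))
    return crashes

# Example: "fuzz" a stand-in parser -- here UTF-8 decoding plays the role of the code under test
found = toy_fuzz(lambda d: d.decode("utf-8"), seed_corpus=[b"hello world", b"\xc3\xa9t\xc3\xa9"])
print(f"{len(found)} failing inputs found")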

In recent years, several efficient general purpose fuzzing engines have been implemented (e.g. AFL and libFuzzer), and we use them to fuzz various components of the Chrome browser. These fuzzers, when combined with Sanitizers, can help find security vulnerabilities (e.g. buffer overflows, use-after-free, bad casts, integer overflows, etc), stability bugs (e.g. null dereferences, memory leaks, out-of-memory, assertion failures, etc) and sometimes even logical bugs.

OSS-Fuzz’s goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution. OSS-Fuzz combines various fuzzing engines (initially, libFuzzer) with Sanitizers (initially, AddressSanitizer) and provides a massive distributed execution environment powered by ClusterFuzz.

Early successes

Our initial trials with OSS-Fuzz have had good results. An example is the FreeType library, which is used on over a billion devices to display text (and which might even be rendering the characters you are reading now). It is important for FreeType to be stable and secure in an age when fonts are loaded over the Internet. Werner Lemberg, one of the FreeType developers, was an early adopter of OSS-Fuzz. Recently the FreeType fuzzer found a new heap buffer overflow only a few hours after the source change:

ERROR: AddressSanitizer: heap-buffer-overflow on address 0x615000000ffa
READ of size 2 at 0x615000000ffa thread T0
SCARINESS: 24 (2-byte-read-heap-buffer-overflow-far-from-bounds)
   #0 0x885e06 in tt_face_vary_cvt src/truetype/ttgxvar.c:1556:31

OSS-Fuzz automatically notified the maintainer, who fixed the bug; then OSS-Fuzz automatically confirmed the fix. All in one day! You can see the full list of fixed and disclosed bugs found by OSS-Fuzz so far.

Contributions and feedback are welcome

OSS-Fuzz has already found 150 bugs in several widely used open source projects (and churns ~4 trillion test cases a week). With your help, we can make fuzzing a standard part of open source development, and work with the broader community of developers and security testers to ensure that bugs in critical open source applications, libraries, and APIs are discovered and fixed. We believe that this approach to automated security testing will result in real improvements to the security and stability of open source software.

OSS-Fuzz is launching in Beta right now, and will be accepting suggestions for candidate open source projects. In order for a project to be accepted to OSS-Fuzz, it needs to have a large user base and/or be critical to global IT infrastructure, a general heuristic that we are intentionally leaving open to interpretation at this early stage. See more details and instructions on how to apply here.

Once a project is signed up for OSS-Fuzz, it is automatically subject to the 90-day disclosure deadline for newly reported bugs in our tracker (see details here). This matches industry’s best practices and improves end-user security and stability by getting patches to users faster.

Help us ensure this program is truly serving the open source community and the internet, which relies on this critical software: contribute and leave your feedback on GitHub.

By Mike Aizatsky, Kostya Serebryany (Software Engineers, Dynamic Tools); Oliver Chang, Abhishek Arya (Security Engineers, Google Chrome); and Meredith Whittaker (Open Research Lead).

Test on Android 7.1 Developer Preview in Firebase Test Lab

By Ahmed Mounir Gad, Product Manager, Firebase Test Lab

To deliver the best user experience right out of the gate, Firebase Test Lab for Android allows you to test your apps and ensure their compatibility with multiple device configurations, across OS versions, screen orientations, and locales. With a single click, you can run your tests on hundreds of device configurations in Google Cloud and receive your results quickly.

Today, we’re excited to announce the availability of the Android 7.1 Developer Preview on Firebase Test Lab virtual devices. In addition to testing the Android 7.1 Developer Preview on your physical Android device with the Android Beta program, or on your local Android Emulator, you can use Firebase Test Lab to scale your app testing to hundreds of Android virtual devices.

You can also use Firebase Test Lab to perform your own testing. If you don’t have any test scripts, Robo test is ideal for doing your basic compatibility testing on the new platform. It crawls your app in an attempt to find crashes. You can also use the Espresso Test Recorder in Android Studio to record your own instrumentation tests without writing any code.

From now until the end of December (12/31/2016), Firebase Test Lab will be offered at no charge on the Firebase Blaze plan for all virtual devices, to help you ensure the compatibility of your app with the Android 7.1 Developer Preview release, as well as with other Android releases.

Prepare your app for API level 25, then go to the Firebase Test Lab console to run your first test.

Happy testing!

Robo tests uncovering a crash on Android 7.1 Developer Preview for the Flood-It! app.

Testing your app for Android for Work

Posted by Rich Hyndman, Developer Advocate

Testing is important whether you’re building a dedicated app for the workplace, rolling out new features, or making it easy for IT departments to deploy.

Test DPC is now available: a fully featured, open source sample Device Policy Controller (DPC) that lets you test your apps against any Android for Work feature. A DPC manages the security policies and work apps on devices using Android for Work. You can configure Test DPC as either a device or profile owner to test all the Android for Work scenarios:

  • Profile Owner: Employees using their personal phones for work and allowing their company to own the work applications and data (i.e. bring your own device or BYOD)
  • Device Owner: Enterprises providing devices to employees and managing the entire device
  • Device Owner: Enterprises deploying devices for a narrow use case, such as a mall directory or restaurant menu (i.e. corporate owned, single use devices)

Test DPC simplifies testing and development because you can use it to set the kinds of policies an IT administrator might enforce. You can establish app and intent restrictions, set up managed work profiles, enforce policies, and can even set up fully managed Android devices — something you might find as an info board or kiosk in a public place.

The Test DPC app can be found on Google Play with the source on GitHub. Set up Test DPC as a device/profile owner on your device by checking out this user guide.

If you want to learn more about Android for Work and its capabilities, check out the Android for Work Application Developer Guide for full guidance on optimizing your app for Android for Work.

Note: Your test Android device needs to run Android 5.0 or later and be able to support Android for Work natively.

Iterate faster on Google Play with improved beta testing

Posted by Ellie Powers, Product Manager, Google Play

Today, Google Play is making it easier for you to manage beta tests and get your users to join them. Since we launched beta testing two years ago, developers have told us that it’s become a critical part of their workflow in testing ideas, gathering rapid feedback, and improving their apps. In fact, we’ve found that 80 percent of developers with popular apps routinely run beta tests as part of their workflow.

Improvements to managing a beta test in the Developer Console

Currently, the Google Play Developer Console lets developers release early versions of their app to selected users as an alpha or beta test before pushing updates to full production. The select user group downloads the app on Google Play as normal, but can’t review or rate it on the store. This gives you time to address bugs and other issues without negatively impacting your app listing.

Based on your feedback, we’re launching new features to more effectively manage your beta tests, and enable users to join with one click.

  • NEW! Open beta – Use an open beta when you want any user who has the link to be able to join your beta with just one click. One of the advantages of an open beta is that it allows you to scale to a large number of testers. However, you can also limit the maximum number of users who can join.
  • NEW! Closed beta using email addresses – If you want to restrict which users can access your beta, you have a new option: you can now set up a closed beta using lists of individual email addresses which you can add individually or upload as a .csv file. These users will be able to join your beta via a one-click opt-in link.
  • Closed beta with Google+ community or Google Group – This is the option that you’ve been using today, and you can continue to use betas with Google+ communities or Google Groups. You will also be able to move to an open beta while maintaining your existing testers.

How developers are finding success with beta testing

Beta testing is one of the fast iteration features of Google Play and Android that help drive success for developers like Wooga, the creators of hit games Diamond Dash, Jelly Splash, and Agent Alice. Find out more about how Wooga iterates on Android first from Sebastian Kriese, Head of Partnerships, and Pal Tamas Feher, Head of Engineering.


Kabam is a global leader in AAA quality mobile games developed in partnership with Hollywood studios for franchises such as Fast & Furious, Marvel, Star Wars and The Hobbit. Beta testing helps Kabam engineers perfect the gameplay for Android devices before launch. “The ability to receive pointed feedback and rapidly reiterate via alpha/beta testing on Google Play has been extremely beneficial to our worldwide launches,” said Kabam VP Rob Oshima.

Matt Small, Co-Founder of Vector Unit recently told us how they’ve been using beta testing extensively to improve Beach Buggy Racing and uncover issues they may not have found otherwise. You can read Matt’s blog post about beta testing on Google Play on Gamasutra to hear about their experience. We’ve picked a few of Matt’s tips and shared them below:

  1. Limit more sensitive builds to a closed beta where you invite individual testers via email addresses. Once glaring problems are ironed out, publish your app to an open beta to gather feedback from a wider audience before going to production.
  2. Set expectations early. Let users know about the risks of beta testing (e.g. the software may not be stable) and tell them what you’re looking for in their feedback.
  3. Encourage critical feedback. Thank people when their criticisms are thoughtful and clearly explained and try to steer less-helpful feedback in a more productive direction.
  4. Respond quickly. The more people see actual responses from the game developer, the more encouraged they are to participate.
  5. Enable Google Play game services. To let testers access features like Achievements and Leaderboards before they are published, go into the Google Play game services testing panel and enable them.

We hope this update to beta testing makes it easier for you to test your app and gather valuable feedback and that these tips help you conduct successful tests. Visit the Developer Console Help Center to find out more about setting up beta testing for your app.