
Interactive watch faces with the latest Android Wear update

Posted by Wayne Piekarski, Developer Advocate

The Android Wear team is rolling out a new update that includes support for interactive watch faces. Now, you can detect taps on the watch face to provide information quickly, without having to open an app. This gives you new opportunities to make your watch face more engaging and interesting. For example, in this animation for the Pujie Black watch face, you can see that just touching the calendar indicator quickly changes the watch face to show the agenda for the day.

Interactive watch face API

The first step in building an interactive watch face is to update your build.gradle to use version 1.3.0 of the Wearable Support library. Then, you enable interactive watch faces in your watch face style using setAcceptsTapEvents(true):

setWatchFaceStyle(new WatchFaceStyle.Builder(mService)
    .setAcceptsTapEvents(true)
    // other style customizations
    .build());

To receive taps, you can override the following method:

@Override
public void onTapCommand(int tapType, int x, int y, long eventTime) { }

You will receive a TAP_TYPE_TOUCH event when the user initially taps on the screen, TAP_TYPE_TAP when the user releases their finger, and TAP_TYPE_TOUCH_CANCEL if the user moves their finger while touching the screen. Each event contains the (x, y) coordinates of where the touch occurred. Note that other interactions, such as swipes and long presses, are reserved for use by the Android Wear system user interface.
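As a rough sketch of how these pieces fit together, a tap handler inside your watch face engine might look like the following; the isOnCalendarIndicator() hit test and the mShowAgenda field are hypothetical app-specific details, not part of the API:

@Override
public void onTapCommand(int tapType, int x, int y, long eventTime) {
    switch (tapType) {
        case TAP_TYPE_TAP:
            // The user lifted their finger: treat this as a completed tap.
            // isOnCalendarIndicator() and mShowAgenda are hypothetical.
            if (isOnCalendarIndicator(x, y)) {
                mShowAgenda = !mShowAgenda;
                invalidate(); // request a redraw of the watch face
            }
            break;
        case TAP_TYPE_TOUCH:
            // Initial touch down; you could show pressed-state feedback here.
            break;
        case TAP_TYPE_TOUCH_CANCEL:
            // The finger moved away; abandon any pressed-state feedback.
            break;
    }
}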

And that’s it! Adding interaction to your existing watch faces is really easy with just a few extra lines of code. We have updated the WatchFace sample to show a complete implementation, and published design and development documentation describing the API in detail.

Wi-Fi added to LG G Watch R

This release also brings Wi-Fi support to the LG G Watch R. Wi-Fi support is already available in many Android Wear watches and allows the watch to communicate with the companion phone without requiring a direct Bluetooth connection. So, you can leave your phone at home, and as long as you have Wi-Fi, you can use your watch to receive notifications, send messages, make notes, or ask Google a question. As a developer, you should ensure that you use the Data API to abstract away your communications, so that your application will work on any kind of Android Wear watch, even those without Wi-Fi.
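For example, a minimal sketch of sending data with the Data API might look like this (GoogleApiClient-era Play services; the "/notes" path and key names are hypothetical, and mGoogleApiClient is assumed to be connected with the Wearable.API added):

PutDataMapRequest dataMap = PutDataMapRequest.create("/notes");
dataMap.getDataMap().putString("note_text", "Buy milk");
dataMap.getDataMap().putLong("timestamp", System.currentTimeMillis());
PutDataRequest request = dataMap.asPutDataRequest();

Wearable.DataApi.putDataItem(mGoogleApiClient, request)
        .setResultCallback(new ResultCallback<DataApi.DataItemResult>() {
            @Override
            public void onResult(DataApi.DataItemResult result) {
                // The system syncs the item to the paired device over
                // whichever transport is available (Bluetooth, Wi-Fi, cloud).
            }
        });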

Updates to existing watches

This update to Android Wear will roll out via an over-the-air (OTA) update to all Android Wear watches over the coming weeks. The wearable support library version 1.3 provides the implementation for touch interactions, and is designed to continue working on devices which have not been updated. However, the touch support will only work on updated devices, so you should wait to update your apps on Google Play until the OTA rollout is complete, which we’ll announce on the Android Wear Developers Google+ community. If you want to release immediately but check if touch interactions are available, you can use this code snippet:

try {
    PackageInfo packageInfo = getPackageManager()
            .getPackageInfo("com.google.android.wearable.app", 0);
    if (packageInfo.versionCode > 720000000) {
        // Supports taps - cache this result to avoid calling PackageManager again
    } else {
        // Device does not support taps yet
    }
} catch (PackageManager.NameNotFoundException e) {
    // The Android Wear companion app is not installed on this device
}

Android Wear developers have created thousands of amazing apps for the platform and we can’t wait to see the interactive watch faces you build. If you’re looking for a little inspiration, or just a cool new watch face, check out the Interactive Watch Faces collection on Google Play.

Barcode Detection in Google Play services

Posted by Laurence Moroney, Developer Advocate

With the release of Google Play services 7.8, we’re excited to announce that we’ve added new Mobile Vision APIs, which include a Barcode Scanner API to read and decode a myriad of different barcode types quickly, easily and locally.

Barcode detection

Classes for detecting and parsing barcodes are available in the com.google.android.gms.vision.barcode namespace. The BarcodeDetector class is the main workhorse, processing Frame objects to return a SparseArray<Barcode>.

The Barcode type represents a single recognized barcode and its value. In the case of a 1D barcode such as a UPC code, this will simply be the number encoded in the barcode. This is available in the rawValue property, with the detected encoding type set in the format field.

For 2D barcodes that contain structured data, such as QR codes, the valueFormat field is set to the detected value type, and the corresponding data field is set. So, for example, if the URL type is detected, the constant URL will be loaded into the valueFormat, and the URL property will contain the desired value. Beyond URLs, there are lots of different data types that the QR code can support -- check them out in the documentation here.
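Putting this together, a minimal sketch of synchronous detection on a Bitmap might look like the following (a sketch, not a complete app; a Context and Bitmap are assumed to be in scope, and you would typically run detection off the UI thread):

BarcodeDetector detector = new BarcodeDetector.Builder(context)
        .setBarcodeFormats(Barcode.QR_CODE | Barcode.EAN_13)
        .build();

if (detector.isOperational()) {
    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    SparseArray<Barcode> barcodes = detector.detect(frame);
    for (int i = 0; i < barcodes.size(); i++) {
        Barcode barcode = barcodes.valueAt(i);
        if (barcode.valueFormat == Barcode.URL) {
            String url = barcode.url.url; // structured URL payload
        } else {
            String raw = barcode.rawValue; // e.g. the digits of a UPC code
        }
    }
}
detector.release(); // free the detector's native resources when done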

When using the API, you can read barcodes in any orientation. They don’t need to be straight-on or oriented upright!

Importantly, all barcode parsing is done locally, making it really fast, and in some cases, such as PDF-417, all the information you need might be contained within the barcode itself, so you don’t need any further lookups.

You can learn more about using the API by checking out the sample on GitHub. This uses the Mobile Vision APIs along with a Camera preview to detect both faces and barcodes in the same image.

Supported Barcode Types

The API supports both 1D and 2D barcodes, in a number of subformats.

For 1D barcodes, these are:

EAN-13
EAN-8
UPC-A
UPC-E
Code-39
Code-93
Code-128
ITF
Codabar

For 2D barcodes, these are:

QR Code
Data Matrix
PDF 417

Learn More

It’s easy to build applications that use barcode detection with the Barcode Scanner API, and we’ve provided lots of great resources that will allow you to do so. Check them out here:

Follow the Code Lab

Read the Mobile Vision Documentation

Explore the sample

Face Detection in Google Play services

Posted by Laurence Moroney, Developer Advocate

With the release of Google Play services 7.8, we announced the addition of new Mobile Vision APIs, which include a new Face API that finds human faces in images and video better and faster than before. This API is also smarter at distinguishing faces at different orientations and with different facial features and expressions.

Face Detection

Face Detection is a leap forward from the previous Android FaceDetector.Face API. It’s designed to better detect human faces in images and video for easier editing. It’s smart enough to detect faces even at different orientations -- so if your subject’s head is turned sideways, it can detect it. Specific landmarks can also be detected on faces, such as the eyes, the nose, and the edges of the lips.

Important Note

This is not a face recognition API. Instead, the new API simply detects areas in the image or video that are human faces. It also infers from changes in position from frame to frame that faces in consecutive frames of video are the same face. If a face leaves the field of view and re-enters, it isn’t recognized as a previously detected face.


Detecting a face

When the API detects a human face, it is returned as a Face object. The Face object provides the spatial data for the face so you can, for example, draw bounding rectangles around a face, or, using landmarks on the face, add features in the correct place, such as giving a person a new hat. The following methods provide that data:

  • getPosition() - Returns the top left coordinates of the area where a face was detected
  • getWidth() - Returns the width of the area where a face was detected
  • getHeight() - Returns the height of the area where a face was detected
  • getId() - Returns an ID that the system associated with a detected face
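For instance, a minimal sketch of one-shot detection on a Bitmap, using the methods above to compute a bounding box, might look like this (a Context and Bitmap are assumed to be in scope):

FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(false) // one-shot detection rather than video tracking
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .build();

Frame frame = new Frame.Builder().setBitmap(bitmap).build();
SparseArray<Face> faces = detector.detect(frame);
for (int i = 0; i < faces.size(); i++) {
    Face face = faces.valueAt(i);
    // Build a bounding box from the spatial data described above.
    RectF bounds = new RectF(
            face.getPosition().x,
            face.getPosition().y,
            face.getPosition().x + face.getWidth(),
            face.getPosition().y + face.getHeight());
}
detector.release(); // free the detector's native resources when done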

Orientation

The Face API is smart enough to detect faces in multiple orientations. As the head is a solid object that is capable of moving and rotating around multiple axes, the view of a face in an image can vary wildly.

Here’s an example of a human face, instantly recognizable to a human, despite being oriented in greatly different ways:

The API is capable of detecting this as a face, even in the circumstances where as much as half of the facial data is missing, and the face is oriented at an angle, such as in the corners of the above image.

Here are the method calls available to a face object:

  • getEulerY() - Returns the rotation of the face around the vertical axis -- i.e. has the neck turned so that the face is looking left or right [The y degree in the above image]
  • getEulerZ() - Returns the rotation of the face around the Z axis -- i.e. has the user tilted their neck to cock the head sideways [The r degree in the above image]

Landmarks

A landmark is a point of interest within a face. The API provides a getLandmarks() method which returns a List<Landmark>, where each Landmark object gives the coordinates of one of the following landmarks: bottom of mouth, left cheek, left ear, left ear tip, left eye, left mouth, base of nose, right cheek, right ear, right ear tip, right eye or right mouth.
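As a small sketch, iterating the landmarks of a detected Face might look like this (the detector must be built with setLandmarkType(FaceDetector.ALL_LANDMARKS) for landmarks to be computed):

for (Landmark landmark : face.getLandmarks()) {
    if (landmark.getType() == Landmark.LEFT_EYE) {
        PointF leftEye = landmark.getPosition();
        // e.g. anchor an overlay (hat, glasses) relative to this point
    }
}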

Activity

In addition to detecting landmarks, the API offers the following method calls to allow you to detect various facial states:

  • getIsLeftEyeOpenProbability() - Returns a value between 0 and 1, giving probability that the left eye is open
  • getIsRightEyeOpenProbability() - Returns the same for the right eye
  • getIsSmilingProbability() - Returns a value between 0 and 1 giving a probability that the face is smiling

Thus, for example, you could write an app that only takes a photo when all of the subjects in the image are smiling.
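A sketch of that check might look like the following, continuing from a SparseArray<Face> faces result as above; the detector must be built with setClassificationType(FaceDetector.ALL_CLASSIFICATIONS), and the 0.7 threshold is an arbitrary choice for illustration:

boolean everyoneSmiling = faces.size() > 0;
for (int i = 0; i < faces.size(); i++) {
    // getIsSmilingProbability() returns Face.UNCOMPUTED_PROBABILITY (-1)
    // when the classification could not be computed.
    if (faces.valueAt(i).getIsSmilingProbability() < 0.7f) {
        everyoneSmiling = false;
        break;
    }
}
if (everyoneSmiling) {
    // all subjects appear to be smiling: take the photo
}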

Learn More

It’s easy to build applications that use face detection with the Face API, and we’ve provided lots of great resources that will allow you to do so. Check them out here:

Follow the Code Lab

Read the Documentation

Explore the sample

Google Play services 7.8 – Let’s see what’s Nearby!

Posted by Magnus Hyttsten, Developer Advocate, Play services team

Today we’ve finished the roll-out of Google Play services 7.8. In this release, we’ve added two new APIs. The Nearby Messages API allows you to build simple interactions between nearby devices and people, while the Mobile Vision API helps you create apps that make sense of the visual world, using real-time on-device vision technology. We’ve also added optimizations and new features to existing APIs. Check out the highlights in the video or read about them below.

Nearby Messages

Nearby Messages introduces a cross-platform API to find and communicate with mobile devices and beacons, based on proximity. Nearby uses a combination of Bluetooth, Wi-Fi, and an ultrasonic audio modem to connect devices. And it works across Android and iOS. For more info on Nearby Messages, check out the documentation and the launch blog post.
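As a rough sketch, publishing and subscribing might look like the following (error handling and the user-consent flow that Nearby requires are omitted, and the payload is hypothetical):

GoogleApiClient client = new GoogleApiClient.Builder(this)
        .addApi(Nearby.MESSAGES_API)
        .build();
// ... connect the client, then:

// Broadcast a small byte[] payload to nearby devices.
Message message = new Message("hello nearby".getBytes());
Nearby.Messages.publish(client, message);

// Listen for messages published by nearby devices.
Nearby.Messages.subscribe(client, new MessageListener() {
    @Override
    public void onFound(Message found) {
        String payload = new String(found.getContent());
    }
});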

Mobile Vision API

We’re happy to announce a new Mobile Vision API. Mobile Vision has two components.

The Face API allows developers to find human faces in images and video. It’s faster, more accurate and provides more information than the Android FaceDetector.Face API. It finds faces in any orientation, allows developers to find landmarks such as the eyes, nose, and mouth, and identifies faces that are smiling and/or have their eyes open. Applications include photography, games, and hands-free user interfaces.

The Barcode API allows apps to recognize barcodes in real-time, on device, in any orientation. It supports a range of barcodes and can detect multiple barcodes at once. For more information, check out the Mobile Vision documentation.

Google Cloud Messaging

And finally, Google Cloud Messaging - Google’s simple and reliable messaging service - has expanded notifications to support localization for Android. When composing the notification from the server, set the appropriate body_loc_key, body_loc_args, title_loc_key, and title_loc_args. GCM will handle displaying the notification based on the current device locale, which saves you having to figure out which messages to display on which devices! Check out the docs for more info.

And getting ready for the Android M release, we've added high and normal priority to GCM messaging, giving you additional control over message delivery through GCM. Set messages that need the user's immediate attention to high priority, e.g., a chat message alert or an incoming voice call alert, and keep the remaining messages at normal priority so that they can be handled in the most battery-efficient way without impeding your app's performance.
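As a hedged server-side sketch, a payload combining localized notification strings with high priority might be built like this with org.json; the resource names and registrationToken are placeholders, and the surrounding method would declare throws JSONException:

JSONObject notification = new JSONObject()
        .put("title_loc_key", "promo_title")  // looked up as R.string.promo_title on the device
        .put("body_loc_key", "promo_body")    // R.string.promo_body, formatted with the args below
        .put("body_loc_args", new JSONArray().put("Alice"));

JSONObject payload = new JSONObject()
        .put("to", registrationToken)         // placeholder GCM registration token
        .put("priority", "high")              // deliver immediately, e.g. for chat alerts
        .put("notification", notification);
// POST the payload to the GCM HTTP endpoint with your server API key.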

SDK Now Available!

You can get started developing today by downloading the Google Play services SDK from the Android SDK Manager.

To learn more about Google Play services and the APIs available to you through it, visit our documentation on Google Developers.

Android Developer Story: Zabob Studio and Buff Studio reach global users with Google Play

Posted by Lily Sheringham, Google Play team

South Korean Games developers Zabob Studio and Buff Studio are start-ups seeking to become major players in the global mobile games industry.

Zabob Studio was set up by Kwon Dae-hyeon and his wife in 2013. The couple-run business has already published ten games, including the hits ‘Zombie Judgement Day’ and ‘Infinity Dungeon.’ So far, the company has generated more than KRW ₩140M (approximately $125,000 USD) in sales revenue, with about 60 percent of the studio’s downloads coming from international markets, such as Taiwan and Brazil.

Elsewhere, Buff Studio was founded in 2014 and, right from the start, its first game Buff Knight was an instant hit. It was even featured as the ‘Game of the Week’ on Google Play and was included in “30 Best Games of 2014” lists. A sequel is already in the works, showing the potential of the franchise.

In these videos, Kwon Dae-hyeon, CEO of Zabob Studio, and Kim Do-Hyeong, CEO of Buff Studio, talk about how Google Play services and the Google Play Developer Console have helped them maintain a competitive edge, market their games efficiently to global users and grow revenue on the platform.

Android Developer Story: Buff Studio - Reaching global users with Google Play

Android Developer Story: Zabob Studio - Growing revenue with Google Play

Check out Zabob Studio’s apps and Buff Knight on Google Play!

We’re pleased to share that Android Developer Stories will now come with translated subtitles on YouTube in popular languages around the world. Find out how to turn on YouTube captions. To read locally translated blog posts, visit the Google developer blog in Korean.

Android Experiments: A celebration of creativity and code

Posted by Roman Nurik, Design Advocate, and Richard The, Google Creative Lab

Android was created as an open and flexible platform, giving people more ways to come together to imagine and create. This spirit of invention has allowed developers to push the boundaries of mobile development and has helped make Android the go-to platform for creative projects in more places—from phones, to tablets, to watches, and beyond. We set out to find a way to celebrate the creative, experimental Android work of developers everywhere and inspire more developers to get creative with technology and code.

Today, we’re excited to launch Android Experiments: a showcase of inspiring projects on Android and an open invitation for all developers to submit their own experiments to the gallery.


The 20 initial experiments show a broad range of creative work, from camera experiments to innovative Android Wear apps to hardware hacks to cutting-edge OpenGL demos. All are built using platforms such as the Android SDK and NDK, Android Wear, the IOIO board, Cinder, Processing, OpenFrameworks and Unity. Each project creatively examines, in ways small and big, how we think of the devices we interact with every day.

Today is just the beginning, as we’re opening up experiment submissions to creators everywhere. Whether you’re a student just starting out or you’ve been at it for a while, and no matter the framework you use or the device your project runs on, Android Experiments is open to everybody.

Check out Android Experiments to view the completed projects, or to submit one of your own. While we can’t post every submission, we’d love to see what you’ve created.

Follow along to see what others build at AndroidExperiments.com.

Low-overhead rendering with Vulkan

Posted by Shannon Woods, Technical Program Manager

Developers of games and 3D graphics applications have one key challenge to meet: How complex a scene can they draw in a small fraction of a second? Much of the work in graphics development goes into organizing data so it can be efficiently consumed by the GPU for rendering. But even the most careful developers can hit unforeseen bottlenecks, in part because the drivers for some graphics processors may reorganize all of that data before it can actually be processed. The APIs used to control these drivers are also not designed for multi-threaded use, requiring synchronization with locks around calls that could be more efficiently done in parallel. All of this results in CPU overhead, which consumes time and power that you’d probably prefer to spend drawing your scene.

Lowering overhead and handing control to developers

In order to address some of the sources of CPU overhead and provide developers with more explicit control over rendering, we’ve been working to bring a new 3D rendering API, Vulkan™, to Android. Like OpenGL™ ES, Vulkan is an open standard for 3D graphics and rendering maintained by Khronos. Vulkan is being designed from the ground up to minimize CPU overhead in the driver, and allow your application to control GPU operation more directly. Vulkan also enables better parallelization by allowing multiple threads to perform work such as command buffer construction at once.

An API is only useful if it does what you expect

To make it easier to write an application once that works across a variety of devices, Android 5.0 Lollipop significantly expanded the Android Compatibility Test Suite (CTS) with over fifty thousand new tests for OpenGL ES, and many more have been added since. This provides an extensive open source test suite for identifying problems in drivers so that they can be fixed, creating a more robust and reliable experience for both developers and end users. For Vulkan, we’ll not only develop similar tests for use in the Android CTS, but we’ll also contribute them to Khronos for use in Vulkan’s own open source Conformance Test Suite. This will enable Khronos to test Vulkan drivers across platforms and hardware, and improve the 3D graphics ecosystem as a whole.

It’s all about developer choice

We’ll be working hard to help create, test, and ship Vulkan, but at the same time, we’re also going to contribute to and support OpenGL ES. As a developer, you’ll be able to choose which API is right for you: the simplicity of OpenGL ES, or the explicit control of Vulkan. We’re committed to providing an excellent developer experience, no matter which API you choose.

Vulkan is still under development, but you’ll be able to find specifications, tests, and tools once they are released at http://www.khronos.org/vulkan.

Get your hands on Android Studio 1.3

Posted by Jamal Eason, Product Manager, Android

Previewed earlier this summer at Google I/O, Android Studio 1.3 is now available on the stable release channel. We appreciate the early feedback from developers on our canary and beta channels, which helped us ship a great product.

Android Studio 1.3 is our biggest feature release of the year so far, and includes a new memory profiler, improved testing support, and full editing and debugging support for C++. Let’s take a closer look.

New Features in Android Studio 1.3

Performance & Testing Tools

  • Android Memory (HPROF) Viewer

    Android Studio now allows you to capture and analyze memory snapshots in the native Android HPROF format.

  • Allocation Tracker

    In addition to displaying a table of memory allocations that your app uses, the updated allocation tracker now includes a visual way to view your app’s allocations.

  • APK Tests in Modules

    For more flexibility in app testing, you now have the option to place your code tests in a separate module and use the new test plugin (‘com.android.test’) instead of keeping your tests right next to your app code. This feature requires your app project to use Gradle plugin 1.3.

Code and SDK Management

  • App permission annotations

    Android Studio now has inline code annotation support to help you manage the new app permissions model in the M release of Android (see the sketch after this list). Learn more about code annotations.

  • Data Binding Support

    New data binding features allow you to create declarative layouts in order to minimize boilerplate code by binding your application logic into your layouts. Learn more about data binding.

  • SDK Auto Update & SDK Manager

    Managing Android SDK updates is now part of Android Studio. By default, Android Studio will now prompt you about new SDK and tool updates. You can still adjust your preferences in the new, integrated Android SDK Manager.

  • C++ Support

    As part of the Android Studio 1.3 stable release, we included an Early Access Preview of the C++ editor and debugger support, paired with an experimental build plugin. See the Android C++ Preview page for information on how to get started. Support for more complex projects and build configurations is in development, but let us know your feedback.
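To illustrate the app permission annotations mentioned above, a minimal sketch might look like the following; the method and its body are hypothetical examples, and the annotation comes from the support-annotations library:

// Lint warns callers of this method if they haven't checked or requested
// ACCESS_FINE_LOCATION. @RequiresPermission is from android.support.annotation.
@RequiresPermission(Manifest.permission.ACCESS_FINE_LOCATION)
public Location getLastKnownLocation(LocationManager locationManager) {
    return locationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
}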

Time to Update

An important thing to remember is that updating Android Studio does not require you to change your Android app projects. Updating gives you the latest features while you retain control of which build tools and app dependency versions to use for your Android app.

For current developers on Android Studio, you can check for updates from the navigation menu. For new users, you can learn more about Android Studio on the product overview page or download the stable version from the Android Studio download site.

We are excited to launch this set of features in Android Studio, and we are hard at work developing the next set of tools to make Android development easier on Android Studio. As always, we welcome feedback on how we can help you. Connect with the Android developer tools team on Google+.

Iterate faster on Google Play with improved beta testing

Posted by Ellie Powers, Product Manager, Google Play

Today, Google Play is making it easier for you to manage beta tests and get your users to join them. Since we launched beta testing two years ago, developers have told us that it’s become a critical part of their workflow in testing ideas, gathering rapid feedback, and improving their apps. In fact, we’ve found that 80 percent of developers with popular apps routinely run beta tests as part of their workflow.

Improvements to managing a beta test in the Developer Console

Currently, the Google Play Developer Console lets developers release early versions of their app to selected users as an alpha or beta test before pushing updates to full production. The select user group downloads the app on Google Play as normal, but can’t review or rate it on the store. This gives you time to address bugs and other issues without negatively impacting your app listing.

Based on your feedback, we’re launching new features to more effectively manage your beta tests, and enable users to join with one click.

  • NEW! Open beta – Use an open beta when you want any user who has the link to be able to join your beta with just one click. One of the advantages of an open beta is that it allows you to scale to a large number of testers. However, you can also limit the maximum number of users who can join.
  • NEW! Closed beta using email addresses – If you want to restrict which users can access your beta, you have a new option: you can now set up a closed beta using lists of individual email addresses which you can add individually or upload as a .csv file. These users will be able to join your beta via a one-click opt-in link.
  • Closed beta with Google+ community or Google Group – This is the option that you’ve been using today, and you can continue to use betas with Google+ communities or Google Groups. You will also be able to move to an open beta while maintaining your existing testers.

How developers are finding success with beta testing

Beta testing is one of the fast iteration features of Google Play and Android that help drive success for developers like Wooga, the creators of hit games Diamond Dash, Jelly Splash, and Agent Alice. Find out more about how Wooga iterates on Android first from Sebastian Kriese, Head of Partnerships, and Pal Tamas Feher, Head of Engineering.


Kabam is a global leader in AAA-quality mobile games developed in partnership with Hollywood studios for franchises such as Fast & Furious, Marvel, Star Wars and The Hobbit. Beta testing helps Kabam engineers perfect the gameplay for Android devices before launch. “The ability to receive pointed feedback and rapidly reiterate via alpha/beta testing on Google Play has been extremely beneficial to our worldwide launches,” said Kabam VP Rob Oshima.

Matt Small, Co-Founder of Vector Unit recently told us how they’ve been using beta testing extensively to improve Beach Buggy Racing and uncover issues they may not have found otherwise. You can read Matt’s blog post about beta testing on Google Play on Gamasutra to hear about their experience. We’ve picked a few of Matt’s tips and shared them below:

  1. Limit more sensitive builds to a closed beta where you invite individual testers via email addresses. Once glaring problems are ironed out, publish your app to an open beta to gather feedback from a wider audience before going to production.
  2. Set expectations early. Let users know about the risks of beta testing (e.g. the software may not be stable) and tell them what you’re looking for in their feedback.
  3. Encourage critical feedback. Thank people when their criticisms are thoughtful and clearly explained and try to steer less-helpful feedback in a more productive direction.
  4. Respond quickly. The more people see actual responses from the game developer, the more encouraged they are to participate.
  5. Enable Google Play game services. To let testers access features like Achievements and Leaderboards before they are published, go into the Google Play game services testing panel and enable them.

We hope this update to beta testing makes it easier for you to test your app and gather valuable feedback and that these tips help you conduct successful tests. Visit the Developer Console Help Center to find out more about setting up beta testing for your app.

Auto Backup for Apps made simple

Posted by Wojtek Kaliciński, Developer Advocate, Android

Auto Backup for Apps makes seamless app data backup and restore possible with zero lines of application code. This feature will be available on Android devices running the upcoming M release. All you need to do to enable it for your app is update the targetSdkVersion to 23. You can test it now on the M Developer Preview, where we’ve enabled Auto Backup for all apps regardless of targetSdkVersion.

Auto Backup for Apps is provided by Google to both users and developers at no charge. Even better, the backup data stored in Google Drive does not count against the user's quota. Please note that data transferred may still incur charges from the user's cellular / internet provider.


What is Auto-Backup for Apps?

By default, for users that have opted in to backup, all of the data files of an app are automatically copied out to a user’s Drive. That includes databases, shared preferences and other content in the application’s private directory, up to a limit of 25 megabytes per app. Any data residing in the locations denoted by Context.getCacheDir(), Context.getCodeCacheDir() and Context.getNoBackupFilesDir() is excluded from backup. As for files on external storage, only those in Context.getExternalFilesDir() are backed up.

How to control what is backed up

You can customize what app data is available for backup by creating a backup configuration file in the res/xml folder and referencing it in your app’s manifest:


<application
        android:fullBackupContent="@xml/mybackupscheme">

In the configuration file, specify <include/> or <exclude/> rules to fine-tune the behavior of the default backup agent. A detailed explanation of the rules syntax is available in the documentation.
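As a sketch, a backup scheme might look like the following; the file names are hypothetical examples. Note that once you specify any <include/> rule, only the included files are backed up:

<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/mybackupscheme.xml; the file names below are hypothetical -->
<full-backup-content>
    <include domain="sharedpref" path="user_settings.xml"/>
    <exclude domain="sharedpref" path="gcm_token.xml"/>
    <exclude domain="database" path="device_cache.db"/>
</full-backup-content>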

What to exclude from backup

You may not want certain app data to be eligible for backup. For such data, please use one of the mechanisms above. For example:

  • You must exclude any device specific identifiers, either issued by a server or generated on the device. This includes the Google Cloud Messaging (GCM) registration token which, when restored to another device, can render your app on that device unable to receive GCM messages.
  • Consider excluding account credentials or other sensitive information, e.g., by asking the user to reauthenticate the first time they launch a restored app rather than allowing for storage of such information in the backup.

With such a diverse landscape of apps, it’s important that developers consider how to maximize the benefits to the user of automatic backups. The goal is to reduce the friction of setting up a new device, which in most cases means transferring over user preferences and locally saved content.

For example, if you have the user’s account stored in shared preferences such that it can be restored on install, they won’t even have to think about which account they used to sign in with previously - they can submit their password and get going!

If you support a variety of log-ins (Google Sign-In and other providers, username/password), it’s simple to keep track of which log-in method was used previously so the user doesn’t have to.

Transitioning from key/value backups

If you have previously implemented the legacy key/value backup by subclassing BackupAgent and setting it in your manifest (android:backupAgent), you’re just one step away from transitioning to full-data backups. Simply add the android:fullBackupOnly="true" attribute on <application/>. This is ignored on pre-M versions of Android, meaning onBackup/onRestore will still be called, while on M+ devices it lets the system know you wish to use full-data backups while still providing your own BackupAgent.
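In the manifest, that combination might look like this sketch (the agent class name is a hypothetical placeholder):

<application
        android:backupAgent=".MyLegacyBackupAgent"
        android:fullBackupOnly="true">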

You can use the same approach even if you’re not using key/value backups, but want to do any custom processing in onCreate(), onFullBackup() or be notified when a restore operation happens in onRestoreFinished(). Just remember to call super.onFullBackup() if you want to retain the system implementation of XML include/exclude rules handling.

What is the backup/restore lifecycle?

The data restore happens as part of the package installation, before the user has a chance to launch your app. Backup runs at most once a day, when your device is charging and connected to Wi-Fi. If your app exceeds the data limit (currently set at 25 MB), no more backups will take place and the last saved snapshot will be used for subsequent restores. Your app’s process is killed after a full backup happens and before a restore if you invoke it manually through the bmgr command (more about that below).

Test your apps now

Before you begin testing Auto Backup, make sure you have the latest M Developer Preview on your device or emulator. After you’ve installed your APK, use the adb shell command to access the bmgr tool.

Bmgr is a tool you can use to interact with the Backup Manager:

  • bmgr run schedules an immediate backup pass; you need to run this command once after installing your app on the device so that the Backup Manager has a chance to initialize properly
  • bmgr fullbackup <packagename> starts a full-data backup operation
  • bmgr restore <packagename> restores previously backed up data

If you forget to invoke bmgr run, you might see errors in Logcat when trying the fullbackup and restore commands. If you are still having problems, make sure you have Backup enabled and a Google account set up in system Settings -> Backup & reset.

Learn more

You can find a sample application that shows how to use Auto Backup on our GitHub. The full documentation is available on developer.android.com.

Join the Android M Developer Preview Community on Google+ for more information on Android M features and remember to report any bugs you find with Auto Backup in the bug tracker.