Author Archives: Android Developers

Wear OS by Google developer preview

Posted by Hoi Lam, Lead Developer Advocate, Wear OS by Google

Today we launched the Wear OS by Google developer preview and brought Android P platform features to wearables. The developer preview includes updated system images on the official Android Emulator and a downloadable system image for the Huawei Watch 2 Bluetooth or Huawei Watch 2 Classic Bluetooth. This initial release is intended for developers only and is not for daily or consumer use. Therefore, it is only available via manual download and flash. Please refer to the release notes for known issues before downloading and flashing your device.

In this release, we would like to highlight the following features that developers should pay attention to:

  • Restriction related to non-SDK methods and fields: To improve app compatibility, Android P has started the process of restricting access to non-SDK methods and fields. Developers should make plans to migrate away from these. If there is no public equivalent for your use case, please let us know.
  • Dark UI system theme: To enhance glanceability, Wear OS has used a UI theme with a darker/black background for the notifications stream and system launcher since the start of the year. This is now also the default system theme, which should further improve glanceability for Wear apps. Developers should check the accessibility of their app's UI after this change.
  • Limited background activity: To conserve power, apps will no longer be allowed to run in the background unless the watch is on the charger. Developers should note that Wear OS goes further with Android's app standby feature than some other form factors. Exceptions to this include watch faces and complications that the user currently has selected. This feature will be rolled out gradually in the developer preview, so you may not see it immediately on your device, but you should build your apps accordingly by removing background services.
  • Turning off radios when off body: To conserve power, the Bluetooth, Wi-Fi, and cellular radios will be turned off when the watch is detected to be off body for an extended period of time. Again, this feature will be rolled out gradually, so you may not see it on your device initially. If this feature causes challenges in your development process, you can disable it via adb; please follow the instructions in the release notes.
  • Wi-Fi off when Bluetooth is disconnected: To conserve power, the device will no longer automatically connect to Wi-Fi when disconnected from Bluetooth. Exceptions include when an app is requesting a high-bandwidth network or when the watch is on the charger. This feature will be rolled out gradually, so you may not see it on your device initially.

Please give us your feedback

We expect to provide several updates to this preview before the final production release. Please submit any bugs you find via the Wear OS by Google issue tracker. The earlier you submit them, the higher the likelihood that we can include the fixes in the final release.

Android Studio 3.1

Posted by Jamal Eason, Product Manager, Android

We are excited to announce that Android Studio 3.1 is now available to download in the stable release channel. The focus areas for this release are around product quality and app development productivity. In addition to many underlying quality changes, we added several new features into Android Studio 3.1 that you should integrate into your development flow.

New to Android Studio 3.1 is a C++ performance profiler to help troubleshoot performance bottlenecks in your app code. For those of you with a Room or SQLite database in your app, we added better code editor support to aid in writing your SQL table and query creation statements. We also added better lint support for your Kotlin code, and accelerated your testing with an updated Android Emulator with Quick Boot. If any of these features sound exciting or you are looking for the next stable version of Android Studio, you should download Android Studio 3.1 today!

Check out the list of new features in Android Studio 3.1 below, organized by key developer flows.

What's new in Android Studio 3.1

Develop

  • Kotlin Lint Checks - Since announcing official Kotlin language support on the Android platform last year, we continue to invest in Kotlin language support in Android Studio. In Android Studio 3.1, we enhanced the Lint code quality checks so that you can now run them from the command line as well as from the IDE. Just open an Android Studio project and run gradlew lint from the command line. Learn more.

Kotlin Lint checks via command line

  • Database Code Editing - Editing inline SQL/Room Database code in your Android project is now even easier with Android Studio 3.1. This release has SQL code completion in your @Query declarations, better SQL statement refactoring, and SQL code navigation across your project. Learn more.

Room Database code completion
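To picture where this helps, here is a minimal, hypothetical Room DAO of the kind the new editor support targets; the table and column names are invented for illustration, and the imports use the android.arch.persistence.room packages Room shipped under at the time of this release:

import android.arch.persistence.room.Dao;
import android.arch.persistence.room.Query;
import java.util.List;

@Dao
public interface UserDao {
  // Android Studio 3.1 offers SQL completion, refactoring, and navigation
  // inside this @Query string (hypothetical "user" table).
  @Query("SELECT name FROM user WHERE last_name LIKE :search")
  List<String> findUserNames(String search);
}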

  • IntelliJ Platform Update: Android Studio 3.1 includes the IntelliJ 2017.3.3 platform release, which has many new features such as new Kotlin language intentions and built-in support for SVG image preview. Learn more.

Build

  • D8 Dex Compiler - D8 is now the default dex compiler in Android Studio 3.1. Replacing the legacy DX compiler, D8 dexing is an under-the-hood APK compilation step that makes your app size smaller, enables accurate step debugging, and often leads to faster builds. Ensure that your gradle.properties either has no android.enableD8 flag or, if it does, that the flag is set to true. Learn more.
  • New Build Output Window - Android Studio 3.1 has an updated Build output window which organizes build status and errors in a new tree view. This change also consolidates the legacy Gradle output into this new window. Learn more.

New Build Output Window

Test

  • Quick Boot - Quick Boot allows you to resume your Android Emulator session in under 6 seconds. Slow start time on the Android Emulator was a major pain point we heard from you, and Quick Boot solves this issue. Like a physical Android device, the emulator must perform an initial cold boot, but subsequent starts are fast. The feature is enabled by default for all Android Virtual Devices. Additionally, in this release, you have finer-grained control over when to use Quick Boot and the ability to save the Quick Boot state on demand under the emulator settings page. Learn more about other top Android Emulator features.

Quick Boot On Demand Setting

  • System Images and Frameless Device Skins - The latest version of the Android Emulator now supports the Google Play Store and Google APIs on API 24 (Nougat) - API 27 (Oreo) emulator system images, as well as the P Developer Preview. Additionally, the emulator device skins have been updated to work in a new frameless mode, which can help you test your app with 18:9 screen aspect ratios or the Android P Developer Preview DisplayCutout APIs. Learn more.

Window frameless mode in the Android Emulator

Optimize

  • C++ CPU Profiling - Last year with Android Studio 3.0, we launched a brand new set of Android profilers to measure the CPU, Memory, and Network Activity in your app. With Android Studio 3.1, in addition to performance profiling your Kotlin and Java language app code, you can now profile the C++ code in your app. Using simpleperf as the backend, the C++ profiler allows you to record C++ method traces. Learn more.

C++ CPU Profiler

  • Network Profiler Updates: Threads & Network Request - To aid with analyzing network traffic in your app, we added a new Network Thread view to inspect multithreaded network traffic, and we also added a new Network Request tab to dig into the network requests over time. With these updates to the Network Profiler you will have additional tools to trace the network traffic from each thread and network request all the way down through the network call stack. Learn more.

Network Profiler with thread support

To recap, Android Studio 3.1 includes these new major features:

Develop

  • Kotlin Lint Checks
  • Database Code Editing
  • IntelliJ Platform Update

Build

  • D8 Dex Compiler
  • New Build Output Window

Test & Debug

  • Quick Boot for Android Emulator
  • API 27 with Google Play Emulator System Images
  • Window frameless mode for Android Emulator

Optimize

  • C++ Profiler
  • Network Profiler - Thread Support
  • Network Profiler - Request Support

Check out the release notes for more details.

Getting Started

Download

If you are using a previous version of Android Studio, you can upgrade to Android Studio 3.1 today or you can download the update from the official Android Studio download page.

We appreciate any feedback on things you like, issues, or features you would like to see. If you find a bug or issue, feel free to file an issue. Connect with us, the Android Studio development team, on our Google+ page or on Twitter.

Activity Recognition’s new Transition API makes context-aware features accessible to all developers

Posted by Marc Stogaitis, Tajinder Gadh, and Michael Cai, Android Activity Recognition Team

Phones are the most personal devices we bring with us everywhere, but until now it's been hard for apps to adjust their experience to a user's continually changing environment and activity. We've heard from developer after developer that they're spending valuable engineering time to combine various signals like location and sensor data just to determine when the user has started or ended an activity like walking or driving. Even worse, when apps are independently and continuously checking for changes in user activity, battery life suffers. That's why today, we're excited to make the Activity Recognition Transition API available to all Android developers - a simple API that does all the processing for you and just tells you what you actually care about: when a user's activity has changed.

Since November of last year, the Transition API has been working behind the scenes to power the Driving Do-Not-Disturb feature launched on the Pixel 2. While it might seem simple to turn on Do-Not-Disturb when car motion is detected by the phone's sensors, many tricky challenges arise in practice. How can you tell if stillness means the user parked their car and ended a drive or simply stopped at a traffic light and will continue on? Should you trust a spike in a non-driving activity or is it a momentary classification error? With the Transition API, all Android developers can now leverage the same sets of training data and algorithmic filtering used by Google to confidently detect these changes in user activity.
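As a rough sketch of what registering for such a change looks like with the Google Play services location library (the driving use case below mirrors the Do-Not-Disturb example, but the surrounding class and PendingIntent wiring are illustrative assumptions, not code from this post):

import android.app.PendingIntent;
import android.content.Context;
import com.google.android.gms.location.ActivityRecognition;
import com.google.android.gms.location.ActivityTransition;
import com.google.android.gms.location.ActivityTransitionRequest;
import com.google.android.gms.location.DetectedActivity;
import java.util.Arrays;
import java.util.List;

public class DrivingTransitions {
  // Asks the Transition API to notify the given PendingIntent when the user
  // starts or stops driving.
  public static void register(Context context, PendingIntent pendingIntent) {
    List<ActivityTransition> transitions = Arrays.asList(
        new ActivityTransition.Builder()
            .setActivityType(DetectedActivity.IN_VEHICLE)
            .setActivityTransition(ActivityTransition.ACTIVITY_TRANSITION_ENTER)
            .build(),
        new ActivityTransition.Builder()
            .setActivityType(DetectedActivity.IN_VEHICLE)
            .setActivityTransition(ActivityTransition.ACTIVITY_TRANSITION_EXIT)
            .build());

    ActivityRecognition.getClient(context)
        .requestActivityTransitionUpdates(
            new ActivityTransitionRequest(transitions), pendingIntent);
  }
}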

Intuit partnered with us to test the Transition API and found it an ideal solution for their QuickBooks Self-Employed app:

"QuickBooks Self-Employed helps self-employed workers maximize their deductions at tax time by importing transactions and automatically tracking car mileage. Before the Transition API, we created our own solution to track mileage that combined GPS, phone sensors, and other metadata, but due to the wide variability in Android devices, our algorithm wasn't 100% accurate and some users reported missing or incomplete trips. We were able to build a proof-of-concept using the Transition API in a matter of days and it has now replaced our existing solution, offering a more reliable solution that also reduced our battery consumption. The Transition API frees us up to focus our efforts on being the best possible tax solution," say Pranay Airan and Mithun Mahadevan from Intuit.

Automatic mileage tracking in QuickBooks Self-Employed

Life360 similarly implemented the Transition API in their app with significant improvements in activity detection latency and battery consumption:

"With over 10 million active families, Life360 is the world's largest mobile app for families. Our mission is to become the must-have Family Membership that gives families peace of mind anytime and anywhere. Today we do that through location sharing and 24/7 safety features like monitoring driving behavior of family members, so measuring activities accurately and with minimal battery drain is critical. To determine when a user has started or finished a drive, our app previously relied on a combination of geofences, the Fused Location Provider API and the Activity Recognition API, but there were many challenges with that approach including how to quickly detect the start of the drive without excessively draining battery and interpreting the granular and rapidly changing reading from the raw Activity Recognition API. But in testing the Transition API, we are seeing higher accuracy and reduced battery drain over our previous solution, more than meeting our needs," says Dylan Keil from Life360.

Live location sharing in Life360

In the coming months, we will continue adding new activities to the Transition API to support even more kinds of context-aware features on Android like differentiating between road and rail vehicles. If you're ready to use the Transition API in your app, check out our API guide.

Our big bet on mobile games at Game Developers Conference 2018

Posted by Benjamin Frenkel, Product Manager, Google Play Instant

We've been working hard to make Google Play the premier platform for game discovery and a place for you to grow your business. In the last year, the number of Android users who installed a game has more than doubled. Nearly 40% of that growth came from emerging markets, including Brazil, India, Indonesia and Mexico. Our investments extend beyond the Play Store and include many key Google products:

  • Last week, we introduced a gaming solution from Google Maps APIs that enables you to build game worlds based on real world data to find the best places for gameplay.
  • We also launched Agones, an open source, dedicated game server hosting product built on Google Cloud Platform, in collaboration with Ubisoft, to support multiplayer games.
  • At last month's Mobile World Congress, we released version 1.0 of ARCore, our augmented reality SDK for Android, enabling you to publish AR apps and games to Google Play for the first time and reach 100M devices across the Android ecosystem.
  • Over the next few months, we'll roll out a beta for click-to-play video ads on Google Play—a new way to reach players with sight, sound and motion. These placements will help you showcase your games.

Today, during our annual Google Developer Day at the Game Developers Conference, we introduced new tools and platforms to improve the overall game discovery on Google Play and give you more ways to deliver engaging player experiences.

Introducing Google Play Instant

With all the great games available on Google Play, we want to make discovery easier and remove friction during the install process. Installing and opening a game takes time and results in many players never getting to experience your game. We're thrilled to announce that instant apps are now available for games.

This means that with a tap, players can try a game without having to download it first. Games available instantly today include: Clash Royale, Words with Friends 2, and Bubble Witch 3 Saga, and other titles from Playtika, MZ, Jam City, and Hothead Games.

Video: https://www.youtube.com/watch?v=u_STBSPQxYA&feature=youtu.be

We're calling this new experience Google Play Instant. To try it out, simply launch the Google Play Store on your Android device and visit the Instant Gameplay collection. Or, you can visit the "Arcade" in our redesigned Google Play Games app and launch any of the "Instant Gameplay" collection games. Google Play Instant also makes it easier for your players to invite their friends to try out games right away through social invites, and lets you share games through marketing campaigns.

Google Play Instant is still in closed beta, and we look forward to opening it up more broadly later this year. It provides a collection of extensions to the instant apps framework that better support the needs of game developers, including a higher APK size limit of 10MB, progressive download support for executable code and game assets, and support for the NDK and game engines using existing tool chains. We're also working with the popular game development platform Unity, and others including Cocos, to add IDE support that makes it easy for developers to build instant apps. Developers can sign up for more information about Google Play Instant as it becomes available.

Discover insights from game developers who have successfully benefited from Google Play Instant. Read how Zynga, King, Hothead Games, Jam City, Playtika, MZ and Magma Mobile successfully used instant apps to acquire new users, improve retention, and effectively cross-promote their games.

Google Play Console tools to build high quality games

We also added some useful tools to the Play Console to help you build great games, including:

  • A new internal testing track that will allow you to quickly test and iterate on new games and features. The track comes in addition to the alpha and beta testing tracks, and makes your game available to up to 100 testers within seconds.
  • Demo loops for the pre-launch report, a new feature that lets you predefine a likely series of actions in a game and have this "loop" run on live devices in the Test Lab (bypassing the robo crawler).

This is just the start of what we have planned for 2018. We can't wait to see Google Play Instant bring new audiences to your games.

Android Security 2017 Year in Review

Originally posted by Dave Kleidermacher, Vice President of Security for Android, Play, ChromeOS, on the Google Security Blog

Our team's goal is simple: secure more than two billion Android devices. It's our entire focus, and we're constantly working to improve our protections to keep users safe.

Today, we're releasing our fourth annual Android security year in review. We compile these reports to help educate the public about the many different layers of Android security, and also to hold ourselves accountable so that anyone can track our security work over time.

We saw some really positive momentum last year and this post includes some, but not nearly all, of the major moments from 2017. To dive into all the details, you can read the full report at: g.co/AndroidSecurityReport2017

Google Play Protect

In May, we announced Google Play Protect, a new home for the suite of Android security services on nearly two billion devices. While many of Play Protect's features had been securing Android devices for years, we wanted to make these more visible to help assure people that our security protections are constantly working to keep them safe.

Play Protect's core objective is to shield users from Potentially Harmful Apps, or PHAs. Every day, it automatically reviews more than 50 billion apps, other potential sources of PHAs, and devices themselves and takes action when it finds any.

Play Protect uses a variety of different tactics to keep users and their data safe, but the impact of machine learning is already quite significant: 60.3% of all Potentially Harmful Apps were detected via machine learning, and we expect this to increase in the future.

Protecting users' devices

Play Protect automatically checks Android devices for PHAs at least once every day, and users can conduct an additional review at any time for some extra peace of mind. These automatic reviews enabled us to remove nearly 39 million PHAs last year.

We also update Play Protect to respond to trends that we detect across the ecosystem. For instance, we recognized that nearly 35% of new PHA installations were occurring when a device was offline or had lost network connectivity. As a result, in October 2017, we enabled offline scanning in Play Protect, and have since prevented 10 million more PHA installs.

Preventing PHA downloads

Devices that downloaded apps exclusively from Google Play were nine times less likely to get a PHA than devices that downloaded apps from other sources. And these security protections continue to improve, partially because of Play Protect's increased visibility into newly submitted apps to Play. It reviewed 65% more Play apps compared to 2016.

Play Protect also doesn't just secure Google Play—it helps protect the broader Android ecosystem as well. Thanks in large part to Play Protect, the installation rates of PHAs from outside of Google Play dropped by more than 60%.

Security updates

While Google Play Protect is a great shield against harmful PHAs, we also partner with device manufacturers to make sure that the version of Android running on user devices is up-to-date and secure.

Throughout the year, we worked to improve the process for releasing security updates, and 30% more devices received security patches than in 2016. Furthermore, no critical security vulnerabilities affecting the Android platform were publicly disclosed without an update or mitigation available for Android devices. This was possible due to the Android Security Rewards Program, enhanced collaboration with the security researcher community, coordination with industry partners, and built-in security features of the Android platform.

New security features in Android Oreo

We introduced a slew of new security features in Android Oreo: making it safer to get apps, dropping insecure network protocols, providing more user control over identifiers, hardening the kernel, and more.

We highlighted many of these over the course of the year, but some may have flown under the radar. For example, we updated the overlay API so that apps can no longer block the entire screen and prevent you from dismissing them, a common tactic employed by ransomware.

Openness makes Android security stronger

We've long said it, but it remains truer than ever: Android's openness helps strengthen our security protections. For years, the Android ecosystem has benefitted from researchers' findings, and 2017 was no different.

Security reward programs

We continued to see great momentum with our Android Security Rewards program: we paid researchers $1.28 million, totalling more than two million dollars since the start of the program. We also increased our top-line payouts for exploits that compromise TrustZone or Verified Boot from $50,000 to $200,000, and remote kernel exploits from $30,000 to $150,000.

In parallel, we also introduced the Google Play Security Rewards Program, offering a bonus bounty to security researchers who discover and disclose select critical vulnerabilities in apps hosted on Play to those apps' developers.

External security competitions

Our teams also participated in external vulnerability discovery and disclosure competitions, such as Mobile Pwn2Own. At the 2017 Mobile Pwn2Own competition, no exploits successfully compromised the Google Pixel. And of the exploits demonstrated against devices running Android, none could be reproduced on a device running unmodified Android source code from the Android Open Source Project (AOSP).

We're pleased to see the positive momentum behind Android security, and we'll continue our work to improve our protections this year, and beyond. We will never stop our work to ensure the security of Android users.

Cryptography Changes in Android P

Posted by Adam Vartanian, Software Engineer

We hope you're enjoying the first developer preview of Android P. We wanted to specifically call out some backward-incompatible changes we plan to make to the cryptographic capabilities in Android P, which you can see in the developer preview.

Changes to providers

Starting in Android P, we plan to deprecate some functionality from the BC provider that's duplicated by the AndroidOpenSSL (also known as Conscrypt) provider. This will only affect applications that specify the BC provider explicitly when calling getInstance() methods. To be clear, we aren't doing this because we are concerned about the security of the implementations from the BC provider, but rather because having duplicated functionality imposes additional costs and risks while not providing much benefit.

If you don't specify a provider in your getInstance() calls, no changes are required.

If you specify the provider by name or by instance—for example, Cipher.getInstance("AES/CBC/PKCS7PADDING", "BC") or Cipher.getInstance("AES/CBC/PKCS7PADDING", Security.getProvider("BC"))—the behavior you get in Android P will depend on what API level your application targets. For apps targeting an API level before P, the call will return the BC implementation and log a warning in the application log. For apps targeting Android P or later, the call will throw NoSuchAlgorithmException.

To resolve this, you should stop specifying a provider and use the default implementation.
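As a minimal before/after sketch of that change (exception handling trimmed for brevity):

import java.security.GeneralSecurityException;
import javax.crypto.Cipher;

class CipherMigration {
  static Cipher getAesCipher() throws GeneralSecurityException {
    // Before: Cipher.getInstance("AES/CBC/PKCS7PADDING", "BC");
    // On apps targeting Android P or later this throws NoSuchAlgorithmException.

    // After: omit the provider and use the platform's default implementation.
    return Cipher.getInstance("AES/CBC/PKCS7PADDING");
  }
}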

In a later Android release, we plan to remove the deprecated functionality from the BC provider entirely. Once removed, any call that requests that functionality from the BC provider (whether by name or instance) will throw NoSuchAlgorithmException.

Removal of the Crypto provider

In a previous post, we announced that the Crypto provider was deprecated beginning in Android Nougat. Since then, any request for the Crypto provider by an application targeting API 23 (Marshmallow) or before would succeed, but requests by applications targeting API 24 (Nougat) or later would fail. In Android P, we plan to remove the Crypto provider entirely. Once removed, any call to SecureRandom.getInstance("SHA1PRNG", "Crypto") will throw NoSuchProviderException. Please ensure your apps have been updated.
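A minimal sketch of the corresponding update; using the no-argument SecureRandom constructor is the usual replacement, though treat this as an illustration rather than wording from the post:

import java.security.SecureRandom;

class RandomMigration {
  static SecureRandom getRandom() {
    // Before: SecureRandom.getInstance("SHA1PRNG", "Crypto");
    // Once the Crypto provider is removed, that call throws NoSuchProviderException.

    // After: use the platform's default, properly seeded implementation.
    return new SecureRandom();
  }
}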

Previewing Android P

Posted by Dave Burke, VP of Engineering

Last week at Mobile World Congress we saw that Android's ecosystem of developers, device makers, and silicon partners continues to bring amazing experiences to users worldwide.

Looking ahead, today we're sharing the first developer preview of Android P, the newest version of Android. It's an early baseline build for developers only -- you're our most trusted reviewers and testers ;-) Early feedback from our developer community is crucial in helping us evolve the platform to meet your needs. We'd love to get you started exercising the new features and APIs in P, and as always, we depend on your early feedback and ideas, so please give us your input!

This first developer preview of Android P is just the start - we'll have lots more to share at Google I/O in May, stay tuned!

New features to try in your apps

Here's a look at some of the cool features in this first preview of Android P that we want you to try and give feedback on.

Indoor positioning with Wi-Fi RTT

Accurate indoor positioning has been a long-standing challenge that opens new opportunities for location-based services. Android P adds platform support for the IEEE 802.11mc WiFi protocol -- also known as WiFi Round-Trip-Time (RTT) -- to let you take advantage of indoor positioning in your apps.

On Android P devices with hardware support, location permission, and location enabled, your apps can use RTT APIs to measure the distance to nearby WiFi Access Points (APs). The device doesn't need to connect to the APs to use RTT, and to maintain privacy, only the phone is able to determine the distance, not the APs.

Knowing the distance to 3 or more APs, you can calculate the device position with an accuracy of 1 to 2 meters. With this accuracy, you can build new experiences like in-building navigation; fine-grained location-based services such as disambiguated voice control (e.g.,'Turn on this light'); and location-based information (e.g., 'Are there special offers for this product?').
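To give a feel for the shape of these APIs, here is a hedged sketch of a ranging request; the class and method names follow the android.net.wifi.rtt package as it appears in API 28 and may differ slightly in this first preview, and the code assumes location permission has already been granted:

import android.content.Context;
import android.net.wifi.ScanResult;
import android.net.wifi.rtt.RangingRequest;
import android.net.wifi.rtt.RangingResult;
import android.net.wifi.rtt.RangingResultCallback;
import android.net.wifi.rtt.WifiRttManager;
import java.util.List;

public class RttRanging {
  // Measures the distance to access points found by a regular Wi-Fi scan.
  public static void rangeNearbyAps(Context context, List<ScanResult> scanResults) {
    WifiRttManager rttManager =
        (WifiRttManager) context.getSystemService(Context.WIFI_RTT_RANGING_SERVICE);

    RangingRequest request =
        new RangingRequest.Builder().addAccessPoints(scanResults).build();

    rttManager.startRanging(request, context.getMainExecutor(),
        new RangingResultCallback() {
          @Override
          public void onRangingResults(List<RangingResult> results) {
            for (RangingResult result : results) {
              if (result.getStatus() == RangingResult.STATUS_SUCCESS) {
                int distanceMm = result.getDistanceMm();
                // Combine distances to three or more APs to estimate the
                // device position.
              }
            }
          }

          @Override
          public void onRangingFailure(int code) {
            // Ranging could not be performed (e.g. RTT temporarily unavailable).
          }
        });
  }
}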

Display cutout support

Now apps can take full advantage of the latest device screens with fullscreen content. We've added display cutout into the platform, along with APIs that you can use to manage how your content is displayed.

Cutout support works seamlessly for apps, with the system managing status bar height to separate your content from the cutout. If you have critical, immersive content, you can also use new APIs to check the cutout shape and request full-screen layout around it. You can check whether the current device has a cutout by calling getDisplayCutout(), and then determine the location and shape of the cutout area using DisplayCutout. A new window layout attribute, layoutInDisplayCutoutMode, lets you tell the system how and when to lay out your content relative to the cutout area. Details are here.

To make it easier to build and test cutout support in your app, we've added a Developer Option that simulates a cutout on any device. We recommend testing your existing apps with display cutout enabled to ensure that your content displays properly.

Apps with immersive content can display content fullscreen on devices with a display cutout.
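Here is a hedged sketch of the cutout handling described above; the layout resource is a hypothetical placeholder, and the layoutInDisplayCutoutMode constant is named as it appears in the final API 28, so the exact name may differ in this preview:

import android.app.Activity;
import android.os.Bundle;
import android.view.DisplayCutout;
import android.view.View;
import android.view.WindowManager;

public class CutoutAwareActivity extends Activity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main); // hypothetical fullscreen layout

    // Ask the system to lay immersive content out around the cutout.
    WindowManager.LayoutParams attrs = getWindow().getAttributes();
    attrs.layoutInDisplayCutoutMode =
        WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES;
    getWindow().setAttributes(attrs);

    // Inspect the cutout once insets are delivered and keep critical UI clear of it.
    View content = findViewById(android.R.id.content);
    content.setOnApplyWindowInsetsListener((view, insets) -> {
      DisplayCutout cutout = insets.getDisplayCutout();
      if (cutout != null) {
        // Use cutout.getBoundingRects() and the safe insets to reposition views.
      }
      return view.onApplyWindowInsets(insets);
    });
  }
}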

Improved messaging notifications

In Android P we've put a priority on improving visibility and function in notifications. Try the new MessagingStyle notification style -- it highlights who is messaging and how you can reply. You can show conversations, attach photos and stickers, and even suggest smart replies. See the details here.

In MessagingStyle notifications you can now show conversations and smart replies [left] and even attach images and stickers [right].
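A small sketch of building such a notification; the channel id and image Uri are hypothetical, and the Person class is referenced by its final android.app.Person location, which moved between preview builds:

import android.app.Notification;
import android.app.Person;
import android.content.Context;
import android.net.Uri;

public class MessagingNotifications {
  static Notification build(Context context, String channelId, Uri imageUri) {
    Person sender = new Person.Builder().setName("Ali").build();

    Notification.MessagingStyle style =
        new Notification.MessagingStyle(new Person.Builder().setName("Me").build())
            .addMessage(new Notification.MessagingStyle.Message(
                    "Check out this photo!", System.currentTimeMillis(), sender)
                .setData("image/", imageUri)); // attach an image to the message

    return new Notification.Builder(context, channelId)
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setStyle(style)
        .build();
  }
}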

Multi-camera API

You can now access streams simultaneously from two or more physical cameras on devices running Android P. On devices with either dual-front or dual-back cameras, you can create innovative features not possible with just a single camera, such as seamless zoom, bokeh, and stereo vision. The API also lets you call a logical or fused camera stream that automatically switches between two or more cameras. We're looking forward to seeing your new and exciting creations as Android P devices supporting multiple cameras reach the market in the year ahead.
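As a hedged sketch, an app might probe for a logical multi-camera like this; the capability constant and getPhysicalCameraIds() follow the camera2 additions as they appear in API 28 and may differ slightly in the preview:

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.CameraMetadata;

public class MultiCameraProbe {
  // Returns the id of the first logical camera backed by multiple physical
  // cameras, or null if none is present.
  static String findLogicalMultiCamera(Context context) throws CameraAccessException {
    CameraManager manager =
        (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    for (String id : manager.getCameraIdList()) {
      CameraCharacteristics chars = manager.getCameraCharacteristics(id);
      int[] caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
      if (caps == null) {
        continue;
      }
      for (int cap : caps) {
        if (cap == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA) {
          // chars.getPhysicalCameraIds() lists the underlying physical cameras.
          return id;
        }
      }
    }
    return null;
  }
}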

Other improvements in camera include new Session parameters that help to reduce delays during initial capture, and Surface sharing that lets camera clients handle various use-cases without the need to stop and start camera streaming. We've also added APIs for display-based flash support and access to OIS timestamps for app-level image stabilization and special effects.

ImageDecoder for bitmaps and drawables

Android P gives you an easier way to decode images to bitmaps or drawables -- ImageDecoder, which deprecates BitmapFactory. ImageDecoder lets you create a bitmap or drawable from a byte buffer, file, or URI. It offers several advantages over BitmapFactory, including support for exact scaling, single-step decoding to hardware memory, support for post-processing in decode, and decoding of animated images.

You can decode and scale to an exact size just by calling setResize() with the target dimensions. You can also call getSampledSize() to get the image dimensions at a specific sample rate, then scale to those dimensions. If you want to post-process an image -- such as applying rounded corners for circle masks or more complicated effects -- you can pass ImageDecoder any android.graphics.PostProcessor. You can also create Drawables directly, with ImageDecoder.decodeDrawable(). If the encoded image is an animated GIF or WebP, the Drawable will be an instance of the new AnimatedImageDrawable.
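A short sketch of the decode flow; note that the scaling call is named setResize() in this preview post but shipped in the final API 28 as setTargetSize(), which is what the sketch below uses:

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.ImageDecoder;
import android.graphics.drawable.Drawable;
import android.net.Uri;
import java.io.IOException;

public class Decoding {
  // Decodes an image Uri into a bitmap scaled to exact dimensions.
  static Bitmap decodeScaled(Context context, Uri uri, int width, int height)
      throws IOException {
    ImageDecoder.Source source =
        ImageDecoder.createSource(context.getContentResolver(), uri);
    return ImageDecoder.decodeBitmap(source,
        (decoder, info, src) -> decoder.setTargetSize(width, height));
  }

  // Drawables, including AnimatedImageDrawable for animated GIF/WebP, come
  // from decodeDrawable() on the same kind of Source.
  static Drawable decodeToDrawable(Context context, Uri uri) throws IOException {
    return ImageDecoder.decodeDrawable(
        ImageDecoder.createSource(context.getContentResolver(), uri));
  }
}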

HDR VP9 Video, HEIF image compression, and Media APIs

Android P adds built-in support for HDR VP9 Profile 2, so you can now deliver HDR-enabled movies to your users from YouTube, Play Movies, and other sources on HDR-capable devices.

We're excited to add HEIF (heic) image encoding to the platform. HEIF is a popular format for photos that improves compression to save on storage and network data. With platform support on Android P devices, it's easy to send and utilize HEIF images from your backend server. Once you've made sure that your app is compatible with this data format for sharing and display, give HEIF a try as an image storage format in your app. You can do a jpeg-to-heic conversion using ImageDecoder or BitmapFactory to obtain a bitmap from jpeg, and you can use HeifWriter in the new Support Library alpha to write HEIF still images from YUV byte buffer, Surface, or Bitmap.

We're also in the process of enhancing and refactoring the media APIs to make them easier to develop and integrate with -- watch for details coming later this year.

Data cost sensitivity in JobScheduler

JobScheduler is Android's central service to help you manage scheduled tasks or work across Doze, App Standby, and Background Limits changes. In Android P, JobScheduler handles network-related jobs better for the user, coordinating with network status signals provided separately by carriers.

Jobs can now declare their estimated data size, signal prefetching, and specify detailed network requirements—carriers can report networks as being congested or unmetered. JobScheduler then manages work according to the network status. For example, when a network is congested, JobScheduler might defer large network requests. When unmetered, it can run prefetch jobs to improve the user experience, such as by prefetching headlines.

When you are adding jobs, try using setEstimatedNetworkBytes(), setIsPrefetch() and setRequiredNetwork() to let JobScheduler handle the work properly. When your job executes, be sure to use the Network object returned by JobParameters.getNetwork(), otherwise you'll implicitly use the device's default network which may not meet your requirements, causing unintended data usage.
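A hedged sketch of how those pieces fit together; the job id and PrefetchJobService class are hypothetical names for your own JobService:

import android.app.job.JobInfo;
import android.app.job.JobParameters;
import android.app.job.JobScheduler;
import android.app.job.JobService;
import android.content.ComponentName;
import android.content.Context;
import android.net.Network;
import android.net.NetworkRequest;

public class PrefetchJobService extends JobService {
  static final int PREFETCH_JOB_ID = 42; // hypothetical job id

  // Schedules a prefetch job with an estimated payload size so JobScheduler
  // can defer it on congested networks and run it when unmetered.
  static void schedulePrefetch(Context context, long estimatedDownloadBytes) {
    JobInfo job = new JobInfo.Builder(PREFETCH_JOB_ID,
            new ComponentName(context, PrefetchJobService.class))
        .setRequiredNetwork(new NetworkRequest.Builder().build())
        .setEstimatedNetworkBytes(estimatedDownloadBytes, 0 /* uploadBytes */)
        .setIsPrefetch(true)
        .build();

    context.getSystemService(JobScheduler.class).schedule(job);
  }

  @Override
  public boolean onStartJob(JobParameters params) {
    // Use the network JobScheduler selected for this job rather than the
    // device default, so the declared constraints are actually honored.
    Network network = params.getNetwork();
    // ... perform the prefetch on a worker thread using this network ...
    return false; // no asynchronous work left running
  }

  @Override
  public boolean onStopJob(JobParameters params) {
    return false;
  }
}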

Neural Networks API 1.1

We introduced the Neural Networks API in Android 8.1 to accelerate on-device machine learning on Android. In Android P we're expanding and improving this API, adding support for nine new ops -- Pad, BatchToSpaceND, SpaceToBatchND, Transpose, Strided Slice, Mean, Div, Sub, and Squeeze. If you have a Pixel 2 device, the DP1 build now includes a Qualcomm Hexagon HVX driver with acceleration for quantized models.

Autofill improvements

In Android P we're continuing to improve the Autofill Framework based on feedback from users and developers. Along with key bugfixes, this release includes new APIs that allow password managers to improve the Autofill user experience, such as better dataset filtering, input sanitization, and compatibility mode. Compatibility mode in particular has a high impact on end users because it lets password managers take the accessibility-based approach in apps that don't yet have full Autofill support, but without impacts on performance or security. See all the details on what's new here.

Open Mobile API for NFC payments and secure transactions

Android P adds an implementation of the GlobalPlatform Open Mobile API to Android. On supported devices, apps can use the OMAPI API to access secure elements (SE) to enable smart-card payments and other secure services. A hardware abstraction layer (HAL) provides the underlying API for enumerating a variety of Secure Elements (eSE, UICC, and others) available.

Strengthening Android's foundations

In Android P we're continuing our long-term investment to make Android the best platform for developers.

Security for apps

In Android P we're moving to a more consistent UI for fingerprint authentication across apps and devices. Android now provides a standard system dialog to prompt the user to touch the fingerprint sensor, managing text and placement as appropriate for the device. Apps can trigger the system fingerprint dialog using a new FingerprintDialog API. We recommend switching to the new system dialog as soon as possible.

As part of a larger effort to move all network traffic away from cleartext (unencrypted HTTP) to TLS, we're also changing the defaults for Network Security Configuration to block all cleartext traffic. If you are using a Network Security Configuration, you'll now need to make connections over TLS, unless you explicitly opt-in to cleartext for specific domains.

Privacy for users

To better ensure privacy, Android P restricts access to mic, camera, and all SensorManager sensors from apps that are idle. While your app's UID is idle, the mic reports empty audio and sensors stop reporting events. Cameras used by your app are disconnected and will generate an error if the app tries to use them. In most cases, these restrictions should not introduce new issues for existing apps, but we recommend removing these requests from your apps.

We will also enable encryption of Android backups with a client-side secret. This feature is still in active development and will be launched in a future Android P preview release.

Longer term we're working to bring support for per-network randomization of associated MAC addresses to the platform. On supported devices running Android P, you can enable this experimentally for testing as a new developer option.

Android P also gives the user control over access to the platform's build.serial identifier by putting it behind the READ_PHONE_STATE permission. Direct access to this identifier has been deprecated since Android 8.0. In order to access the build.serial identifier, you should use the Build.getSerial() method.

ART performance

We're working to bring performance and efficiency improvements to all apps through the ART runtime. We've expanded ART's use of execution profiles to optimize apps and reduce the in-memory footprint of compiled app code. ART now uses profile information for on-device rewriting of DEX files, with reductions up to 11% across a range of popular apps. We expect these to correlate closely with reductions in system DEX memory usage and faster startup times for your apps.

Optimized Kotlin

Kotlin is a first-class language on Android, and if you haven't tried it yet, you should! We've made an enduring commitment to Kotlin in Android and continue to expand support including optimizing the performance of Kotlin code. In P you'll see the first results of this work -- we've improved several compiler optimizations, especially those that target loops, to extract better performance. We're also continuing to work in partnership with JetBrains to optimize Kotlin's generated code. You can get all of the latest Kotlin performance improvements just by keeping Android Studio's Kotlin plugin up-to-date.

Power efficiency

In Android P we continue to refine Doze, App Standby, and Background Limits to further improve battery life; please be sure to try your apps with these and send feedback.

Targeting modern Android

Android P is shaped by our longer-term initiatives to modernize the foundations of Android and the apps that run on it. As we announced recently, Google Play will require all app updates to target Android Oreo (targetSdkVersion 26 or higher) by November 2018, with support for 64-bit hardware on the horizon for 2019.

In line with these changes, Android P will warn users with a dialog when they install an app that targets a platform earlier than Android 4.2 (targetSdkVersion less than 17), and future platform versions will continue to increment that lower bound. We're encouraging every Android developer to start planning the migration to target API 26 now, and to start the migration work as soon as possible. Here's a checklist of resources for help and support -- we're looking forward to seeing your apps getting the most from modern Android.

Improving app compatibility through public APIs

A key issue for users and developers is app compatibility -- making sure that apps are ready for new platform versions as they arrive, without risk of crashes for users and emergency rollouts for developers. Apps that use Android's public APIs from the SDK or NDK are in a good position to be compatible, but apps that use private Android interfaces and libraries are not.

So with Android P we're starting a gradual process to restrict access to selected non-SDK interfaces, asking developers -- including app teams inside Google -- to use the public equivalents instead. In cases where there is no public equivalent for your use-case, please let us know -- we want to make sure that this process is as smooth as possible for developers, so we'll use your feedback to ensure the initial rollout only affects APIs where developers can easily migrate to public alternatives. More about the restrictions is here.

Get started in a few simple steps

First, make your app compatible to give your users a seamless transition to Android P. Just download a device system image or emulator system image, install your current app, and test -- the app should run and look great, and handle behavior changes properly. After you've made any necessary updates, we recommend publishing to Google Play right away without changing the app's platform targeting.

Remember, you don't need a supported Pixel device to test or develop on Android P. For most uses we highly recommend setting up an Android Virtual Device on the Android Emulator as a test environment instead. If you haven't tried the emulator recently, you'll find that it's incredibly fast, boots in under 6 seconds, and is convenient to use, and you can even model next-gen screens -- such as long screens and screens with a camera cutout.

Next, change your app's targeting to "P" and run it with the full Android P experience. Set your app's targetSdkVersion to 'P' and compileSdkVersion to android-P, build, and test. Make sure to read the behavior changes for apps targeting P to find areas you will want to test and might need to adjust.

When you're ready, dive into Android P and learn about the many new features and APIs you can take advantage of in your app. To make it easier to explore the new APIs, take a look at the API diff report, along with the Android P API reference. Visit the P Developer Preview site for details on the preview timeline and support resources. Also check out this video highlighting what's new in Android P for developers.

To get started building with Android P, download the P Developer Preview SDK and tools into Android Studio 3.1 or use the latest Android Studio 3.2 canary version. We're also releasing an alpha version of the 28.0.0 support library for you to try.

What's ahead?

The Android P Developer Preview includes an updated SDK with system images for testing on the official Android Emulator and on Pixel, Pixel XL, Pixel 2, and Pixel 2 XL devices.

We plan to update the preview system images and SDK regularly throughout the preview. This initial release is for developers only and not intended for daily or consumer use, so we're making it available by manual download and flash only. Downloads and instructions are here.

As we get closer to a final product, we'll be inviting consumers to try it out as well, and we'll open up enrollments through Android Beta at that time. Stay tuned for details, but for now please note that Android Beta is not currently available for Android P.

As always, your feedback is critical, so please let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here. We have separate hotlists for filing platform issues, app compatibility issues, and third-party SDK issues.

Android Things Developer Preview 7

Posted by Dave Smith, Developer Advocate for IoT

Today we're releasing Developer Preview 7 (DP7) of Android Things, Google's platform that enables Android developers to create Internet of Things (IoT) devices. The platform also supports powerful applications such as video and audio processing and on-board machine learning with TensorFlow.

The latest preview is based on Android 8.1 and is updated to support version 11.8.0 of Google Play Services. For all the details of what's included in DP7, see the release notes. Here are some of the highlights:

Console enhancements and device updates

New features are also available in the Android Things Console to enhance product management from prototype to production:

  • Product Models. Create multiple software variations of the same hardware product, and manage the builds and updates for each independently.
  • Product Sharing. Grant additional user accounts access to view and manage the models, builds, and updates for a given product.
  • Analytics. View metrics on device activations and update statistics for your products.
  • Update Channels. Deploy software builds to groups of devices for development or beta testing, without disrupting production devices in the field.

Devices can subscribe to different update channels using new APIs added to UpdateManager. See the updated Device Updates API guide and console documentation to learn more about configuring update channel subscriptions.

Addressing developer feedback

We've received tons of amazing feedback from developers so far, and focused heavily on addressing many of the top reported issues in this release:

  • Improved camera resolution support. Apps can now capture image data up to the full native resolution of the camera hardware.
  • Support for MIDI. Use the MidiManager API to build a virtual MIDI device in your app or interface with external MIDI controllers.
  • Better testability of Android Things apps. The Peripheral I/O API now exposes interfaces instead of abstract classes, allowing local unit tests to replace these objects with mocks and stubs more easily.
  • Consistent API naming. This release renames many of the existing Android Things API classes to provide a more consistent developer experience across the entire surface. See the updated API reference to review how package and class names have changed.

New Bluetooth APIs

Android mobile devices expose controls to users for pairing with and connecting to Bluetooth devices through the Settings app. IoT devices running Android Things need to programmatically perform these same operations. The new BluetoothConnectionManager API enables apps to take control of the pairing and connection process. See the new Bluetooth API guide for more details.

Sample updates

Last year at Google I/O, we demonstrated building an app using Kotlin on Android Things. For developers using Kotlin, we have started publishing Kotlin versions of the Android Things samples. Today you can download the Button and LED sample in both Kotlin and Java, with more samples to follow very soon.

We have also migrated the TensorFlow Image Classifier sample app to use the TensorFlow Lite library, reducing the size of the pre-trained TensorFlow model by over 90% and the time required to classify the images by approximately 50%.

Feedback

Please send us your feedback by filing bug reports and feature requests, as well as asking any questions on Stack Overflow. You can also join Google's IoT Developers Community on Google+, a great resource to get updates and discuss ideas. We look forward to seeing what you build with Android Things!

Improving Stability by Reducing Usage of non-SDK Interfaces

Posted by David Brazdil, Software Engineer

In Android, we're always looking for ways to improve the user and developer experience by making those experiences as stable as possible. In this spirit, we've been working to ensure that apps don't use non-SDK interfaces, since doing so risks crashes for users and emergency rollouts for developers. In Android N, we restricted the set of symbols that C/C++ code could use. This change ensured that apps using C++ rely on stable NDK interfaces rather than incur the incremental crashes caused by reliance on unstable, non-NDK interfaces. Starting in the next release of Android, we will further increase stability by expanding these restrictions to cover non-SDK Java language interfaces.

What behavior will I see?

Starting in the next release of Android, some non-SDK methods and fields will be restricted so that you cannot access them -- either directly, via reflection, or JNI. If you try, you can see errors such as NoSuchFieldException or NoSuchMethodException.

Initially, this restriction will impact interfaces with low or no usage. It is an explicit goal of our planning and design to respect our developer community and create the absolute minimum of change while addressing app stability issues flagged by our users and device manufacturers. In cases where a migration to SDK methods will be possible but is likely to be technically challenging, we'll allow continued usage until your app is updated to target the latest API level. We plan to broaden these restrictions in future platform versions, giving developers time to migrate with long advance warning, and also giving us time to gather feedback about any needed SDK interfaces. We have always said that using non-SDK interfaces is risky -- they might change in any release as we refactor code to add features or fix bugs. So if your app currently relies on non-SDK interfaces, you should begin planning a migration to SDK alternatives.

Because the Java language has different features from C++, this restriction will take a slightly different form than the previous symbol restriction. You should not access classes that are not part of our SDK, but you also need to be sure that you are only using the officially documented parts of each class. In particular, this means that you should not plan to access methods or fields that are not listed in the SDK when you interact with a class via mechanisms such as reflection.

What if there isn't an SDK alternative?

We know that some apps may be using non-SDK interfaces in ways that do not have an SDK alternative. We value your feedback about where and how we need to expand and improve the public APIs for you. If you feel that you'll need the SDK APIs expanded before you can stop using non-SDK interfaces, please tell us via our bug tracker. We will be monitoring this list closely and using this valuable feedback to prioritize. It is critical for us to get this feedback in a timely manner so that we can continue to both tune the blacklist to minimize developer impact and also begin developing any needed alternatives for future platforms.

What's coming next?

In the next Android developer preview, you'll be able to run your existing apps and see warnings when you use a non-SDK interface that will be subject to blacklist or greylist in the final release. It's always a best practice to make sure your app runs on the developer preview, but you should pay specific attention to the interface compatibility warnings if you are concerned that you may be impacted.

In conjunction with the next developer preview and the new bug tracker category, we'll be monitoring usage of non-SDK interfaces. In cases where official SDK alternatives already exist, we'll publish official guidance on how to migrate away from commonly used non-SDK interfaces.

Continuous Shared Element Transitions: RecyclerView to ViewPager

By Shalom Gibly, Software Engineer, Google's Material Gallery Team

Transitions in Material Design apps provide visual continuity. As the user navigates the app, views in the app change state. Motion and transformation reinforce the idea that interfaces are tangible, connecting common elements from one view to the next.

This post aims to provide guidelines and implementation for a specific continuous transition between Android Fragments. We will demonstrate how to implement a transition from an image in a RecyclerView into an image in a ViewPager and back, using 'Shared Elements' to determine which views participate in the transition and how. We will also handle the tricky case of transitioning back to the grid after paging to an item that was previously offscreen.

This is the result we are aiming for:

If you wish to skip the explanation and go straight to the code, you can find it here.

What are shared elements?

A shared element transition determines how views that are present in two fragments transition between them. For example, an image that is displayed on an ImageView on both Fragment A and Fragment B transitions from A to B when B becomes visible.

There are numerous previously published examples which explain how shared elements work and how to implement a basic Fragment transition. This post will skip most of the basics and walk through the specifics of how to create a working transition into a ViewPager and back. However, if you'd like to learn more about transitions, I recommend starting by reading about transitions on the Android developers website and taking the time to watch this 2016 Google I/O presentation.

The challenges

Shared Element mapping

We would like to support a seamless back and forth transition. This includes a transition from the grid to the pager, and then a transition back to the relevant image, even when the user paged to a different image.

To do so, we will need to find a way to dynamically remap the shared elements in order to provide the Android transition system with what it needs to do its magic!

Delayed loading

Shared element transitions are powerful, but can be tricky when dealing with elements that need to be loaded before we can transition to them. The transition may simply not work as expected when views at the target fragment are not laid out and ready.

In this project, there are two areas where a loading time affects the shared element transition:

  1. It takes a few milliseconds for the ViewPager to load its internal fragments. Additionally, it takes time to load an image into the displayed pager fragment (which may even include download time for the asset).
  2. The RecyclerView also faces a similar delay when loading the images into its views.

Demo app design

Basic structure

Before we dive into the juicy transitions, here is a little bit about how the demo app is structured.

The MainActivity loads a GridFragment to present a RecyclerView of images. The RecyclerView adapter loads the image items (a constant array that is defined at the ImageData class), and manages the onClick events by replacing the displayed GridFragment with an ImagePagerFragment.

The ImagePagerFragment adapter loads the nested ImageFragments to display the individual images when paging happens.

Note: The demo app implementation uses Glide, which loads images into views asynchronously. The images in the demo app are bundled with it. However, you may easily convert the ImageData class to hold URL strings that point to online images.

Coordinating a selected/displayed position

To communicate the selected image position between the fragments, we will use the MainActivity as a place to store the position.

When an item is clicked, or when a page is changed, the MainActivity is updated with the relevant item's position.

The stored position is later used in several places:

  • When determining which page to show in the ViewPager.
  • When navigating back to the grid and auto-scrolling to the position to make sure it's visible.
  • And of course, when hooking up the transitions callbacks, as we'll see in the next section.

Setting up the transitions

As mentioned above, we will need to find a way to dynamically remap the shared elements in order to give the transition system what it needs to do its magic.

Using a static mapping by setting up transitionName attributes for the image views in the XML will not work, as we are dealing with an arbitrary number of views that share the same layout (e.g. views inflated by the RecyclerView adapter, or views inflated by the ImageFragment).

To accomplish this, we'll use some of what the transition system provides us:

  1. We set a transition name on the image views by calling setTransitionName. This identifies the view with a unique name for the transition. setTransitionName is called when binding a view in the grid's RecyclerView adapter, and in onCreateView at the ImageFragment. In both locations, we use the unique image resource as a name to identify the view (see the snippet after this list).
  2. We set up SharedElementCallbacks to intercept onMapSharedElements and adjust the mapping of the shared element names to views. This will be done when exiting the GridFragment and when entering the ImagePagerFragment.
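For example, the bind step from point 1 might look roughly like this in the grid adapter (a sketch; the field names are abbreviated from the demo app and may not match it exactly):

// GridAdapter.java (sketch)
@Override
public void onBindViewHolder(ImageViewHolder holder, int position) {
  int imageRes = imageDrawables[position]; // the unique image resource
  holder.imageView.setTransitionName(String.valueOf(imageRes));
  // ... load the image into holder.imageView (the demo app uses Glide) ...
}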

Setting the FragmentManager transaction

The first thing we set up to initiate a transition for a fragment replacement is at the FragmentManager transaction preparation. We need to inform the system that we have a shared element transition.

fragment.getFragmentManager()
   .beginTransaction()
   .setReorderingAllowed(true) // setAllowOptimization before 26.1.0
   .addSharedElement(imageView, imageView.getTransitionName())
   .replace(R.id.fragment_container, 
        new ImagePagerFragment(),
        ImagePagerFragment.class.getSimpleName())
   .addToBackStack(null)
   .commit();

setReorderingAllowed is set to true. It reorders the state changes of fragments to allow for better shared element transitions. Added fragments will have onCreate(Bundle) called before replaced fragments have onDestroy() called, allowing the shared view to be created and laid out before the transition starts.

Image transition

To define how the image transitions when it animates to its new location, we set up a TransitionSet in an XML file and load it at the ImagePagerFragment.

<ImagePagerFragment.java>
Transition transition =
   TransitionInflater.from(getContext())
       .inflateTransition(R.transition.image_shared_element_transition);
setSharedElementEnterTransition(transition);
<image_shared_element_transition.xml>
<?xml version="1.0" encoding="utf-8"?>
<transitionSet
   xmlns:android="http://schemas.android.com/apk/res/android"
   android:duration="375"
   android:interpolator="@android:interpolator/fast_out_slow_in"
   android:transitionOrdering="together">
 <changeClipBounds/>
 <changeTransform/>
 <changeBounds/>
</transitionSet>

Adjusting the shared element mapping

We'll start by adjusting the shared element mapping when leaving the GridFragment. For that, we call setExitSharedElementCallback() and provide it with a SharedElementCallback that maps the element names to the views we'd like to include in the transition.

It's important to note that this callback is called while exiting the Fragment when the fragment transaction occurs, and while re-entering the Fragment when it's popped off the back stack (on back navigation). We will use this behavior to remap the shared view and adjust the transition to handle cases where the view changes after paging through the images.

In this specific case, we are only interested in a single ImageView transition from the grid to the fragment the view-pager holds, so the mapping only needs to be adjusted for the first named element received at the onMapSharedElements callback.

<GridFragment.java>
setExitSharedElementCallback(
   new SharedElementCallback() {
     @Override
     public void onMapSharedElements(
         List<String> names, Map<String, View> sharedElements) {
       // Locate the ViewHolder for the clicked position.
       RecyclerView.ViewHolder selectedViewHolder = recyclerView
           .findViewHolderForAdapterPosition(MainActivity.currentPosition);
       if (selectedViewHolder == null || selectedViewHolder.itemView == null) {
         return;
       }

       // Map the first shared element name to the child ImageView.
       sharedElements
           .put(names.get(0),
                selectedViewHolder.itemView.findViewById(R.id.card_image));
     }
   });

We also need to adjust the shared element mapping when entering the ImagePagerFragment. For that, we call setEnterSharedElementCallback().

<ImagePagerFragment.java>
setEnterSharedElementCallback(
   new SharedElementCallback() {
     @Override
     public void onMapSharedElements(
         List<String> names, Map<String, View> sharedElements) {
        // Locate the image view at the primary fragment (the ImageFragment
        // that is currently visible). To locate the fragment, call
        // instantiateItem with the selection position.
        // At this stage, the method will simply return the fragment at the
        // position and will not create a new one.
        Fragment currentFragment = (Fragment) viewPager.getAdapter()
            .instantiateItem(viewPager, MainActivity.currentPosition);
       View view = currentFragment.getView();
       if (view == null) {
         return;
       }

       // Map the first shared element name to the child ImageView.
       sharedElements.put(names.get(0), view.findViewById(R.id.image));
     }
   });

Postponing the transition

The images we would like to transition are loaded into both the grid and the pager, and that loading takes time. To make the transition work properly, we need to postpone it until the participating views are ready (e.g. laid out and loaded with the image data).

To do so, we call postponeEnterTransition() in our fragments' onCreateView(), and once the image is loaded, we start the transition by calling startPostponedEnterTransition().

Note: postpone is called for both the grid and the pager fragments to support both forward and backward transitions when navigating the app.
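
Here is a minimal sketch of the postponing side in the pager fragment, assuming the layout and adapter names from the earlier sketches (the matching startPostponedEnterTransition() call is shown in the snippet further below):

<ImagePagerFragment.java (sketch)>
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
    Bundle savedInstanceState) {
  View view = inflater.inflate(R.layout.fragment_pager, container, false);
  viewPager = (ViewPager) view.findViewById(R.id.view_pager);
  viewPager.setAdapter(new ImagePagerAdapter(this));
  viewPager.setCurrentItem(MainActivity.currentPosition);

  // Postpone the enter transition until the current ImageFragment reports
  // that its image has been loaded.
  postponeEnterTransition();
  return view;
}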

Since we are using Glide to load the images, we set up listeners that trigger the enter transition when images are loaded.

This is done in two places:

  1. When an ImageFragment image is loaded, a call is made to its parent ImagePagerFragment to start the transition.
  2. When transitioning back to the grid, a start transition is called after the "selected" image is loaded.

Here is how the ImageFragment loads an image and notifies its parent when it's ready.

Note that postponeEnterTransition() is called in the ImagePagerFragment, while startPostponedEnterTransition() is called from the child ImageFragment that is created by the pager.

<ImageFragment.java>
Glide.with(this)
   .load(arguments.getInt(KEY_IMAGE_RES)) // Load the image resource
   .listener(new RequestListener<Drawable>() {
     @Override
     public boolean onLoadFailed(@Nullable GlideException e, Object model,
         Target<Drawable> target, boolean isFirstResource) {
       getParentFragment().startPostponedEnterTransition();
       return false;
     }

     @Override
     public boolean onResourceReady(Drawable resource, Object model,
         Target<Drawable> target, DataSource dataSource, boolean isFirstResource) {
       getParentFragment().startPostponedEnterTransition();
       return false;
     }
   })
   .into((ImageView) view.findViewById(R.id.image));

As you may have noticed, we also start the postponed transition when the image fails to load. This is important to prevent the UI from hanging during failure.
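
For the second case listed above, a similar Glide listener can be attached when the grid adapter binds an item, starting the grid's postponed transition (see the note above about postponing in both fragments) only once the image at the position we are returning to is ready. A sketch, assuming the adapter keeps a hypothetical gridFragment reference to its hosting fragment:

<GridAdapter.java (sketch)>
private RequestListener<Drawable> createRequestListener(final int position) {
  return new RequestListener<Drawable>() {
    @Override
    public boolean onLoadFailed(@Nullable GlideException e, Object model,
        Target<Drawable> target, boolean isFirstResource) {
      // gridFragment is a hypothetical reference to the hosting GridFragment.
      if (position == MainActivity.currentPosition) {
        gridFragment.startPostponedEnterTransition();
      }
      return false;
    }

    @Override
    public boolean onResourceReady(Drawable resource, Object model,
        Target<Drawable> target, DataSource dataSource, boolean isFirstResource) {
      if (position == MainActivity.currentPosition) {
        gridFragment.startPostponedEnterTransition();
      }
      return false;
    }
  };
}

The adapter would pass this listener to Glide's .listener() call when binding each grid item.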

Final touches

To make our transitions even smoother, we would like to fade out the grid items when the image transitions to the pager view.

To do that, we create a TransitionSet that is applied as an exit transition for the GridFragment.

<GridFragment.java>
setExitTransition(TransitionInflater.from(getContext())
   .inflateTransition(R.transition.grid_exit_transition));
<grid_exit_transition.xml>
<?xml version="1.0" encoding="utf-8"?>
<transitionSet xmlns:android="http://schemas.android.com/apk/res/android"
   android:duration="375"
   android:interpolator="@android:interpolator/fast_out_slow_in"
   android:startDelay="25">
 <fade>
   <targets>
     <target android:targetId="@id/card_view"/>
   </targets>
 </fade>
</transitionSet>

This is what the transition looks like after this exit transition is set up:

As you may have noticed, the transition is still not completely polished with this setup. The fade animation is running for all the grid's card views, including the card that holds the image that transitions to the pager.

To fix it, we exclude the clicked card from the exit transition before committing the fragment transaction in the GridAdapter.

// The 'view' is the card view that was clicked to initiate the transition.
((TransitionSet) fragment.getExitTransition()).excludeTarget(view, true);

After this change, the animation looks much better (the clicked card doesn't fade out as part of the exit transition, while the rest of the cards fade out):

As a final touch, we set up the GridFragment to scroll to and reveal the card we transition back to when navigating back from the pager (done in onViewCreated):

<GridFragment.java>
recyclerView.addOnLayoutChangeListener(
   new OnLayoutChangeListener() {
      @Override
      public void onLayoutChange(View view,
                int left, 
                int top, 
                int right, 
                int bottom, 
                int oldLeft, 
                int oldTop, 
                int oldRight, 
                int oldBottom) {
         recyclerView.removeOnLayoutChangeListener(this);
         final RecyclerView.LayoutManager layoutManager =
            recyclerView.getLayoutManager();
         View viewAtPosition = 
            layoutManager.findViewByPosition(MainActivity.currentPosition);
         // Scroll to position if the view for the current position is null (not   
         // currently part of layout manager children), or it's not completely
         // visible.
         if (viewAtPosition == null 
             || layoutManager.isViewPartiallyVisible(viewAtPosition, false, true)){
            recyclerView.post(() 
               -> layoutManager.scrollToPosition(MainActivity.currentPosition));
         }
     }
});

Wrapping up

In this article, we implemented a smooth transition from a RecyclerView to a ViewPager and back.

We showed how to postpone a transition and start it after the views are ready. We also implemented shared element remapping to get the transition going when shared views are changing dynamically while navigating the app.

These changes transformed our app's fragment transitions to provide better visual continuity as users interact with it.

The code for the demo app can be found here.