
#TheAndroidShow: the latest from MWC, Gemini Nano, Android 15 and more!

Posted by Anirudh Dewani, Director of Android Developer Relations


Last week, Android device makers released a slew of new devices, and today we’re unpacking what that means for developers, as well as the latest in Gemini Nano, Android 15, Jetpack Compose and more, in another episode of our quarterly show, #TheAndroidShow.

The latest wearables and foldables – get building!

Android device makers unveiled their latest wearables and foldables last week at Mobile World Congress, and we were on the ground in Barcelona taking a look at those new devices and how you can get started building on top of them. A few of our favorites:

    • Xiaomi Watch 2, the latest smartwatch from the Xiaomi team. This device is powered by Wear OS by Google and provides upgraded camera, fitness, and sleep experiences to allow users to get the most from their device.
    • PORSCHE DESIGN HONOR Magic V2 RSR, the world’s thinnest inward foldable smartphone. This is the latest foldable for Android and was designed with the user experience at the forefront, including human-centric eye comfort technology.

Compose is an amazing way to build apps for your users across form factors. Compose for Wear OS and the upcoming adaptive layouts for large screens help developers bring their apps to life with less code, powerful tools, and intuitive APIs. Check out the Wear OS and Large Screen galleries, where you can find UX inspiration and design guidance tailored to your type of app.

Behind the scenes, with Gemini Nano and AICore

With all of the excitement around generative AI, it can be daunting to know where to start. So in today’s show, we’re taking you behind the scenes with Gemini Nano, Google’s most efficient model built for on-device tasks, and AICore, Android’s system service for on-device foundation models. And we’re spotlighting how the team that builds the Recorder app used Gemini Nano to summarize users’ voice memos on-device, with privacy in mind. And here’s the best part: the team built the feature in a short time with only a small number of engineers.

Now in Android

We celebrated the 100th episode of Now in Android, covering the latest developer news.

And that’s a wrap on another episode of our quarterly show, #TheAndroidShow. But the conversation continues on YouTube, X and LinkedIn: tell us your favorite part, or what you’d like us to dive into next time on our quarterly episode. And before we sign off, you can watch the full playlist, with the latest in Android developer news, here.

Designing your account deletion experience with users in mind

Posted by Tatiana van Maaren – Global T&S Partnerships Lead, Privacy & Security; May Smith – Product Manager; and Anita Issagholyan – Policy Lead

With millions of developers relying on our platform, Google Play is committed to keeping our ecosystem safe for everyone. That’s why, in addition to our ongoing investments in app privacy and security, we also continuously update our policies to respond to new challenges and user expectations.

For example, we recently introduced a new account deletion policy with required disclosures within the Data Safety section on the Play Store. Deleting an account should be as easy as creating one, so the new policy requires developers to provide information and web resources that help users to manage their data and understand an app's deletion practices.

To help you build trust and design a user-friendly experience that helps meet our policy requirements, consider these 5 best practices when implementing your account deletion solution.

1.     Make it seamless

Users prefer a simple and straightforward account deletion flow. Although users know that more steps may follow (such as authentication), navigating multiple screens before reaching the deletion page can be a significant barrier and create negative feelings for the user. Consider providing your account deletion option on an account settings page, or place a prominent button on the home screen. Design the flow with discoverability in mind by taking the user directly to the deletion process.

2.     Allow automatic deletion

Users feel that if they can create an account without talking to a customer service agent, they should be able to delete their account online, too. If automation is not on your roadmap just yet, consider a step-by-step deletion request form or a dedicated page to connect users with customer support.

3.     Offer guidance and explain potential implications

Users delete their accounts for various reasons, some of which may be better resolved another way. Early in your deletion flow, point your users toward a Help Center article that explains how your deletion process works in simple terms, including any potential consequences. For example, make it clear if your users will need to pause their payment method before deleting the account, or download any account data they want to keep. Helping your users understand the process in advance can prevent accidental deletions. For those who do change their minds, consider offering a way to recover their accounts within a reasonable timeframe.

Here’s an example of how Play Store developer Canva has designed its in-app deletion flow to explain the potential consequences of account deletion:

User journey on the Canva app
“User data privacy has always been important for us. We’ve always been intentional about our approach in optimizing the Canva app so our users can have more transparency and control over their data. We’re welcoming these new requirements from the Play store as we know this new flow will elevate users’ trust in our app and benefit our business in the long term.” - Will Currie, Software Engineer, Canva

4.     Confirm account deletion

Sometimes users misunderstand whether the account itself or just the data collected by the app was deleted in the deletion process. Users often think that the data your app stored in the cloud will automatically be deleted at the same time as the account. Since it may take time to remove account data from internal company systems or to comply with data retention requirements in different regions, being transparent about the process can help you maintain trust in your brand and make it more likely for users to return in the future.

Here’s how SYBO Games has designed its web-based account deletion flow:

User journey on the SYBO Games web resource
“We are always striving to ensure that our games provide a fun user experience, built on a solid data protection foundation. When we learned about the new account deletion update on Google Play, we thought this was a great step forward to ensure that the entire developer ecosystem optimizes for user safety. We encourage developers to carve out time to embrace these improvements and prioritize regular privacy check-ins.”  - Elizabeth Ann Schweitzer, Games Compliance Manager, SYBO Games

5.     Don’t forget user engagement

This is a great opportunity to connect with your users at a critical moment. Make sure users who have uninstalled your app can easily remove their accounts through a web resource without needing to reinstall the app. You can also invite them to complete a survey or provide feedback on their decision.

Protecting users' data is essential for building trust and loyalty. By updating the Data Safety section on Google Play and continuing to optimize user experience for account deletion, you can strengthen trust in your company while striving for the highest level of user data protection.


Thank you for your continued collaboration and feedback in developing this data transparency feature and in helping make Google Play safe for all.

Introducing a new Text-To-Speech engine on Wear OS

Posted by Ouiam Koubaa – Product Manager and Yingzhe Li – Software Engineer

Today, we’re excited to announce the release of a new Text-To-Speech (TTS) engine that is performant and reliable. Text-to-speech turns text into natural-sounding speech across more than 50 languages, powered by Google’s machine learning (ML) technology. The new text-to-speech engine on Wear OS uses smaller, more efficient prosody ML models to bring faster synthesis to Wear OS devices.

Use cases for Wear OS’s text-to-speech include accessibility services, coaching cues for exercise apps, navigation cues, and reading incoming alerts aloud through the watch speaker or Bluetooth-connected headphones. The engine is meant for brief interactions, so it shouldn’t be used for reading aloud a long article, or a long summary of a podcast.

How to use Wear OS’s TTS

Text-to-speech has long been supported on Android. Wear OS’s new TTS has been tuned to be performant and reliable on low-memory devices. All the Android APIs are still the same, so developers use the same process to integrate it into a Wear OS app; for example, TextToSpeech#speak can be used to speak specific text. This is available on devices that run Wear OS 4 or higher.
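
For example, speaking a short phrase uses the standard API. Here’s a minimal sketch, assuming tts is an initialized TextToSpeech instance and the utterance ID is an arbitrary label:

// Speak a short phrase through the watch speaker or connected headphones.
// QUEUE_FLUSH interrupts anything the engine is currently saying.
tts.speak(
    /* text= */ "Workout paused",
    /* queueMode= */ TextToSpeech.QUEUE_FLUSH,
    /* params= */ null,
    /* utteranceId= */ "workout-cue"
)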

When the user interacts with the Wear OS TTS for the first time following a device boot, the synthesis engine is ready in about 10 seconds. For special cases where developers want the watch to speak immediately after opening an app or launching an experience, the following code can be used to pre-warm the TTS engine before any synthesis requests come in.

private fun initTtsEngine() {
    // Callback invoked when the TextToSpeech connection is set up
    val callback = TextToSpeech.OnInitListener { status ->
        if (status == TextToSpeech.SUCCESS) {
            Log.i(TAG, "tts Client Initialized successfully")

            // Get the default TTS voice
            val defaultVoice = tts.voice
            if (defaultVoice == null) {
                Log.w(TAG, "defaultVoice == null")
                return@OnInitListener
            }

            // Set the TTS engine to use the default locale
            tts.language = defaultVoice.locale

            try {
                // Create a temporary file to synthesize sample text into
                val tempFile =
                    File.createTempFile("tmpsynthesize", null, applicationContext.cacheDir)

                // Synthesize sample text to our file to warm up the engine
                tts.synthesizeToFile(
                    /* text= */ "1 2 3", // Some sample text
                    /* params= */ null, // No params necessary for a sample request
                    /* file= */ tempFile,
                    /* utteranceId= */ "sampletext"
                )

                // And clean up the file when the VM exits
                tempFile.deleteOnExit()
            } catch (e: Exception) {
                Log.e(TAG, "Unhandled exception: ", e)
            }
        }
    }

    tts = TextToSpeech(applicationContext, callback)
}

When you are done using TTS, release the engine by calling tts.shutdown(), for example in your activity’s onDestroy() method. This also applies when the user closes an app that uses TTS.
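
In an activity, that could look like the following minimal sketch (assuming tts is the engine created above):

override fun onDestroy() {
    // Release the TTS engine's resources once it is no longer needed
    tts.shutdown()
    super.onDestroy()
}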

Languages and Locales

By default, Wear OS TTS includes 7 pre-loaded languages in the system image: English, Spanish, French, Italian, German, Japanese, and Mandarin Chinese. OEMs may choose to preload a different set of languages. You can check which languages are available by using TextToSpeech#getAvailableLanguages(). During watch setup, if the user selects a system language whose voice file is not pre-loaded, the watch automatically downloads the corresponding voice file the first time the user connects to Wi-Fi while charging their watch.

There are limited cases where the speech output may differ from the user’s system language. For example, in a scenario where a safety app uses TTS to call emergency responders, developers might want to synthesize speech in the language of the locale the user is in, not in the language the user has their watch set to. To synthesize text in a different language from system settings, use TextToSpeech#setLanguage(java.util.Locale).
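
For example, here’s a minimal sketch; the target locale is purely illustrative, and availability should be confirmed before switching:

// Check what the engine can speak, then override the system language
val targetLocale = java.util.Locale.FRANCE // illustrative; choose from your app's logic
if (targetLocale in tts.availableLanguages) {
    tts.language = targetLocale
}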

Conclusion

Your Wear OS apps now have the power to talk, either directly from the watch’s speakers or through Bluetooth connected headphones. Learn more about using TTS.

We look forward to seeing how you use the text-to-speech engine to create more helpful and engaging experiences for your users on Wear OS!


Copyright 2023 Google LLC.
SPDX-License-Identifier: Apache-2.0

Embracing Android 14: Meta’s Early Adoption Empowered Enhanced User Experience

Posted by Terence Zhang – Developer Relations Engineer, Google; in partnership with Tina Ho – Partner Engineering, TPM and Kun Wang – Partner Engineering, Partner Engineer

With the first Developer Preview of Android 15 now released, another Android release bringing new features and under-the-hood improvements to billions of users worldwide is coming shortly. As Android developers, you are key players in this evolution; by staying on top of the targetSDK upgrade cycle, you are making sure that your users have the best possible experience.

The way Meta, the parent company of Instagram, Facebook, WhatsApp, and Messenger, approached Android 14 provides a blueprint for both developer success and user satisfaction. Meta improved its targetSDK adoption velocity by 4x, so to understand how the team achieved this, we spoke with Meta, with an eye toward insights that all developers could build into their testing programs.

Meta’s journey on A14: A blueprint for faster adoption

When Android 11 launched, some of Meta’s apps experienced challenges with existing features, such as Chat Heads, and with new requirements, like scoped storage integration. Fixing these issues was complicated by slow developer tooling adoption and a decentralized app strategy. This experience motivated Meta to create an internal Android OS Readiness Program which focuses on prioritizing early and thorough testing throughout the Android release window and accelerating their apps’ targetSDK adoption.

The program officially launched last year. By compiling apps against each Android 14 beta and conducting thorough automated and smoke tests to proactively identify potential issues, Meta was able to seamlessly adopt new Android 14 features, like foreground service types, and send timely feedback and bug reports to the Android team, contributing to improvements in the OS.

Meta also accelerated its targetSDK adoption for Android 14—updating Messenger, Facebook, and Instagram within one to two months of the AOSP release, compared to seven to nine months for Android 12 (a more than 4x increase in velocity!). Meta’s newly created readiness program unlocked this achievement by working across each app to adopt the latest Android changes while still maintaining compatibility. For example, by automating and simplifying their SDK release process, Meta was able to cut rollout time from three weeks to under three hours, enhancing cooperation between individual app teams by providing immediate access to the latest SDKs and allowing for rapid testing of new OS features. The centralized approach also meant Threads adopted Android 14 support quickly, despite the fast-growing new app being supported by a minimal team.

Reaping the rewards: The impact on users

Meta's early targetSDK adoption strategy delivers significant benefits for users as well. Here's how:

    • Improved reliability and compatibility: Early adoption of Android previews and betas prevented surprises near the OS launch, guaranteeing a smooth day-one experience for users upgrading to the latest Android version. For example, with partial media permissions, Meta's extensive experimentation with permission flows ensured “users felt informed about the change and in control over their privacy settings,” while maximizing the app's media-sharing functionality.

    • Robust experimentation with new release features: Early Android release adoption gave Meta ample time to collaborate across privacy, design, and content strategy teams, enabling them to thoughtfully integrate the new Android features that come with every release. Meta’s rollout of the Ultra HDR image experience on Instagram within 3 months of the platform release, in an “Android first” manner, is a great example of this, delighting users with brighter, richer colors and a higher dynamic range in their Instagram posts and stories.
Meta's adoption of Ultra HDR in Android 14 brings brighter colors and dynamic range to Instagram posts and stories.

Embrace the latest Android versions

Meta's journey highlights the compelling reasons for Android developers to adopt a similar forward-thinking mindset in working with the Android betas:

    • Test your apps early: Anticipate Android OS changes, ensuring your apps are prepared for the latest targetSDK as soon as it becomes available, to create a seamless transition for users who update to the newest Android version.

    • Utilize latest tools to optimize user experience: Test your apps thoroughly against each beta to identify and address any potential issues. Check the Android Studio Upgrade Assistant to highlight major breaking changes in each targetSDKVersion, and integrate the compatibility framework tool into your testing process to help uncover potential app issues in the new OS version.

    • Collaborate with Google: Provide your valuable feedback and bug reports using the Google issue tracker to contribute directly to the improvement of the Android ecosystem.

We encourage you to take full advantage of the Android Developer Previews & Betas program, starting with the newly-released Android 15 Developer Preview 1.

The team behind the success

A big thank you to the entire Meta team for their collaboration in Android 14 and in writing this blog! We’d especially like to recognize the following folks from Meta for their outstanding contributions in establishing a culture of early adoption:

    • Tushar Varshney - Partner Engineering, Partner Engineer
    • Allen Bae - Partner Engineering, EM
    • Abel Del Pino - Facebook, SWE
    • Matias Hanco - Facebook, SWE
    • Summer Kitahara - Instagram, SWE
    • Tom Rozanski - Messenger, SWE
    • Ashish Gupta - WhatsApp, SWE
    • Daniel Hill - Mobile Infra, SWE
    • Jason Tang - Facebook, SWE
    • Jane Li - Meta Quest, SWE

Android Studio Iguana is stable

Posted by Neville Sicard-Gregory – Senior Product Manager, Android Studio

Today we are launching Android Studio Iguana 🦎 in the stable release channel to make it easier for you to create high quality apps. From Version Control System support in App Quality Insights to new built-in support for creating Baseline Profiles for Jetpack Compose apps, this version enhances your development workflow as you optimize your app. Download the latest version today!

Check out the list of new features in Android Studio Iguana below, organized by key developer flows.

Debugging

Version control system integration in App Quality Insights

When your release build is several commits behind your local source code, line numbers in Firebase Crashlytics crash reports can easily go stale, making it more difficult to accurately navigate from crash to code when using App Quality Insights. If you’re using git for your version control, there’s now a solution to this problem.

When you build your app using Android Gradle Plugin 8.3 or later and the latest version of the Crashlytics SDK, AGP includes git commit information as part of the build artifact that is published to the Play Store. When a crash occurs, Crashlytics attaches the git information to the report, and Android Studio Iguana uses this information to compare your local checkout with the exact code that caused the crash from your git history.

After you build your app using Android Gradle Plugin 8.3 or higher with the latest Crashlytics SDK, and publish it, new crash reports in the App Quality Insights window let you either navigate to the line of code in your current git checkout or view a diff report between the current checkout and the version of your app codebase that generated the crash report. Learn more.

App Quality Insights with Version Control System Integration
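
As a sketch of what this can look like in your build file, AGP exposes a vcsInfo flag on build types; check the AGP 8.3 release notes for the exact defaults in your version:

// app/build.gradle.kts – ask AGP 8.3+ to embed git commit information in
// the release artifact so crash reports can be matched to the right code
android {
    buildTypes {
        release {
            vcsInfo {
                include = true
            }
        }
    }
}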

View Crashlytics crash variants in App Quality Insights

Crash variants in App Quality Insights

Today, when you select a Crashlytics issue in App Quality Insights, you see aggregated data from events that share identical points of failure in your code, but may have different root causes. To aid in your analysis of the root causes of a crash, Crashlytics now groups events that share very similar stack traces as issue variants. You can now view events in each variant of a crash report in App Quality Insights by selecting a variant from the dropdown. Alternatively, you can view aggregate information for all variants by selecting All.

Design

Jetpack Compose UI Check

To help developers build adaptive and accessible UI in Jetpack Compose, Iguana introduces a new UI Check mode in Compose Preview. This feature works similarly to visual linting and accessibility checks integrations for views. Activate Compose UI check mode to automatically audit your Compose UI and check for adaptive and accessibility issues across different screen sizes, such as text that's stretched on large screens or low color contrast. The mode highlights issues found in different preview configurations and lists them in the problems panel.

Try it out by clicking the UI Check icon in Compose Preview.

UI Check entry point in Compose Preview

UI Check results of Reply App in Compose Preview

Progressive rendering for Compose Preview

Compose Previews in Android Studio Iguana now implement progressive rendering, allowing you to iterate on your designs with less loading time. This feature automatically lowers the detail of out-of-view previews to boost performance, meaning you can scroll through even the most complex layouts without lag.

Progressive rendering in Compose Preview

Develop

IntelliJ Platform Update

Android Studio Iguana includes the IntelliJ 2023.2 platform release, which brings many new features such as support for GitLab, text search in Search Everywhere, color customization updates to the new UI, and a host of other improvements. Learn more.

Testing

Baseline Profiles module wizard

Many times when you run an Android app for the first time on a device, the app can appear to have a slow start time because the operating system has to run just-in-time compilation. To improve this situation, you can create Baseline Profiles that help Android improve aspects like app start-up time, scrolling, and navigation speed in your apps. We are simplifying the process of setting up a Baseline Profile by offering a new Baseline Profile Generator template in the new module wizard (File > New > New Module). This template configures your project to support Baseline Profiles and employs the latest Baseline Profiles Gradle plugin, which simplifies setup by automating required tasks with a single Gradle command.

Baseline Profile Generator in the new module wizard

Furthermore, the template creates a run configuration that enables you to generate a Baseline Profile with a single click from the "Select Run/Debug Configuration" dropdown list.

Generate Baseline Profile drop-down menu
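
For reference, the module the wizard generates applies the Baseline Profile Gradle plugin roughly like the following sketch; the wizard writes the exact configuration for you, and the plugin wires up a generateBaselineProfile Gradle task:

// baselineprofile/build.gradle.kts – sketch of the wizard-generated module
plugins {
    id("com.android.test")
    id("androidx.baselineprofile")
}

baselineProfile {
    // Collect profiles on connected devices (managed devices also work)
    useConnectedDevices = true
}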

Test against configuration changes with the Espresso Device API

Synchronous testing of window size changes using the Espresso Device API

Catch layout problems early and ensure your app delivers a seamless user experience across devices and orientations. The Espresso Device API simulates how your app reacts to configuration changes—such as screen rotation, device folding/unfolding, or window size changes—in a synchronous way on virtual devices. These APIs help you rigorously test and preemptively fix issues that frustrate users so you build more reliable Android apps with confidence. These APIs are built on top of new gRPC endpoints introduced in Android Emulator 34.2, which enables secure bidirectional data streaming and precise sensor simulation.
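
As an illustration, a test using the API might look like this sketch (the assertions are up to your app, and the fold action requires a foldable emulator):

import androidx.test.espresso.device.EspressoDevice.Companion.onDevice
import androidx.test.espresso.device.action.ScreenOrientation
import org.junit.Test

class AdaptiveLayoutTest {
    @Test
    fun layoutAdaptsToLandscape() {
        // Synchronously rotate the virtual device before asserting on the UI
        onDevice().setScreenOrientation(ScreenOrientation.LANDSCAPE)
        // ... assert on your landscape layout ...
    }

    @Test
    fun layoutAdaptsToTabletopMode() {
        // Half-fold the virtual device, screen facing up
        onDevice().setTabletopMode()
        // ... assert on your tabletop layout ...
    }
}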

Pixel 8 and Pixel 8 Pro devices in Android Emulator (34.2)

Test your app on the latest Google Pixel device configurations with the updated Android Virtual Device definitions in Android Studio. With Android Studio Iguana and the latest Android Emulator (34.2+), access the Pixel Fold, Pixel Tablet, Pixel 7a, Pixel 8, and Pixel 8 Pro. Validating your app on these virtual devices is a convenient way to ensure that your app reacts correctly to a variety of screen sizes and device types.

New Pixel Android Virtual Devices in the Android Emulator

Build

Support for Gradle Version Catalogs

Android Studio Iguana streamlines dependency management with its enhanced support for TOML-based Gradle Version Catalogs. You'll benefit from:

    • Centralized dependency management: Keep all your project's dependencies organized in a single file for easier editing and updating.
    • Time-saving features: Enjoy seamless code completion, smart navigation within your code, and the ability to quickly edit project dependencies through the convenient Project Structure dialog.
    • Increased efficiency: Say goodbye to scattered dependencies and manual version updates. Version catalogs give you a more manageable, efficient development workflow.

New projects will automatically use version catalogs for dependency management. If you have an existing project, consider making the switch to benefit from these workflow improvements. To learn how to update to Gradle version catalogs, see Migrate your build to version catalogs.
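
As a small sketch, an alias declared in gradle/libs.versions.toml becomes a type-safe accessor in your build script (the library and version shown are illustrative):

// gradle/libs.versions.toml declares, for example:
//   [libraries]
//   androidx-core-ktx = { group = "androidx.core", name = "core-ktx", version = "1.12.0" }
//
// build.gradle.kts then consumes it with a type-safe accessor:
dependencies {
    implementation(libs.androidx.core.ktx)
}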

Additional SDK insights: policy issues

Android Studio Iguana now proactively alerts you to potential Google Play policy violations through integration with the Google Play SDK Index. Easily see Play policy issues right in your build files and Project Structure Dialog. This streamlines compliance, helping you avoid unexpected publishing delays or rejections on the Google Play Store.

A warning from the Google Play SDK Index in Android Studio’s Project Structure dialog

Android Studio compileSdk version support

Using Android Studio to develop a project that has an unsupported compileSdk version can lead to unexpected errors because older versions of Android Studio may not handle the new Android SDK correctly. To avoid these issues, Android Studio Iguana now explicitly warns you if your project’s intended compileSdk is for a newer version that it does not officially support. If available, it also suggests moving to a version of Android Studio that supports the compileSdk used by your project. Keep in mind that upgrading Android Studio might also require that you upgrade AGP.

Summary

To recap, Android Studio Iguana 🦎 includes the following enhancements and features:

Debugging

    • Version control system integration in App Quality Insights
    • Crashlytics crash variants in App Quality Insights

Design

    • Jetpack Compose UI Check
    • Progressive rendering for Compose Preview

Develop

    • IntelliJ platform update

Testing

    • Baseline Profiles module wizard
    • Espresso Device API
    • Pixel 8 and Pixel 8 Pro devices in Android Emulator

Build

    • Support for Gradle Version Catalogs
    • Additional SDK insights: policy issues
    • compileSdk version support
Download Android Studio Today

Download Android Studio Iguana 🦎 today and take advantage of the latest features to streamline your workflow and help you make better apps. Your feedback is essential – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X (formerly known as Twitter). Let's build the future of Android apps together!

New goodies from Android, Wearables at Mobile World Congress + tune in to a new episode of #TheAndroidShow next week!

Posted by Anirudh Dewani, Director of Android Developer Relations

Earlier today, at Mobile World Congress (MWC), an annual conference showcasing the latest in mobile, Android and our partners unveiled a range of new goodies, including new wearables and foldables, as well as a number of new features for Android users. Keep reading below to see how you, as developers, can take advantage of these new features and devices. And in just over a week, on Thursday, March 7 at 10AM PT, we’ll be kicking off another episode of #TheAndroidShow, our quarterly live show on YouTube and on developer.android.com, where we’ll dive deeper into these topics.


Meet the new watch from OnePlus and how we’re boosting power with the Wear OS hybrid interface

Wearables are on display across MWC this week, and one of our favorites is the OnePlus Watch 2, powered by the latest version of Wear OS (Wear OS 4). As part of our ongoing work to improve the Wear OS by Google user experience, we’ve made fundamental changes to the platform and substantially expanded the capabilities of the Wear OS hybrid interface, improving two key areas: power and performance. As a developer, you can leverage existing Wear OS APIs to get these optimizations without any added effort – no code changes required! You can read more about the updates here.

Images of three people wearing the OnePlus Watch 2

A few new features for Android users

Google released 9 new features Android users can take advantage of across Google apps; you can read more about those features here. For developers, we wanted to highlight a few ways you can take advantage of this news across experiences you build into your apps:

    • More places for users to see their Health Connect data, now in the Fitbit app: With permission from your users, Health Connect is a central way to connect and sync their favorite health and fitness apps, see all their data in one place, and stay in control of their privacy. By setting up Health Connect in the Fitbit mobile app for Android, users will have an overview of their health and fitness data from across their apps in one place. You can join developers like Peloton, ŌURA, and Lifesum who are using Health Connect to provide their users with deeper health and fitness insights. Get started now!

A new episode of #TheAndroidShow, live on March 7 at 10AM PT. Send us your #AskAndroid questions now!

You can join us on March 7 at 10AM PT for a new episode of #TheAndroidShow. In this quarterly show, we’ll unpack the latest Android foldables and large screens for you to get building on, plus a behind-the-scenes on Gemini Nano and AICore.

We’ll have a live #AskAndroid Q&A with the team about building Android; you can ask us about building excellent apps across devices, Android 15, Compose, Gemini and more, using #AskAndroid on X or on YouTube. Our experts are ready to answer your questions live!

#TheAndroidShow: March 7 at 10AM PT, broadcast live on YouTube and d.android.com/events/show!

Wear OS hybrid interface: Boosting power and performance

Posted by Kseniia Shumelchyk, Android Developer Relations Engineer

In collaboration with our hardware partners, we’ve continued to prioritize the Wear OS by Google user experience. As such, we’ve made fundamental design changes to the platform and substantially expanded the capabilities of the Wear OS hybrid interface that improve two key areas: power and performance.

With the OnePlus Watch 2, powered by the latest version of Wear OS (Wear OS 4), the dual-chipset architecture works with our hybrid interface to get both chips working better in tandem. This enables even more use cases to benefit from dramatically extended battery life of up to 100 hours of regular use, with all functionality accessible in Smart Mode.

Together, we’ve created a premium smartwatch experience that doesn’t compromise the advanced feature set or battery life. In this post, we’ll share how you can benefit from these changes when building experiences for Wear OS.

On the edge of innovation: redesigned smartwatch architecture

Wear OS smartwatches have a dual-chipset architecture comprising a powerful application processor (AP) and an ultra low-power co-processor microcontroller unit (MCU). The AP is capable of handling complex operations en masse, and is seamlessly coupled with the low-power MCU.

The Wear OS hybrid interface enables intelligent switching between the MCU or the AP, allowing the AP to be suspended when not needed to preserve battery life. It helps, for instance, achieve more power-efficient experiences, like sensor data processing on the MCU while the AP is asleep. At the same time, the hybrid interface provides a seamless transition between these states, keeping a rich and premium user experience without jarring transitions between power modes.


Connectivity and notification experience

To enhance connectivity-reliant interactions like notifications and phone calls, OnePlus utilized the notification API in the hybrid interface, enabling the MCU to process regular notification experiences and reduce the need to activate the AP.

For example, bridged notifications will be delivered to the watch without waking up the high-performance AP. Users can read and dismiss these notifications while the watch is still powered by the MCU. The MCU can also handle wearable-specific actions in notifications, such as quick replies or remote actions.

What this means for development

You can leverage existing Wear OS APIs to get these optimizations without any added effort – no code changes required!

Notifications

The notification hybrid interface enables seamless transitions between power modes to work with the Wear OS notification stack. You get the best notification performance by using the Notification API.
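
Concretely, a standard notification is all it takes; the bridged copy can be shown, read, and dismissed while only the MCU is awake. A minimal sketch follows, where the channel, icon resource, and IDs are illustrative:

import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

// An ordinary notification – no hybrid-interface-specific API is involved
val notification = NotificationCompat.Builder(context, "messages_channel")
    .setSmallIcon(R.drawable.ic_message) // illustrative app resource
    .setContentTitle("New message")
    .setContentText("See you at 6?")
    .build()

NotificationManagerCompat.from(context).notify(/* id= */ 1001, notification)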

Health & Fitness experiences

The Wear OS hybrid interface also elevates the fitness experience with more precise workout tracking, automatic sports recognition and smarter health data monitoring. All of these can be offered to users without compromising battery life.

Starting with Wear OS 3, developers use Health Services on Wear OS to gain access to sensor data. The health hybrid interface works under the hood to enable power optimizations by batching sensor data on the MCU and periodically updating developer apps through the Health Services API on the AP.
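
As a sketch, subscribing to heart rate data through Health Services looks like the following; on supported hardware, the MCU batching happens underneath this same API:

import androidx.health.services.client.HealthServices
import androidx.health.services.client.MeasureCallback
import androidx.health.services.client.data.Availability
import androidx.health.services.client.data.DataPointContainer
import androidx.health.services.client.data.DataType
import androidx.health.services.client.data.DeltaDataType

val measureClient = HealthServices.getClient(context).measureClient

val callback = object : MeasureCallback {
    override fun onAvailabilityChanged(
        dataType: DeltaDataType<*, *>,
        availability: Availability
    ) {
        // The sensor became available or unavailable
    }

    override fun onDataReceived(data: DataPointContainer) {
        // Batched samples arrive here, regardless of which chip did the work
        val samples = data.getData(DataType.HEART_RATE_BPM)
    }
}

measureClient.registerMeasureCallback(DataType.HEART_RATE_BPM, callback)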

Watch Faces

With Wear OS 4, we launched the Watch Face Format, a declarative XML format to create customizable and power-efficient watch faces.

The platform has created capabilities to implement Watch Face Format rendering on the MCU, so using the new format helps future-proof certain watch faces to take advantage of emerging optimizations in future devices for better battery usage.

Check out the watch face format documentation and design guidelines for Wear OS watch faces.

Expand your reach with Wear OS

With the additions to the Wear OS smartwatch ecosystem and expanded device capabilities, it's an ideal time to build experiences for smartwatches that can reach more users and benefit your business.

To begin developing apps for Wear OS, try our Compose for Wear OS codelab, and check out the documentation and samples.

Read more about developer updates in Wear OS 4, and how you can get your apps ready for the latest Wear OS watches.

We can’t wait to see what experiences you’ll build!

Easily add document scanning capability to your app with ML Kit Document Scanner API

Posted by Thomas Ezan – Sr. Developer Relations Engineer; Chengji Yan, Penny Li – ML Kit Engineers; David Miro Llopis – Product Manager

We are excited to announce the launch of the ML Kit Document Scanner API. This new API makes it easy to add advanced document scanning capabilities with a high-quality and consistent user interface to your Android app. The ML Kit Document Scanner API enables your users to quickly and easily digitize paper documents.

Like the other ML Kit APIs, the ML Kit Document Scanner API enables you to seamlessly integrate features powered by Machine Learning (ML) without any ML knowledge.

ml kit document scanner illustration

Why Document Scanner SDK?

Despite the digital revolution, paper documents and printouts are still present in our everyday life. Some of our most important documents are still physical (identity documents, receipts, etc.).

The ML Kit Document Scanner API offers a number of benefits, including:

    • A high-quality and consistent user interface for digitizing physical documents.
    • Accurate document detection with precise corner and edge detection for a seamless scanning experience and optimal scanning results.
    • Flexible functionality that allows users to crop scanned documents, apply filters, remove fingers, remove stains and other blemishes, and send digitized files in PDF and JPEG formats back to your app.
    • On-device processing helps preserve privacy.
    • A complete solution eliminating the need for camera permission.

The ML Kit Document Scanner API is already used by the Google Drive Android application and the Google Pixel Camera.

ML Kit Document Scanner API in action in Google Drive

Get started

The ML Kit Document Scanner API requires Android API level 21 or above. The models, scanning logic, and UI flow are dynamically downloaded via Google Play services so the ML Kit Document Scanner API has a minimal impact on your app size.

To integrate it in your app, start by configuring the scanner options and getting a scanner client:

val options = GmsDocumentScannerOptions.Builder()
    .setGalleryImportAllowed(false)
    .setPageLimit(2)
    .setResultFormats(RESULT_FORMAT_JPEG, RESULT_FORMAT_PDF)
    .setScannerMode(SCANNER_MODE_FULL)
    .build()
val scanner = GmsDocumentScanning.getClient(options)

Then register an ActivityResultCallback to receive the scanning results:

val scannerLauncher = registerForActivityResult(StartIntentSenderForResult()) { result ->
    if (result.resultCode == RESULT_OK) {
        val scanningResult =
            GmsDocumentScanningResult.fromActivityResultIntent(result.data)
        // Handle the scanned pages (JPEG)
        scanningResult?.getPages()?.let { pages ->
            for (page in pages) {
                val imageUri = page.getImageUri()
            }
        }
        // Handle the scanned PDF
        scanningResult?.getPdf()?.let { pdf ->
            val pdfUri = pdf.getUri()
            val pageCount = pdf.getPageCount()
        }
    }
}

Finally launch the document scanner activity:

scanner.getStartScanIntent(activity)
    .addOnSuccessListener { intentSender ->
        scannerLauncher.launch(IntentSenderRequest.Builder(intentSender).build())
    }
    .addOnFailureListener { ... }

To get started with the ML Kit Document Scanner API, visit the documentation. We can’t wait to see what you’ll build with it!

The First Developer Preview of Android 15

Posted by Dave Burke, VP of Engineering

We're releasing the first Developer Preview of Android 15 today so you, our developers, can collaborate with us to build a better Android.

Android 15 continues our work to build a platform that helps improve your productivity while giving you new capabilities to produce superior media experiences, minimize battery impact, maximize smooth app performance, and protect user privacy and security all on the most diverse lineup of devices out there.

Android enables your apps to take advantage of premium device hardware, including high-end camera capabilities, powerful GPUs, dazzling displays, and AI processing. The demand for large-screen devices, including tablets, foldables and flippables, continues to grow, offering an opportunity to reach high-value users. Also, Android is committed to providing tooling and libraries to help your apps take advantage of the latest advances in AI.

Your feedback on the Android 15 Developer Preview and QPR beta program plays a key role in helping Android continuously improve. The Android 15 developer site has more information about the preview, including downloads for Pixel and detailed documentation about changes. This preview is just the beginning, and we’ll have lots more to share as we move through the release cycle. Thank you in advance for your help in making Android a platform that works for everyone.

Protecting user privacy and security

Android is constantly working to create solutions that maximize user privacy and security.

Privacy Sandbox on Android

Android 15 brings Android Ad Services up to extension level 10, incorporating the latest version of the Privacy Sandbox on Android, part of our work to develop new technologies that improve user privacy and enable effective, personalized advertising experiences for mobile apps. Our website has more about the Privacy Sandbox on Android developer preview and beta programs to help you get started.

Health Connect

Android 15 integrates with Android 14 extensions 10 around Health Connect by Android, a secure and centralized platform to manage and share app-collected health and fitness data. This update adds support for new data types across fitness, nutrition, and more.

File integrity

Android 15's FileIntegrityManager includes new APIs that tap into the power of the fs-verity feature in the Linux kernel. With fs-verity, files can be protected by custom cryptographic signatures, helping you ensure they haven't been tampered with or corrupted. This leads to enhanced security, protecting against potential malware or unauthorized file modifications that could compromise your app's functionality or data.
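
Here’s a sketch of how the new APIs could be used on a file your app has already written; the file name is illustrative, and you should check the Android 15 reference for the final API shape:

import android.security.FileIntegrityManager
import java.io.File

val fim = context.getSystemService(FileIntegrityManager::class.java)
val file = File(context.filesDir, "model.bin") // illustrative file

// Enable fs-verity: the file becomes immutable and reads are verified
fim.setupFsVerity(file)

// A stable digest of the protected file that you can sign or compare later
val digest = fim.getFsVerityDigest(file)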

Partial screen sharing

Android 15 supports partial screen sharing so users can share or record just an app window rather than the entire device screen. This feature, enabled first in Android 14 QPR2, includes MediaProjection callbacks that allow your app to customize the partial screen sharing experience. Note that user consent is now required for each MediaProjection capture session.
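
For example, the callbacks let you react when the captured app window resizes or is hidden. A minimal sketch, assuming mediaProjection is an active MediaProjection session:

import android.media.projection.MediaProjection
import android.os.Handler
import android.os.Looper

val callback = object : MediaProjection.Callback() {
    override fun onCapturedContentResize(width: Int, height: Int) {
        // The shared app window changed size – resize your virtual display
    }

    override fun onCapturedContentVisibilityChanged(isVisible: Boolean) {
        // Pause or resume consuming frames while the window is hidden
    }

    override fun onStop() {
        // The user ended the capture session – release your resources
    }
}

mediaProjection.registerCallback(callback, Handler(Looper.getMainLooper()))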

Supporting creators

Android continues its work to give you access to tools and hardware that support creators in bringing their vision to life on Android.

In-app Camera Controls

Android 15 adds new extensions for more control over the camera hardware and its algorithms on supported devices.

Virtual MIDI 2.0 Devices

Android 13 added support for connecting to MIDI 2.0 devices via USB, which communicate using Universal MIDI Packets (UMP). Android 15 extends UMP support to virtual MIDI apps, enabling composition apps to control synthesizer apps as a virtual MIDI 2.0 device just like they would with a USB MIDI 2.0 device.

Performance and quality

Android continues its focus on helping you improve the quality of your apps. Much of this focus is around tooling and libraries, including Jetpack Compose, Android Studio, and more.

Dynamic Performance

Android 15 continues our investment in the Android Dynamic Performance Framework (ADPF), a set of APIs that allow games and performance intensive apps to interact more directly with power and thermal systems of Android devices. On supported devices, Android 15 will add new ADPF capabilities:

    • A power-efficiency mode for hint sessions to indicate that their associated threads should prefer power saving over performance, great for long-running background workloads.
    • GPU and CPU work durations can both be reported in hint sessions, allowing the system to adjust CPU and GPU frequencies together to best meet workload demands.

To learn more about how to use ADPF in your apps and games, head over to the documentation.
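
As a sketch of the flow on supported devices, where the power-efficiency call reflects the Android 15 addition above and the target duration is illustrative:

import android.os.PerformanceHintManager
import android.os.Process

val hintManager = context.getSystemService(PerformanceHintManager::class.java)

// One hint session per group of threads doing related work
val session = hintManager.createHintSession(
    intArrayOf(Process.myTid()),
    /* initialTargetWorkDurationNanos= */ 16_666_666L // ~one 60Hz frame
)

// Android 15: tell the system these threads prefer power saving
session?.setPreferPowerEfficiency(true)

val start = System.nanoTime()
// ... perform one unit of work ...
session?.reportActualWorkDuration(System.nanoTime() - start)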

Developer Productivity

Android 15 continues to add OpenJDK APIs, including quality-of-life improvements around NIO buffers, streams, security, and more. These APIs are updated on over a billion devices running Android 12+ through Google Play System updates, so you can target the latest programming features.

App compatibility

Android 15 development timeline: Developer Previews on schedule in February

To give you more time to plan for app compatibility work, we’re letting you know our Platform Stability milestone well in advance.

At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. We’re expecting to reach Platform Stability in June 2024, and from that time you’ll have several months before the official release to do your final testing. The release timeline details are here.

Get started with Android 15

The Developer Preview has everything you need to try the Android 15 features, test your apps, and give us feedback. You can get started today by flashing a system image onto a Pixel 6, 7, or 8 series device, along with the Pixel Fold and Pixel Tablet. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.

For the best development experience with Android 15, we recommend that you use the latest preview of Android Studio Jellyfish (or more recent Jellyfish+ versions). Once you’re set up, here are some of the things you should do:

    • Try the new features and APIs – your feedback is critical during the early part of the developer preview. Report issues in our tracker on the feedback page.
    • Test your current app for compatibility – learn whether your app is affected by changes in Android 15; install your app onto a device or emulator running Android 15 and extensively test it.

We’ll update the preview system images and SDK regularly throughout the Android 15 release cycle. This initial preview release is for developers only and not intended for daily or consumer use, so we're making it available by manual download only. Once you’ve manually installed a preview build, you’ll automatically get future updates over-the-air for all later previews and Betas. Read more here.

If you intend to move from the Android 14 QPR Beta program to the Android 15 Developer Preview program and don't want to have to wipe your device, we recommend that you move to Developer Preview 1 now. Otherwise you may run into time periods where the Android 14 Beta has a more recent build date, which will prevent you from going directly to the Android 15 Developer Preview without doing a data wipe.

As we reach our Beta releases, we'll be inviting consumers to try Android 15 as well, and we'll open up enrollment for the Android Beta program at that time. For now, please note that the Android Beta program is not yet available for Android 15.

For complete information, visit the Android 15 developer site.


Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.

#WeArePlay | How two sea turtle enthusiasts are revolutionizing marine conservation

Posted by Leticia Lago – Developer Marketing

When environmental science student Caitlin returned home from a trip monitoring sea turtles in Western Australia, she was inspired to create a conservation tool that could improve tracking of the species. She connected with French developer and fellow marine life enthusiast Nicolas to design their app We Spot Turtles!, allowing anyone to support tracking efforts by uploading pictures of turtles spotted in the wild.

Caitlin and Nicolas shared their journey in our latest film for #WeArePlay, which showcases the amazing stories behind apps and games on Google Play. We caught up with the pair to find out more about their passion and how they are making strides towards advancing sea turtle conservation.

Tell us about how you both got interested in sea turtle conservation?

Caitlin: A few years ago, I did a sea turtle monitoring program for the Department of Biodiversity, Conservation and Attractions in Western Australia. It was probably one of the most magical experiences of my life. After that, I decided I only really wanted to work with sea turtles.

Nicolas: In 2010, in French Polynesia, I volunteered with a sea turtle protection project. I was moved by the experience, and when I came back to France, I knew I wanted to use my tech background to create something inspired by the trip.

How did these experiences lead you to create We Spot Turtles!?

Caitlin: There are seven species of sea turtle, and all are critically endangered. Or rather, there’s not enough data on them to determine an accurate endangerment status. This means the needs of the species are going unmet and sea turtles are silently going extinct. Our inspiration is essentially to better track sea turtles so that conservation can be improved.

Nicolas: When I returned to France after monitoring sea turtles, I knew I wanted to make an app inspired by my experience. However, I had put the project on hold for a while. Then, when a friend sent me Caitlin’s social media post looking for a developer for a sea turtle conservation app, it re-ignited my inspiration, and we teamed up to make it together.

close up image of a turtle resting in a reef underwater

What does We Spot Turtles! do?

Caitlin: Essentially, members of the public upload images of sea turtles they spot – and even get to name them. Then, the app automatically geolocates, giving us a date and timestamp of when and where the sea turtle was located. This allows us to track turtles and improve our conservation efforts.

How do you use artificial intelligence in the app?

Caitlin: The advancements in AI in recent years have given us the opportunity to make a bigger impact than we would have been able to otherwise. The machine learning model that Nicolas created uses the facial scale and pigmentations of the turtles to not only identify its species, but also to give that sea turtle a unique code for tracking purposes. Then, if it is photographed by someone else in the future, we can see on the app where it's been spotted before.

How has Google Play supported your journey?

Caitlin: Launching our app on Google Play has allowed us to reach a global audience. We now have communities in Exmouth in Western Australia, Manly Beach in Sydney, and have 6 countries in total using our app already. Without Google Play, we wouldn't have the ability to connect on such a global scale.

Nicolas: I’m a mobile application developer and I use Google’s Flutter framework. I knew Google Play was a good place to release our title as it easily allows us to work on the platform. As a result, we’ve been able to make the app great.

Photo of Caitlin and Nicolas on the beach in Australia at sunset. Both are kneeling in the sand. Caitlin is using her phone to identify something in the distance, gesturing to Nicolas, who is looking in the same direction.

What do you hope to achieve with We Spot Turtles!?

Caitlin: We Spot Turtles! puts data collection in the hands of the people. It’s giving everyone the opportunity to make an impact in sea turtle conservation. Because of this, we believe that we can massively alter and redefine conservation efforts and enhance people’s engagement with the natural world.

What are your plans for the future?

Caitlin: Nicolas and I have some big plans. We want to branch out into other species. We'd love to do whale sharks, birds, and red pandas. Ultimately, we want to achieve our goal of improving the conservation of various species and animals around the world.


Discover other inspiring app and game founders featured in #WeArePlay.


