
Wear OS hybrid interface: Boosting power and performance

Posted by Kseniia Shumelchyk, Android Developer Relations Engineer

In collaboration with our hardware partners, we’ve continued to prioritize the Wear OS by Google user experience. As such, we’ve made fundamental design changes to the platform and substantially expanded the capabilities of the Wear OS hybrid interface that improve two key areas: power and performance.

With the OnePlus Watch 2, powered by the latest version of Wear OS (Wear OS 4), the dual-chipset architecture works with our hybrid interface so that both chips work better in tandem. This enables even more use cases to benefit from dramatically extended battery life of up to 100 hours of regular use, with all functionality accessible in Smart Mode.

Together, we’ve created a premium smartwatch experience that doesn’t compromise the advanced feature set or battery life. In this post, we’ll share how you can benefit from these changes when building experiences for Wear OS.

On the edge of innovation: redesigned smartwatch architecture

Wear OS smartwatches have a dual-chipset architecture that pairs a powerful application processor (AP) with an ultra low-power co-processor microcontroller unit (MCU). The AP is capable of handling complex operations en masse and is seamlessly coupled with the low-power MCU.

The Wear OS hybrid interface enables intelligent switching between the MCU and the AP, allowing the AP to be suspended when it isn't needed to preserve battery life. This enables, for instance, more power-efficient experiences such as sensor data processing on the MCU while the AP is asleep. At the same time, the hybrid interface provides a seamless transition between these states, keeping a rich and premium user experience without jarring transitions between power modes.


Connectivity and notification experience

To enhance connectivity-reliant interactions like notifications and phone calls, OnePlus used the notification API in the hybrid interface, enabling the MCU to handle regular notification experiences and reducing the need to wake the AP.

For example, bridged notifications will be delivered to the watch without waking up the high-performance AP. Users can read and dismiss these notifications while the watch is still powered by the MCU. The MCU can also handle wearable-specific actions in notifications, such as quick replies or remote actions.

What this means for development

You can leverage existing Wear OS APIs to get these optimizations without any added effort – no code changes required!

Notifications

The notification hybrid interface works with the Wear OS notification stack to enable seamless transitions between power modes. You get the best notification performance simply by using the Notification API.
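
To make this concrete, here is a minimal sketch of posting a standard notification with a wearable-friendly quick-reply action using NotificationCompat from AndroidX core. The channel id, icons, text, and reply PendingIntent are illustrative placeholders; the hybrid interface handles the power-mode switching for notifications like this automatically.

import android.app.PendingIntent
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.core.app.RemoteInput

// Posts a notification with a quick-reply action. Bridged to a hybrid-interface
// watch, it can be read, dismissed, and replied to while the AP stays suspended.
// Assumes a "messages" channel already exists and POST_NOTIFICATIONS is granted.
fun postMessageNotification(context: Context, replyIntent: PendingIntent) {
    val remoteInput = RemoteInput.Builder("key_text_reply") // arbitrary result key
        .setLabel("Reply")
        .build()

    val replyAction = NotificationCompat.Action.Builder(
        android.R.drawable.ic_menu_send, "Reply", replyIntent
    )
        .addRemoteInput(remoteInput)
        .build()

    val notification = NotificationCompat.Builder(context, "messages")
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle("New message")
        .setContentText("Are we still on for lunch?")
        .addAction(replyAction)
        .build()

    NotificationManagerCompat.from(context).notify(1001, notification)
}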

Health & Fitness experiences

The Wear OS hybrid interface also elevates the fitness experience with more precise workout tracking, automatic sports recognition and smarter health data monitoring. All of these can be offered to users without compromising battery life.

Starting with Wear OS 3, developers use Health Services on Wear OS to gain access to sensor data. The health hybrid interface works under the hood to enable power optimizations by batching sensor data on the MCU and periodically updating developer apps through the Health Services API on the AP.
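
As a rough sketch of what this looks like in app code (assuming the androidx.health.services.client library and heart rate as the example data type), registering a measure callback is all that's required; any batching on the MCU happens under the hood:

import android.content.Context
import androidx.health.services.client.HealthServices
import androidx.health.services.client.MeasureCallback
import androidx.health.services.client.data.Availability
import androidx.health.services.client.data.DataPointContainer
import androidx.health.services.client.data.DataType
import androidx.health.services.client.data.DeltaDataType

// Receives heart-rate updates through Health Services. On hybrid devices the
// platform may batch the underlying sensor data on the MCU and deliver it here
// periodically; the app-side code is identical either way.
fun startHeartRateUpdates(context: Context) {
    val measureClient = HealthServices.getClient(context).measureClient

    val callback = object : MeasureCallback {
        override fun onAvailabilityChanged(
            dataType: DeltaDataType<*, *>,
            availability: Availability
        ) {
            // e.g. the sensor becomes unavailable while the watch is off-body
        }

        override fun onDataReceived(data: DataPointContainer) {
            data.getData(DataType.HEART_RATE_BPM).forEach { sample ->
                println("Heart rate: ${sample.value} bpm")
            }
        }
    }

    measureClient.registerMeasureCallback(DataType.HEART_RATE_BPM, callback)
}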

Watch Faces

With Wear OS 4, we launched the Watch Face Format, a declarative XML format to create customizable and power-efficient watch faces.

The platform can now render the Watch Face Format on the MCU, so adopting the new format helps future-proof watch faces to take advantage of emerging battery optimizations on future devices.

Check out the watch face format documentation and design guidelines for Wear OS watch faces.

Expand your reach with Wear OS

With the additions to the Wear OS smartwatch ecosystem and expanded device capabilities, it's an ideal time to build experiences for smartwatches that can reach more users and benefit your business.

To begin developing apps for Wear OS, try our Compose for Wear OS codelab, and check out the documentation and samples.

Read more about developer updates in Wear OS 4, and how you can get your apps ready for the latest Wear OS watches.

We can’t wait to see what experiences you’ll build!

Easily add document scanning capability to your app with ML Kit Document Scanner API

Posted by Thomas Ezan – Sr. Developer Relations Engineer; Chengji Yan, Penny Li – ML Kit Engineers; David Miro Llopis – Product Manager

We are excited to announce the launch of the ML Kit Document Scanner API. This new API makes it easy to add advanced document scanning capabilities with a high-quality and consistent user interface to your Android app. The ML Kit Document Scanner API enables your users to quickly and easily digitize paper documents.

Like the other ML Kit APIs, the ML Kit Document Scanner API enables you to seamlessly integrate features powered by Machine Learning (ML) without any ML knowledge.


Why Document Scanner SDK?

Despite the digital revolution, paper documents and printouts are still present in our everyday life. Some of our most important documents are still physical (identity documents, receipts, etc.).

The ML Kit Document Scanner API offers a number of benefits, including:

    • A high-quality and consistent user interface for digitizing physical documents.
    • Accurate document detection with precise corner and edge detection for a seamless scanning experience and optimal scanning results.
    • Flexible functionality that lets users crop scanned documents, apply filters, remove fingers, stains, and other blemishes, and send digitized files back to your app in PDF and JPEG formats.
    • On-device processing helps preserve privacy.
    • A complete solution eliminating the need for camera permission.

The ML Kit Document Scanner API is already used by the Google Drive Android app and the Google Pixel Camera.

ML Kit Document Scanner API in action in Google Drive

Get started

The ML Kit Document Scanner API requires Android API level 21 or above. The models, scanning logic, and UI flow are dynamically downloaded via Google Play services so the ML Kit Document Scanner API has a minimal impact on your app size.

To integrate it in your app, start by configuring the scanner options and getting a scanner client:

val options = GmsDocumentScannerOptions.Builder()
    .setGalleryImportAllowed(false)
    .setPageLimit(2)
    .setResultFormats(RESULT_FORMAT_JPEG, RESULT_FORMAT_PDF)
    .setScannerMode(SCANNER_MODE_FULL)
    .build()
val scanner = GmsDocumentScanning.getClient(options)

Then register an ActivityResultCallback to receive the scanning results:

val scannerLauncher = registerForActivityResult(StartIntentSenderForResult()) { result ->
  if (result.resultCode == RESULT_OK) {
    val scanningResult =
      GmsDocumentScanningResult.fromActivityResultIntent(result.data)
    scanningResult?.getPages()?.let { pages ->
      for (page in pages) {
        val imageUri = page.getImageUri()
      }
    }
    scanningResult?.getPdf()?.let { pdf ->
      val pdfUri = pdf.getUri()
      val pageCount = pdf.getPageCount()
    }
  }
}

Finally launch the document scanner activity:

scanner.getStartScanIntent(activity)
  .addOnSuccessListener { intentSender ->   
    scannerLauncher.launch(IntentSenderRequest.Builder(intentSender).build())
  }
  .addOnFailureListener { ... }

To get started with the ML Kit Document Scanner API, visit the documentation. We can’t wait to see what you’ll build with it!

The First Developer Preview of Android 15

Posted by Dave Burke, VP of Engineering

We're releasing the first Developer Preview of Android 15 today so you, our developers, can collaborate with us to build a better Android.

Android 15 continues our work to build a platform that helps improve your productivity while giving you new capabilities to produce superior media experiences, minimize battery impact, maximize smooth app performance, and protect user privacy and security, all on the most diverse lineup of devices out there.

Android enables your apps to take advantage of premium device hardware, including high-end camera capabilities, powerful GPUs, dazzling displays, and AI processing. The demand for large-screen devices, including tablets, foldables and flippables, continues to grow, offering an opportunity to reach high-value users. Also, Android is committed to providing tooling and libraries to help your apps take advantage of the latest advances in AI.

Your feedback on the Android 15 Developer Preview and QPR beta program plays a key role in helping Android continuously improve. The Android 15 developer site has more information about the preview, including downloads for Pixel and detailed documentation about changes. This preview is just the beginning, and we’ll have lots more to share as we move through the release cycle. Thank you in advance for your help in making Android a platform that works for everyone.

Protecting user privacy and security

Android is constantly working to create solutions that maximize user privacy and security.

Privacy Sandbox on Android

Android 15 brings Android AdServices up to extension level 10, incorporating the latest version of the Privacy Sandbox on Android, part of our work to develop new technologies that improve user privacy and enable effective, personalized advertising experiences for mobile apps. Our website has more about the Privacy Sandbox on Android developer preview and beta programs to help you get started.

Health Connect

Android 15 integrates the Android 14 extension level 10 updates around Health Connect by Android, a secure and centralized platform to manage and share app-collected health and fitness data. This update adds support for new data types across fitness, nutrition, and more.

File integrity

Android 15's FileIntegrityManager includes new APIs that tap into the power of the fs-verity feature in the Linux kernel. With fs-verity, files can be protected by custom cryptographic signatures, helping you ensure they haven't been tampered with or corrupted. This leads to enhanced security, protecting against potential malware or unauthorized file modifications that could compromise your app's functionality or data.
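
As a rough sketch of how this could look in app code, the setupFsVerity and getFsVerityDigest calls below are based on the Android 15 preview API surface described here and may change before the final release:

import android.content.Context
import android.security.FileIntegrityManager
import java.io.File

// Enables fs-verity protection on a file the app has finished writing, then reads
// back its digest. Once protected, the kernel verifies the file's contents on every
// read, so tampering or corruption surfaces as an error instead of bad data.
fun protectDownloadedFile(context: Context, file: File) {
    val integrityManager = context.getSystemService(FileIntegrityManager::class.java)

    integrityManager.setupFsVerity(file) // preview API; name may change

    val digest = integrityManager.getFsVerityDigest(file)
    println("fs-verity digest: " + digest?.joinToString("") { "%02x".format(it) })
}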

Partial screen sharing

Android 15 supports partial screen sharing so users can share or record just an app window rather than the entire device screen. This feature, enabled first in Android 14 QPR2, includes MediaProjection callbacks that allow your app to customize the partial screen sharing experience. Note that user consent is now required for each MediaProjection capture session.
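
As an illustration, a capture service that already holds a MediaProjection can observe the user's partial-screen selection through these callbacks; the sketch below assumes a compileSdk of 34 or higher and an existing projection instance.

import android.media.projection.MediaProjection
import android.os.Handler
import android.os.Looper

// Reacts to changes in the shared app window during partial screen sharing.
fun observeCapturedContent(projection: MediaProjection) {
    val callback = object : MediaProjection.Callback() {
        override fun onCapturedContentResize(width: Int, height: Int) {
            // The shared window changed size; resize the virtual display to match.
        }

        override fun onCapturedContentVisibilityChanged(isVisible: Boolean) {
            // The shared window is hidden or visible again; pause or resume the
            // stream, or show a placeholder frame while it is hidden.
        }

        override fun onStop() {
            // The user revoked consent or the session ended; release capture resources.
        }
    }
    projection.registerCallback(callback, Handler(Looper.getMainLooper()))
}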

Supporting creators

Android continues its work to give you access to tools and hardware to support creators to bring their vision to life on Android.

In-app Camera Controls

Android 15 adds new extensions for more control over the camera hardware and its algorithms on supported devices.

Virtual MIDI 2.0 Devices

Android 13 added support for connecting to MIDI 2.0 devices via USB, which communicate using Universal MIDI Packets (UMP). Android 15 extends UMP support to virtual MIDI apps, enabling composition apps to control synthesizer apps as a virtual MIDI 2.0 device just like they would with a USB MIDI 2.0 device.

Performance and quality

Android continues its focus on helping you improve the quality of your apps. Much of this focus is around tooling and libraries, including Jetpack Compose, Android Studio, and more.

Dynamic Performance

Android 15 continues our investment in the Android Dynamic Performance Framework (ADPF), a set of APIs that allow games and performance intensive apps to interact more directly with power and thermal systems of Android devices. On supported devices, Android 15 will add new ADPF capabilities:

    • A power-efficiency mode for hint sessions to indicate that their associated threads should prefer power saving over performance, great for long-running background workloads.
    • GPU and CPU work durations can both be reported in hint sessions, allowing the system to adjust CPU and GPU frequencies together to best meet workload demands.

To learn more about how to use ADPF in your apps and games, head over to the documentation.
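
For context, the baseline hint-session flow available since Android 12 looks like the sketch below; the Android 15 additions above (the power-efficiency preference and GPU work durations) layer onto the same session object on supported devices.

import android.content.Context
import android.os.PerformanceHintManager
import android.os.Process

// Creates a hint session for the current thread with a target duration per unit of
// work (for a game, typically the frame time), then reports how long work actually took.
fun createRenderHintSession(
    context: Context,
    targetWorkDurationNanos: Long
): PerformanceHintManager.Session? {
    val hintManager = context.getSystemService(PerformanceHintManager::class.java) ?: return null
    val tids = intArrayOf(Process.myTid()) // threads doing the performance-critical work
    return hintManager.createHintSession(tids, targetWorkDurationNanos)
}

fun onWorkFinished(session: PerformanceHintManager.Session, actualDurationNanos: Long) {
    // Lets the system adjust clock frequencies to meet the target; Android 15 adds
    // a variant of this reporting that can also include GPU durations.
    session.reportActualWorkDuration(actualDurationNanos)
}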

Developer Productivity

Android 15 continues to add OpenJDK APIs, including quality-of-life improvements around NIO buffers, streams, security, and more. These APIs are updated on over a billion devices running Android 12+ through Google Play System updates, so you can target the latest programming features.

App compatibility

Image of Android 15 Development timeline, indicating we are on time with Developer Previews in February

To give you more time to plan for app compatibility work, we’re letting you know our Platform Stability milestone well in advance.

At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. We’re expecting to reach Platform Stability in June 2024, and from that time you’ll have several months before the official release to do your final testing. The release timeline details are here.

Get started with Android 15

The Developer Preview has everything you need to try the Android 15 features, test your apps, and give us feedback. You can get started today by flashing a system image onto a Pixel 6, 7, or 8 series device, along with the Pixel Fold and Pixel Tablet. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.

For the best development experience with Android 15, we recommend that you use the latest preview of Android Studio Jellyfish (or more recent Jellyfish+ versions). Once you’re set up, here are some of the things you should do:

    • Try the new features and APIs – your feedback is critical during the early part of the developer preview. Report issues in our tracker on the feedback page.
    • Test your current app for compatibility – learn whether your app is affected by changes in Android 15; install your app onto a device or emulator running Android 15 and extensively test it.

We’ll update the preview system images and SDK regularly throughout the Android 15 release cycle. This initial preview release is for developers only and not intended for daily or consumer use, so we're making it available by manual download only. Once you’ve manually installed a preview build, you’ll automatically get future updates over-the-air for all later previews and Betas. Read more here.

If you intend to move from the Android 14 QPR Beta program to the Android 15 Developer Preview program and don't want to have to wipe your device, we recommend that you move to Developer Preview 1 now. Otherwise you may run into time periods where the Android 14 Beta will have a more recent build date, which will prevent you from going directly to the Android 15 Developer Preview without doing a data wipe.

As we reach our Beta releases, we'll be inviting consumers to try Android 15 as well, and we'll open up enrollment for the Android Beta program at that time. For now, please note that the Android Beta program is not yet available for Android 15.

For complete information, visit the Android 15 developer site.


Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.

#WeArePlay | How two sea turtle enthusiasts are revolutionizing marine conservation

Posted by Leticia Lago – Developer Marketing

When environmental science student Caitlin returned home from a trip monitoring sea turtles in Western Australia, she was inspired to create a conservation tool that could improve tracking of the species. She connected with a French developer and fellow marine life enthusiast Nicolas to design their app We Spot Turtles!, allowing anyone to support tracking efforts by uploading pictures of them spotted in the wild.

Caitlin and Nicolas shared their journey in our latest film for #WeArePlay, which showcases the amazing stories behind apps and games on Google Play. We caught up with the pair to find out more about their passion and how they are making strides towards advancing sea turtle conservation.

Tell us about how you both got interested in sea turtle conservation?

Caitlin: A few years ago, I did a sea turtle monitoring program for the Department of Biodiversity, Conservation and Attractions in Western Australia. It was probably one of the most magical experiences of my life. After that, I decided I only really wanted to work with sea turtles.

Nicolas: In 2010, in French Polynesia, I volunteered with a sea turtle protection project. I was moved by the experience, and when I came back to France, I knew I wanted to use my tech background to create something inspired by the trip.

How did these experiences lead you to create We Spot Turtles!?

Caitlin: There are seven species of sea turtle, and all are critically endangered. Or rather there’s not enough data on them to inform an accurate endangerment status. This means the needs of the species are going unmet and sea turtles are silently going extinct. Our inspiration is essentially to better track sea turtles so that conservation can be improved.

Nicolas: When I returned to France after monitoring sea turtles, I knew I wanted to make an app inspired by my experience. However, I had put the project on hold for a while. Then, when a friend sent me Caitlin’s social media post looking for a developer for a sea turtle conservation app, it re-ignited my inspiration, and we teamed up to make it together.

close up image of a turtle resting in a reef underwater

What does We Spot Turtles! do?

Caitlin: Essentially, members of the public upload images of sea turtles they spot – and even get to name them. Then, the app automatically geolocates, giving us a date and timestamp of when and where the sea turtle was located. This allows us to track turtles and improve our conservation efforts.

How do you use artificial intelligence in the app?

Caitlin: The advancements in AI in recent years have given us the opportunity to make a bigger impact than we would have been able to otherwise. The machine learning model that Nicolas created uses the facial scale and pigmentations of the turtles to not only identify its species, but also to give that sea turtle a unique code for tracking purposes. Then, if it is photographed by someone else in the future, we can see on the app where it's been spotted before.

How has Google Play supported your journey?

Caitlin: Launching our app on Google Play has allowed us to reach a global audience. We now have communities in Exmouth in Western Australia, Manly Beach in Sydney, and have 6 countries in total using our app already. Without Google Play, we wouldn't have the ability to connect on such a global scale.

Nicolas: I’m a mobile application developer and I use Google’s Flutter framework. I knew Google Play was a good place to release our title as it easily allows us to work on the platform. As a result, we’ve been able to make the app great.

Photo of Caitlin and Nicolas on the beach in Australia at sunset. Both are kneeling in the sand. Caitlin is using her phone to identify something in the distance, and gesturing to Nicolas who is looking in the same direction

What do you hope to achieve with We Spot Turtles!?

Caitlin: We Spot Turtles! puts data collection in the hands of the people. It’s giving everyone the opportunity to make an impact in sea turtle conservation. Because of this, we believe that we can massively alter and redefine conservation efforts and enhance people’s engagement with the natural world.

What are your plans for the future?

Caitlin: Nicolas and I have some big plans. We want to branch out into other species. We'd love to do whale sharks, birds, and red pandas. Ultimately, we want to achieve our goal of improving the conservation of various species and animals around the world.


Discover other inspiring app and game founders featured in #WeArePlay.




Cloud photos now available in the Android photo picker

Posted by Roxanna Aliabadi Walker – Product Manager

Available now with Google Photos

Our photo picker has always been the gateway to your local media library, providing a secure, date-sorted interface for users to grant apps access to selected images and videos. But now, we're taking it a step further by integrating cloud photos from your chosen cloud media app directly into the photo picker experience.

Moving image of the photo picker access

Unifying your media library

Backed-up photos, also known as "cloud photos," will now be merged with your local ones in the photo picker, eliminating the need to switch between apps. Additionally, any albums you've created in your cloud storage app will be readily accessible within the photo picker's albums tab. If your cloud media provider has a concept of “favorites,” they will be showcased prominently within the albums tab of the photo picker for easy access. This feature is currently rolling out with the February Google System Update to devices running Android 12 and above.

Available now with Google Photos, but open to all

Google Photos is already supporting this new feature, and our APIs are open to any cloud media app that qualifies for our pilot program. Our goal is to make accessing your lifetime of memories effortless, regardless of the app you prefer.

The Android photo picker will attempt to auto-select a cloud media app for you, but you can change or remove your selected cloud media app at any time from photo picker settings.

Image of Cloud media settings in photo picker settings

Migrate today for an enhanced, frictionless experience

The Android photo picker substantially reduces friction by not requiring any runtime permissions. If you switch from using a custom photo picker to the Android photo picker, you can offer this enhanced experience with cloud photos to your users, as well as reduce or entirely eliminate the overhead involved with acquiring and managing access to photos on the device. (Note that apps that don't need persistent and/or broad-scale access to photos, for example to set a profile picture, must adopt the Android photo picker in lieu of any sensitive file permissions to adhere to Google Play policy.)

The photo picker has been backported to Android 4.4 to make it easy to migrate without needing to worry about device compatibility. Access to cloud content will only be available for users running Android 12 and higher, but developers do not need to consider this when implementing the photo picker into their apps. To use the photo picker in your app, update the androidx.activity dependency to version 1.7.x or above and add the following code snippet:

// Registers a photo picker activity launcher in single-select mode.
val pickMedia = registerForActivityResult(PickVisualMedia()) { uri ->
    // Callback is invoked after the user selects a media item or closes the
    // photo picker.
    if (uri != null) {
        Log.d("PhotoPicker", "Selected URI: $uri")
    } else {
        Log.d("PhotoPicker", "No media selected")
    }
}


// Launch the photo picker and let the user choose images and videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageAndVideo))

// Launch the photo picker and let the user choose only images.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageOnly))

// Launch the photo picker and let the user choose only videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.VideoOnly))

More customization options are listed in our developer documentation.

Prompt users to update to your latest app version

Posted by Lidia Gaymond – Product Manager, Google Play

For years, Google Play has helped users enjoy the latest versions of your app through auto-updates or in-app updates. While most users update their apps this way, some may still be stuck on outdated, unsupported or broken versions of your app.

Today, we are introducing a new tool that will prompt these users to update, bringing them closer to the app experience you intended to deliver.

Play recovery tools allow you to prompt users running specific versions of your app to update every time they restart the app.

Image of side by side mobile device screens showing how the prompt to update may look to users
Note: Images are examples and subject to change

To use this new feature, log into Google Play Console and head to your Releases or to the App Bundle Explorer page, where you can select the app versions where you want to deliver the prompts. Alternatively, the feature is also available via the Play Developer API, and will soon be extended to allow you to target multiple app versions at once. Please note that the version you want to deploy the prompt to needs to be built as an app bundle.

You can then narrow your targeting criteria by country or Android version (if required), with no prior integration necessary.

Currently, over 50% of users are responding to the prompts, enabling more users to get the best experience of your apps.

After prompting users to update, you can use Play Console's recovery tools to edit your update configuration, view its progress, or cancel the recovery action altogether. Learn more about the feature here and start using it today!

What’s new in the Jetpack Compose January ’24 release

Posted by Ben Trengrove, Android Developer Relations Engineer

Today, as part of the Compose January ‘24 Bill of Materials, we’re releasing version 1.6 of Jetpack Compose, Android's modern, native UI toolkit that is used by apps such as Threads, Reddit, and Dropbox. This release largely focuses on performance improvements, as we continue to migrate modifiers and improve the efficiency of major parts of our API.

To use today’s release, upgrade your Compose BOM version to 2024.01.01:

implementation platform('androidx.compose:compose-bom:2024.01.01')

Performance

Performance continues to be our top priority, and this release of Compose has major performance improvements across the board. We are seeing an additional ~20% improvement in scroll performance and ~12% improvement to startup time in our benchmarks, and this is on top of the improvements from the August ‘23 release. As with that release, most apps will see these benefits just by upgrading to the latest version, with no other code changes needed.

The improvement to scroll performance and startup time comes from our continued focus on memory allocations and lazy initialization, to ensure the framework is only doing work when it has to. These improvements can be seen across all APIs in Compose, especially in text, clickable, Lazy lists, and graphics APIs, including vectors, and were made possible in part by the Modifier.Node refactor work that has been ongoing for multiple releases.

There is also new guidance for you to create your own custom modifiers with Modifier.Node.

Configuring the stability of external classes

Compose compiler 1.5.5 introduces a new compiler option to provide a configuration file for what your app considers stable. This option allows you to mark any class as stable, including your own modules, external library classes, and standard library classes, without having to modify these modules or wrap them in a stable wrapper class. Note that the standard stability contract applies; this is just another convenient method to let the Compose compiler know what your app should consider stable. For more information on how to use stability configuration, see our documentation.
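
As a sketch of the wiring (assuming the Kotlin Gradle DSL and a file named compose_stability.conf in the module directory; the file itself just lists fully qualified class names, one per line, with * wildcards allowed):

// build.gradle.kts (module): point the Compose compiler at a stability configuration file.
// compose_stability.conf might contain, for example:
//   java.time.LocalDateTime
//   com.example.data.models.*
android {
    kotlinOptions {
        freeCompilerArgs += listOf(
            "-P",
            "plugin:androidx.compose.compiler.plugins.kotlin:stabilityConfigurationPath=" +
                "${project.projectDir.absolutePath}/compose_stability.conf"
        )
    }
}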

Generated code performance

The code generated by the Compose compiler plugin has also been improved. Small tweaks in this code can lead to large performance improvements due to the fact the code is generated in every composable function. The Compose compiler tracks Compose state objects to know which composables to recompose when there is a change of value; however, many state values are only read once, and some state values are never read at all but still change frequently! This update allows the compiler to skip the tracking when it is not needed.

Compose compiler 1.5.6 also enables “intrinsic remember” by default. This mode transforms remember at compile time to take into account information we already have about any parameters of a composable that are used as a key to remember. This speeds up the calculation of determining if a remembered expression needs reevaluating, but also means if you place a breakpoint inside the remember function during debugging, it may no longer be called, as the compiler has removed the usage of remember and replaced it with different code.

Composables not being skipped

We are also investing in making the code you write more performant, automatically. We want to optimize for the code you intuitively write, removing the need to dive deep into Compose internals to understand why your composable is recomposing when it shouldn’t.

This release of Compose adds support for an experimental mode we are calling “strong skipping mode”. Strong skipping mode relaxes some of the rules about which changes can skip recomposition, moving the balance towards what developers expect. With strong skipping mode enabled, composables with unstable parameters can also skip recomposition if the same instances of objects are passed in to its parameters. Additionally, strong skipping mode automatically remembers lambdas in composition that capture unstable values, in addition to the current default behavior of remembering lambdas with only stable captures. Strong skipping mode is currently experimental and disabled by default as we do not consider it ready for production usage yet. We are evaluating its effects before aiming to turn it on by default in Compose 1.7. See our guidance to experiment with strong skipping mode and help us find any issues.

Text

Changes to default font padding

This release now makes the includeFontPadding setting false by default. includeFontPadding is a legacy property that adds extra padding based on font metrics at the top of the first line and bottom of the last line of a text. Making this setting default to false brings the default text layout more in line with common design tools, making it easier to match the design specifications generated. Upon upgrading to the January ‘24 release, you may see small changes in your text layout and screenshot tests. For more information about this setting, see the Fixing Font Padding in Compose Text blog post and the developer documentation.

Line height with includeFontPadding as false on the left and true on the right.
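
If you need the legacy metrics while you migrate layouts or screenshot tests, you can opt a specific text back in via PlatformTextStyle; here is a minimal sketch, assuming Material 3's Text and the experimental text API:

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.text.ExperimentalTextApi
import androidx.compose.ui.text.PlatformTextStyle
import androidx.compose.ui.text.TextStyle

// Restores the legacy includeFontPadding = true behavior for one Text while migrating;
// from the January '24 release the default is false.
@OptIn(ExperimentalTextApi::class)
@Composable
fun LegacyPaddedText(value: String) {
    Text(
        text = value,
        style = TextStyle(platformStyle = PlatformTextStyle(includeFontPadding = true))
    )
}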

Support for nonlinear font scaling

The January ‘24 release uses nonlinear font scaling for better text readability and accessibility. Nonlinear font scaling prevents large text elements on screen from scaling too large by applying a nonlinear scaling curve. This scaling strategy means that large text doesn't scale at the same rate as smaller text.

Drag and drop

Compose Foundation adds support for platform-level drag and drop, which allows for content to be dragged between apps on a device running in multi-window mode. The API is 100% compatible with the View APIs, which means a drag and drop started from a View can be dragged into Compose and vice versa. To use this API, see the code sample.

Moving image illustrating drag and drop feature

Additional features

Other features landed in this release include:

    • Support for LookaheadScope in Lazy lists.
    • A fix so that composables which have been deactivated but kept alive for reuse in a Lazy list are now filtered from semantics trees by default.
    • Spline-based keyframes in animations.
    • Support for selection by mouse, including text selection.

Get started!

We’re grateful for all of the bug reports and feature requests submitted to our issue tracker — they help us to improve Compose and build the APIs you need. Continue providing your feedback, and help us make Compose better!

Wondering what’s next? Check out our roadmap to see the features we’re currently thinking about and working on. We can’t wait to see what you build next!

Happy composing!

How This Indie Game Studio Launched Their First Game on Google Play

Posted by Scarlett Asuncion – Product Marketing Manager

Indie game developers Geoffrey Mugford and Samuli Pietikainen first connected online through their shared passion for game design, before joining forces to create their own studio No Devs. Looking for ways to grow as a team, they entered the Quickplay Game Jam hosted by Latinx in Gaming in partnership with Google Play. The 6-week competition, open to anyone globally, challenged participants to generate a game idea around the theme of ‘tradition’. The duo became one of 4 winners to receive a share of $80,000 to bring their game jam concept to life and launch it on Google Play.

Their winning game idea, Pilkki, has just launched in early access. It offers players a captivating claymation ice fishing adventure set in a serene atmosphere that celebrates Finnish culture. Intrigued by the game’s origins and unique gameplay, we chatted with one-half of No Devs, Geoffrey. He shares how his multicultural heritage and Samuli’s Finnish background inspired their game design, the lessons they’ve learned so far and their studio’s future plans.

Geoffrey Mugford (left); Samuli Pietikainen (right)

Tell us about your journey as a team and why you entered the Quickplay Game Jam.

We started making games together in May 2022. We talked about it for a year, but hadn’t taken the plunge, so a game jam was the perfect way of kickstarting our creative partnership. Our first game jam was a success so we decided to take it further and look for more game jam opportunities. As indie developers, balancing personal projects with financial stability is tough. Winning a prize in a game jam offers a chance to prototype an idea and potentially secure early funding for it. This game jam offered that opportunity whilst also promoting cultural diversity. Because of Samuli’s background, we were keen to make a game that embodies the Finnish mindset.

What inspired the creation of Pilkki and how did you shape the game to offer a unique cultural experience like ice fishing?

At first, we struggled with the game jam’s theme of 'tradition.' We were initially keen to make a traditional 'Day of the Dead' inspired game, but realized it didn't resonate enough with us after a couple of attempts, so we shifted gears. Coming from a multicultural background, we thought about blending cultures rather than a focus on one. We considered creating new traditions using deck builder or city-builder formats but found them too ambitious given the timeframe. We eventually turned our focus to Finland and its quirky traditions. Some of them, like eukonkanto (wife-carrying races) and tinanvalanta (tin melting in a sauna ladle) caught our attention, but we ultimately settled on ice fishing - a simple, unique and very Finnish activity that could suit mobile gaming. The challenge was innovating on it - we reimagined it as a physics-driven puzzle game where the player would control the hook as a pendulum, and that's how Pilkki came to be.

Gameplay of Pilkki

Can you highlight some of the learnings and adjustments you made along the way?

We only had 6 weeks to make the game, and had already spent 2 of them brainstorming. When we settled on our game idea, we had to be very careful with scope and, sometimes, make quick decisions without the opportunity for play-testing. Some of these decisions ended up being super fun for the players - others, not so much. Luckily we had a clear division of responsibilities - I was on game design and programming, Samuli on art, audio and game feel - so we could work smoothly in parallel and meet milestones efficiently.

The win condition was a challenging aspect to figure out during development. We wanted a calm and reflective experience, similar to a real-life analogue, so we avoided score systems and timers. With time running out to complete the game, we were unable to explore alternative options. As a result, our game jam entry ended up being a race against time to catch as many fish as possible. After the game jam ended, we revisited this and turned towards a more tranquil atmosphere, where the progression was driven by puzzles rather than scores.

How did the funding from the Quickplay Game Jam in partnership with Google Play contribute to the development of Pilkki beyond its initial prototype stage?

Pilkki is much larger in scope than anything we've attempted before. Without funding, we would have likely left it in its prototype stage without exploring the concept further. The Quickplay Game Jam allowed us to recognize the potential in the idea, and dedicate ourselves to turning it into the relaxing fishing experience it has become.

With the funding, we were able to dedicate 3 months full-time to the design and development of Pilkki. We were able to take a step back and really put some thought into how we would build a game that would continue growing post-release. On top of that, Samuli experimented with multiple styles and multi-media art - this is how he developed the beautiful claymation visuals that have become our unique selling point.

Are you excited about your future as a new indie game studio?

Yes, for sure! We love creating fun and innovative experiences for people, and we have both been dreaming about working on our own games full time. It's a long road ahead, but we're excited to keep the momentum. For now, we’re actively working on Pilkki and aiming to release a major game update in 2024. We're eager to see the reaction from our players.

Having our game on Google Play gives us access to new markets worldwide. We can't wait to see how the game grows and attracts new players, and how it introduces them to our quirky take on Finnish culture.

#WeArePlay | Learn how a childhood experience with an earthquake shaped Álvaro’s entrepreneurial journey

Posted by Leticia Lago – Developer Marketing

Being trapped inside a house following a major earthquake as a child motivated Álvaro to research and improve the outcomes of destructive, large-scale quakes in Mexico. Using SkyAlert technology, sensors detect and report warnings of incoming earthquakes, giving people valuable time to prepare and get to safety.

Álvaro shared his story in our latest film for #WeArePlay, which spotlights the founders and creatives behind inspiring apps and games on Google Play. We caught up with him to find out his motivations for SkyAlert, the impact the app’s had and what his future plans are.

What was the inspiration behind SkyAlert?

Being in Colima near the epicenter of a massive earthquake as a kid had a huge impact on me. I remember feeling powerless to nature and very vulnerable watching everything falling apart around me. I was struck by how quick and smart you had to be to get to a safe place in time. I remember hugging my family once it was over and looking towards the sea to watch out for an impending tsunami – which fortunately didn’t hit my region badly. It was at this moment that I became determined to find out what had caused this catastrophe and what could be done to prevent it being so destructive another time.

Through my research, I learned that Mexico sits on five tectonic plates and, as a result, it is particularly prone to earthquakes. In fact, there've been seven major quakes in the last seven years, with hundreds losing their lives. Reducing the threat of earthquakes is my number one goal and the motivation behind SkyAlert. The technology we’ve developed can detect the warning signs of an earthquake early on, deliver alerts to vulnerable people and hopefully save lives.

How does SkyAlert work exactly?

SkyAlert collects data from a network of sensors and translates that information into alerts. People can put their zip code in order to filter updates for their locality. We’re constantly investing in getting the most reliable and fast technology available so we can make the service as timely and effective as possible.

Did you always imagine you’d be an entrepreneur?

Since I was a kid I knew I wanted to be an entrepreneur. This was inspired by my grandfather who ran a large candy company with factories all over Mexico. However, what I really wanted, beyond just running my own company, was to have a positive social impact and change lives for the better: a feat I feel proud to have achieved with SkyAlert.

How is Google Play helping your app to grow?

Being on Google Play helps us to reach the maximum number of people. We’ve achieved some amazing numbers in the last 10 years through Google Play, with over 7 million downloads. With 35% of our income coming from Google Play, this reach has helped us invest in new technologies and sensors.

We also often receive advice from Google Play and they invite us to meetings to tell us how to do better and how to make the most of the platform. Google Play is a close partner that we feel really takes care of us.

What impact has SkyAlert had on the people of Mexico?

The biggest advantage of SkyAlert is that it can help them prepare for an earthquake. In 2017, we were able to notify people of a massive quake 12 seconds before it hit Mexico City. At least with those few seconds, many were able to get themselves to a safe place. Similarly, with a large earthquake in Oaxaca, we were able to give a warning of over a minute, allowing teachers to get students in schools away from infrastructure – saving kids’ lives.

Also, many find having SkyAlert on their phone gives them peace of mind, knowing they’ll have some warning before an earthquake strikes. This can be very reassuring.

What does the future look like for SkyAlert?

We’re working hard to expand our services into new risk areas like flooding, storms and wildfires. The hope is to become a global company that can deliver alerts on a variety of natural phenomena in countries around the world.

Read more about Álvaro and other inspiring app and game founders featured in #WeArePlay.


