Android Developers Blog

#WeArePlay | How two sea turtle enthusiasts are revolutionizing marine conservation

Posted by Leticia Lago – Developer Marketing

When environmental science student Caitlin returned home from a trip monitoring sea turtles in Western Australia, she was inspired to create a conservation tool that could improve tracking of the species. She connected with Nicolas, a French developer and fellow marine life enthusiast, to design their app We Spot Turtles!, which lets anyone support tracking efforts by uploading pictures of turtles they spot in the wild.

Caitlin and Nicolas shared their journey in our latest film for #WeArePlay, which showcases the amazing stories behind apps and games on Google Play. We caught up with the pair to find out more about their passion and how they are making strides towards advancing sea turtle conservation.

How did you both get interested in sea turtle conservation?

Caitlin: A few years ago, I did a sea turtle monitoring program for the Department of Biodiversity, Conservation and Attractions in Western Australia. It was probably one of the most magical experiences of my life. After that, I decided I only really wanted to work with sea turtles.

Nicolas: In 2010, in French Polynesia, I volunteered with a sea turtle protection project. I was moved by the experience, and when I came back to France, I knew I wanted to use my tech background to create something inspired by the trip.

How did these experiences lead you to create We Spot Turtles!?

Caitlin: There are seven species of sea turtle, and all are critically endangered. Or rather there’s not enough data on them to inform an accurate endangerment status. This means the needs of the species are going unmet and sea turtles are silently going extinct. Our inspiration is essentially to better track sea turtles so that conservation can be improved.

Nicolas: When I returned to France after monitoring sea turtles, I knew I wanted to make an app inspired by my experience. However, I had put the project on hold for a while. Then, when a friend sent me Caitlin’s social media post looking for a developer for a sea turtle conservation app, it re-ignited my inspiration, and we teamed up to make it together.

close up image of a turtle resting in a reef underwater

What does We Spot Turtles! do?

Caitlin: Essentially, members of the public upload images of sea turtles they spot – and even get to name them. Then, the app automatically geolocates each photo, giving us a date, time, and location for every sighting. This allows us to track turtles and improve our conservation efforts.

How do you use artificial intelligence in the app?

Caitlin: The advancements in AI in recent years have given us the opportunity to make a bigger impact than we would have been able to otherwise. The machine learning model that Nicolas created uses the facial scales and pigmentation of the turtles to not only identify each turtle's species, but also to give it a unique code for tracking purposes. Then, if it is photographed by someone else in the future, we can see on the app where it's been spotted before.

How has Google Play supported your journey?

Caitlin: Launching our app on Google Play has allowed us to reach a global audience. We now have communities in Exmouth in Western Australia, Manly Beach in Sydney, and have 6 countries in total using our app already. Without Google Play, we wouldn't have the ability to connect on such a global scale.

Nicolas: I’m a mobile application developer and I use Google’s Flutter framework. I knew Google Play was a good place to release our title as it easily allows us to work on the platform. As a result, we’ve been able to make the app great.

Photo of Caitlin and Nicolas on the beach in Australia at sunset. Both are kneeling in the sand. Caitlin is using her phone to identify something in the distance, and gesturing to Nicolas, who is looking in the same direction

What do you hope to achieve with We Spot Turtles!?

Caitlin: We Spot Turtles! puts data collection in the hands of the people. It’s giving everyone the opportunity to make an impact in sea turtle conservation. Because of this, we believe that we can massively alter and redefine conservation efforts and enhance people’s engagement with the natural world.

What are your plans for the future?

Caitlin: Nicolas and I have some big plans. We want to branch out into other species. We'd love to do whale sharks, birds, and red pandas. Ultimately, we want to achieve our goal of improving the conservation of various species and animals around the world.


Discover other inspiring app and game founders featured in #WeArePlay.




Cloud photos now available in the Android photo picker

Posted by Roxanna Aliabadi Walker – Product Manager

Available now with Google Photos

Our photo picker has always been the gateway to your local media library, providing a secure, date-sorted interface for users to grant apps access to selected images and videos. But now, we're taking it a step further by integrating cloud photos from your chosen cloud media app directly into the photo picker experience.

Moving image of the photo picker access

Unifying your media library

Backed-up photos, also known as "cloud photos," will now be merged with your local ones in the photo picker, eliminating the need to switch between apps. Additionally, any albums you've created in your cloud storage app will be readily accessible within the photo picker's albums tab. If your cloud media provider has a concept of “favorites,” they will be showcased prominently within the albums tab of the photo picker for easy access. This feature is currently rolling out with the February Google System Update to devices running Android 12 and above.

Available now with Google Photos, but open to all

Google Photos is already supporting this new feature, and our APIs are open to any cloud media app that qualifies for our pilot program. Our goal is to make accessing your lifetime of memories effortless, regardless of the app you prefer.

The Android photo picker will attempt to auto-select a cloud media app for you, but you can change or remove your selected cloud media app at any time from photo picker settings.

Image of Cloud media settings in photo picker settings

Migrate today for an enhanced, frictionless experience

The Android photo picker substantially reduces friction by not requiring any runtime permissions. If you switch from a custom photo picker to the Android photo picker, you can offer this enhanced cloud photos experience to your users, as well as reduce or entirely eliminate the overhead involved in acquiring and managing access to photos on the device. (Note that apps that don't need persistent and/or broad access to photos, for example to set a profile picture, must adopt the Android photo picker in lieu of any sensitive file permissions to adhere to Google Play policy.)

The photo picker has been backported to Android 4.4 to make it easy to migrate without needing to worry about device compatibility. Access to cloud content will only be available for users running Android 12 and higher, but developers do not need to handle this difference when implementing the photo picker in their apps. To use the photo picker in your app, update the androidx.activity dependency to version 1.7.0 or above and add the following code snippet:

// Registers a photo picker activity launcher in single-select mode.
val pickMedia = registerForActivityResult(PickVisualMedia()) { uri ->
    // Callback is invoked after the user selects a media item or closes the
    // photo picker.
    if (uri != null) {
        Log.d("PhotoPicker", "Selected URI: $uri")
    } else {
        Log.d("PhotoPicker", "No media selected")
    }
}


// Launch the photo picker and let the user choose images and videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageAndVideo))

// Launch the photo picker and let the user choose only images.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageOnly))

// Launch the photo picker and let the user choose only videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.VideoOnly))
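Beyond single-select, the picker also supports choosing several items at once. The sketch below mirrors the snippet above but registers the launcher with PickMultipleVisualMedia; the limit of five items is an arbitrary value for illustration, and the surrounding Activity context is assumed:

```kotlin
// Registers a photo picker activity launcher in multi-select mode.
// The maxItems value of 5 is illustrative; choose a limit that fits your use case.
val pickMultipleMedia =
    registerForActivityResult(PickMultipleVisualMedia(maxItems = 5)) { uris ->
        // Callback is invoked after the user selects media items or closes the
        // photo picker.
        if (uris.isNotEmpty()) {
            Log.d("PhotoPicker", "Number of items selected: ${uris.size}")
        } else {
            Log.d("PhotoPicker", "No media selected")
        }
    }

// Launch the photo picker and let the user choose only images.
pickMultipleMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageOnly))
```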

More customization options are listed in our developer documentation.

Prompt users to update to your latest app version

Posted by Lidia Gaymond – Product Manager, Google Play

For years, Google Play has helped users enjoy the latest versions of your app through auto-updates or in-app updates. While most users update their apps this way, some may still be stuck on outdated, unsupported or broken versions of your app.

Today, we are introducing a new tool that will prompt these users to update, bringing them closer to the app experience you intended to deliver.

Play recovery tools allow you to prompt users running specific versions of your app to update every time they restart the app.

Image of side by side mobile device screens showing how the prompt to update may look to users
Note: Images are examples and subject to change

To use this new feature, log into Google Play Console and head to your Releases or to the App Bundle Explorer page, where you can select the app versions where you want to deliver the prompts. Alternatively, the feature is also available via the Play Developer API, and will soon be extended to allow you to target multiple app versions at once. Please note that the version you want to deploy the prompt to needs to be built as an app bundle.

You can then narrow your targeting criteria by country or Android version (if required), with no prior integration necessary.

Currently, over 50% of users are responding to the prompts, enabling more users to get the best experience of your apps.

After prompting users to update, you can use Play Console's recovery tools to edit your update configuration, view its progress, or cancel the recovery action altogether. Learn more about the feature here and start using it today!

What’s new in the Jetpack Compose January ’24 release

Posted by Ben Trengrove, Android Developer Relations Engineer

Today, as part of the Compose January ‘24 Bill of Materials, we’re releasing version 1.6 of Jetpack Compose, Android's modern, native UI toolkit that is used by apps such as Threads, Reddit, and Dropbox. This release largely focuses on performance improvements, as we continue to migrate modifiers and improve the efficiency of major parts of our API.

To use today’s release, upgrade your Compose BOM version to 2024.01.01

implementation platform('androidx.compose:compose-bom:2024.01.01')

Performance

Performance continues to be our top priority, and this release of Compose has major performance improvements across the board. We are seeing an additional ~20% improvement in scroll performance and ~12% improvement to startup time in our benchmarks, and this is on top of the improvements from the August ‘23 release. As with that release, most apps will see these benefits just by upgrading to the latest version, with no other code changes needed.

The improvement to scroll performance and startup time comes from our continued focus on memory allocations and lazy initialization, to ensure the framework is only doing work when it has to. These improvements can be seen across all APIs in Compose, especially in text, clickable, Lazy lists, and graphics APIs, including vectors, and were made possible in part by the Modifier.Node refactor work that has been ongoing for multiple releases.

There is also new guidance for you to create your own custom modifiers with Modifier.Node.

Configuring the stability of external classes

Compose compiler 1.5.5 introduces a new compiler option to provide a configuration file for what your app considers stable. This option allows you to mark any class as stable, including your own modules, external library classes, and standard library classes, without having to modify these modules or wrap them in a stable wrapper class. Note that the standard stability contract applies; this is just another convenient method to let the Compose compiler know what your app should consider stable. For more information on how to use stability configuration, see our documentation.
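As a sketch of what such a configuration might look like (the package and class names below are hypothetical, and the file name is whatever you pass to the compiler option), the file is plain text with one class or wildcard pattern per line:

```
// Hypothetical stability configuration file, e.g. compose_stability.conf
// Treat a single external class as stable
com.example.network.UserDto
// Wildcards can cover a whole package
com.example.model.*
// Standard or third-party library classes can be listed too
kotlinx.datetime.LocalDate
```

The file is then supplied to the Compose compiler through its stability configuration plugin option in your Gradle setup; see the documentation linked above for the exact wiring.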

Generated code performance

The code generated by the Compose compiler plugin has also been improved. Small tweaks in this code can lead to large performance improvements because the code is generated in every composable function. The Compose compiler tracks Compose state objects to know which composables to recompose when a value changes; however, many state values are only read once, and some state values are never read at all but still change frequently! This update allows the compiler to skip the tracking when it is not needed.

Compose compiler 1.5.6 also enables “intrinsic remember” by default. This mode transforms remember at compile time to take into account information we already have about any parameters of a composable that are used as a key to remember. This speeds up the calculation of determining if a remembered expression needs reevaluating, but also means if you place a breakpoint inside the remember function during debugging, it may no longer be called, as the compiler has removed the usage of remember and replaced it with different code.

Composables not being skipped

We are also investing in making the code you write more performant, automatically. We want to optimize for the code you intuitively write, removing the need to dive deep into Compose internals to understand why your composable is recomposing when it shouldn’t.

This release of Compose adds support for an experimental mode we are calling “strong skipping mode”. Strong skipping mode relaxes some of the rules about which changes can skip recomposition, moving the balance towards what developers expect. With strong skipping mode enabled, composables with unstable parameters can also skip recomposition if the same instances of objects are passed in to their parameters. Additionally, strong skipping mode automatically remembers lambdas in composition that capture unstable values, in addition to the current default behavior of remembering lambdas with only stable captures. Strong skipping mode is currently experimental and disabled by default as we do not consider it ready for production usage yet. We are evaluating its effects before aiming to turn it on by default in Compose 1.7. See our guidance to experiment with strong skipping mode and help us find any issues.

Text

Changes to default font padding

This release now makes the includeFontPadding setting false by default. includeFontPadding is a legacy property that adds extra padding based on font metrics at the top of the first line and bottom of the last line of a text. Making this setting default to false brings the default text layout more in line with common design tools, making it easier to match the design specifications generated. Upon upgrading to the January ‘24 release, you may see small changes in your text layout and screenshot tests. For more information about this setting, see the Fixing Font Padding in Compose Text blog post and the developer documentation.
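If the new default shifts your layouts and you need time to migrate designs or screenshot tests, the legacy behavior can still be requested per text style through PlatformTextStyle. This is a minimal sketch; the property is deprecated, so expect a deprecation warning:

```kotlin
@Composable
fun LegacyPaddingText() {
    // PlatformTextStyle(includeFontPadding = ...) is deprecated and only
    // intended as a temporary compatibility escape hatch during migration.
    @Suppress("DEPRECATION")
    val legacyPaddingStyle = TextStyle(
        platformStyle = PlatformTextStyle(includeFontPadding = true)
    )
    Text(
        text = "Hello Compose",
        style = legacyPaddingStyle
    )
}
```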

Line height with includeFontPadding as false on the left and true on the right.

Support for nonlinear font scaling

The January ‘24 release uses nonlinear font scaling for better text readability and accessibility. Nonlinear font scaling prevents large text elements on screen from scaling too large by applying a nonlinear scaling curve. This scaling strategy means that large text doesn't scale at the same rate as smaller text.

Drag and drop

Compose Foundation adds support for platform-level drag and drop, which allows for content to be dragged between apps on a device running in multi-window mode. The API is 100% compatible with the View APIs, which means a drag and drop started from a View can be dragged into Compose and vice versa. To use this API, see the code sample.

Moving image illustrating drag and drop feature

Additional features

Other features landed in this release include:

    • Support for LookaheadScope in Lazy lists.
    • Deactivated composables that are kept alive for reuse in Lazy lists are now filtered from semantics trees by default.
    • Spline-based keyframes in animations.
    • Support for selection by mouse, including text selection.

Get started!

We’re grateful for all of the bug reports and feature requests submitted to our issue tracker — they help us to improve Compose and build the APIs you need. Continue providing your feedback, and help us make Compose better!

Wondering what’s next? Check out our roadmap to see the features we’re currently thinking about and working on. We can’t wait to see what you build next!

Happy composing!

How This Indie Game Studio Launched Their First Game on Google Play

Posted by Scarlett Asuncion – Product Marketing Manager

Indie game developers Geoffrey Mugford and Samuli Pietikainen first connected online through their shared passion for game design, before joining forces to create their own studio No Devs. Looking for ways to grow as a team, they entered the Quickplay Game Jam hosted by Latinx in Gaming in partnership with Google Play. The 6-week competition, open to anyone globally, challenged participants to generate a game idea around the theme of ‘tradition’. The duo became one of 4 winners to receive a share of $80,000 to bring their game jam concept to life and launch it on Google Play.

Their winning game idea, Pilkki, has just launched in early access. It offers players a captivating claymation ice fishing adventure set in a serene atmosphere that celebrates Finnish culture. Intrigued by the game’s origins and unique gameplay, we chatted with one-half of No Devs, Geoffrey. He shares how his multicultural heritage and Samuli’s Finnish background inspired their game design, the lessons they’ve learned so far and their studio’s future plans.

Headshots of Geoffrey Mugford (left) and Samuli Pietikainen (right), smiling
Geoffrey Mugford (left); Samuli Pietikainen (right)

Tell us about your journey as a team and why you entered the Quickplay Game Jam.

We started making games together in May 2022. We talked about it for a year, but hadn’t taken the plunge, so a game jam was the perfect way of kickstarting our creative partnership. Our first game jam was a success so we decided to take it further and look for more game jam opportunities. As indie developers, balancing personal projects with financial stability is tough. Winning a prize in a game jam offers a chance to prototype an idea and potentially secure early funding for it. This game jam offered that opportunity whilst also promoting cultural diversity. Because of Samuli’s background, we were keen to make a game that embodies the Finnish mindset.

What inspired the creation of Pilkki and how did you shape the game to offer a unique cultural experience like ice fishing?

At first, we struggled with the game jam’s theme of 'tradition.' We were initially keen to make a traditional 'Day of the Dead' inspired game, but realized it didn't resonate enough with us after a couple of attempts, so we shifted gears. Coming from a multicultural background, we thought about blending cultures rather than a focus on one. We considered creating new traditions using deck builder or city-builder formats but found them too ambitious given the timeframe. We eventually turned our focus to Finland and its quirky traditions. Some of them, like eukonkanto (wife-carrying races) and tinanvalanta (tin melting in a sauna ladle) caught our attention, but we ultimately settled on ice fishing - a simple, unique and very Finnish activity that could suit mobile gaming. The challenge was innovating on it - we reimagined it as a physics-driven puzzle game where the player would control the hook as a pendulum, and that's how Pilkki came to be.

side by side photos of Pilkki gameplay
Gameplay of Pilkki

Can you highlight some of the learnings and adjustments you made along the way?

We only had 6 weeks to make the game, and had already spent 2 of them brainstorming. When we settled on our game idea, we had to be very careful with scope and, sometimes, make quick decisions without the opportunity for play-testing. Some of these decisions ended up being super fun for the players - others, not so much. Luckily we had a clear division of responsibilities - I was on game design and programming, Samuli on art, audio and game feel - so we could work smoothly in parallel and meet milestones efficiently.

The win condition was a challenging aspect to figure out during development. We wanted a calm and reflective experience, similar to a real-life analogue, so we avoided score systems and timers. With time running out to complete the game, we were unable to explore alternative options. As a result, our game jam entry ended up being a race against time to catch as many fish as possible. After the game jam ended, we revisited this and turned towards a more tranquil atmosphere, where the progression was driven by puzzles rather than scores.

How did the funding from the Quickplay Game Jam in partnership with Google Play contribute to the development of Pilkki beyond its initial prototype stage?

Pilkki is much larger in scope than anything we've attempted before. Without funding, we would have likely left it in its prototype stage without exploring the concept further. The Quickplay Game Jam allowed us to recognize the potential in the idea, and dedicate ourselves to turning it into the relaxing fishing experience it has become.

With the funding, we were able to dedicate 3 months full-time to the design and development of Pilkki. We were able to take a step back and really put some thought into how we would build a game that would continue growing post-release. On top of that, Samuli experimented with multiple styles and multi-media art - this is how he developed the beautiful claymation visuals that have become our unique selling point.

Are you excited about your future as a new indie game studio?

Yes, for sure! We love creating fun and innovative experiences for people, and we have both been dreaming about working on our own games full time. It's a long road ahead, but we're excited to keep the momentum. For now, we’re actively working on Pilkki and aiming to release a major game update in 2024. We're eager to see the reaction from our players.

Having our game on Google Play gives us access to new markets worldwide. We can't wait to see how the game grows and attracts new players, and how it introduces them to our quirky take on Finnish culture.

#WeArePlay | Learn how a childhood experience with an earthquake shaped Álvaro’s entrepreneurial journey

Posted by Leticia Lago – Developer Marketing

Being trapped inside a house following a major earthquake as a child motivated Álvaro to research destructive, large-scale quakes in Mexico and improve their outcomes. SkyAlert's network of sensors detects incoming earthquakes and sends out warnings, giving people valuable time to prepare and get to safety.

Álvaro shared his story in our latest film for #WeArePlay, which spotlights the founders and creatives behind inspiring apps and games on Google Play. We caught up with him to find out his motivations for SkyAlert, the impact the app’s had and what his future plans are.

What was the inspiration behind SkyAlert?

Being in Colima near the epicenter of a massive earthquake as a kid had a huge impact on me. I remember feeling powerless to nature and very vulnerable watching everything falling apart around me. I was struck by how quick and smart you had to be to get to a safe place in time. I remember hugging my family once it was over and looking towards the sea to watch out for an impending tsunami – which fortunately didn’t hit my region badly. It was at this moment that I became determined to find out what had caused this catastrophe and what could be done to prevent it being so destructive another time.

Through my research, I learned that Mexico sits on five tectonic plates and, as a result, it is particularly prone to earthquakes. In fact, there've been seven major quakes in the last seven years, with hundreds losing their lives. Reducing the threat of earthquakes is my number one goal and the motivation behind SkyAlert. The technology we’ve developed can detect the warning signs of an earthquake early on, deliver alerts to vulnerable people and hopefully save lives.

How does SkyAlert work exactly?

SkyAlert collects data from a network of sensors and translates that information into alerts. People can enter their zip code to filter updates for their locality. We’re constantly investing in getting the most reliable and fast technology available so we can make the service as timely and effective as possible.

Did you always imagine you’d be an entrepreneur?

Since I was a kid I knew I wanted to be an entrepreneur. This was inspired by my grandfather who ran a large candy company with factories all over Mexico. However, what I really wanted, beyond just running my own company, was to have a positive social impact and change lives for the better: a feat I feel proud to have achieved with SkyAlert.

How is Google Play helping your app to grow?

Being on Google Play helps us to reach the maximum number of people. We’ve achieved some amazing numbers in the last 10 years through Google Play, with over 7 million downloads. With 35% of our income coming from Google Play, this reach has helped us invest in new technologies and sensors.

We also often receive advice from Google Play and they invite us to meetings to tell us how to do better and how to make the most of the platform. Google Play is a close partner that we feel really takes care of us.

What impact has SkyAlert had on the people of Mexico?

The biggest advantage of SkyAlert is that it can help them prepare for an earthquake. In 2017, we were able to notify people of a massive quake 12 seconds before it hit Mexico City. At least with those few seconds, many were able to get themselves to a safe place. Similarly, with a large earthquake in Oaxaca, we were able to give a warning of over a minute, allowing teachers to get students in schools away from infrastructure – saving kids’ lives.

Also, many find having SkyAlert on their phone gives them peace of mind, knowing they’ll have some warning before an earthquake strikes. This can be very reassuring.

What does the future look like for SkyAlert?

We’re working hard to expand our services into new risk areas like flooding, storms and wildfires. The hope is to become a global company that can deliver alerts on a variety of natural phenomena in countries around the world.

Read more about Álvaro and other inspiring app and game founders featured in #WeArePlay.




A New Approach to Real-Money Games on Google Play

Posted by Karan Gambhir – Director, Global Trust and Safety Partnerships

As a platform, we strive to help developers responsibly build new businesses and reach wider audiences across a variety of content types and genres. In response to strong demand, in 2021 we began onboarding a wider range of real-money gaming (RMG) apps in markets with pre-existing licensing frameworks. Since then, this app category has continued to flourish with developers creating new RMG experiences for mobile.

To ensure Google Play keeps up with the pace of developer innovation, while promoting user safety, we’ve since conducted several pilot programs to determine how to support more RMG operators and game types. For example, many developers in India were eager to bring RMG apps to more Android users, so we launched a pilot program, starting with Rummy and Daily Fantasy Sports (DFS), to understand the best way to support their businesses.

Based on the learnings from the pilots and positive feedback from users and developers, Google Play will begin supporting more RMG apps this year, including game types and operators not covered by an existing licensing framework. We’ll launch this expanded RMG support in June to developers for their users in India, Mexico, and Brazil, and plan to expand to users in more countries in the future.

We’re pleased that this new approach will provide new business opportunities to developers globally while continuing to prioritize user safety. It also enables developers currently participating in RMG pilots in India and Mexico to continue offering their apps on Play.

    • India pilot: For developers in the Google Play Pilot Program for distributing DFS and Rummy apps to users in India, we are extending the grace period for pilot apps to remain on Google Play until June 30, 2024 when the new policy will take effect. After that time, developers can distribute RMG apps on Google Play to users in India, beyond DFS and Rummy, in compliance with local laws and our updated policy.
    • Mexico pilot: For developers in the Google Play Pilot Program for DFS in Mexico, the pilot will end as scheduled on June 30, 2024, at which point developers can distribute RMG apps on Google Play to users in Mexico, beyond DFS, in compliance with local laws and our updated policy.

Google Play’s existing developer policies supporting user safety, such as requiring age-gating to limit RMG experiences to adults and requiring developers to use geo-gating so that RMG apps are offered only where legal, remain unchanged, and we’ll continue to strengthen them. In addition, Google Play will continue other key user safety and transparency efforts, such as our expanded developer verification mechanisms.

With this policy update, we will also be evolving our service fee model for RMG to reflect the value Google Play provides and to help sustain the Android and Play ecosystems. We are working closely with developers to ensure our new approach reflects the unique economics and various developer earning models of this industry. We will have more to share in the coming months on our new policy and future expansion plans.

For developers already involved in the real-money gaming space, or those looking to expand their involvement, we hope this helps you prepare for the upcoming policy change. As Google Play evolves our support of RMG around the world, we look forward to helping you continue to delight users, grow your businesses, and launch new game types in a safe way.

What’s new with Google Cast?

Posted by Meher Vurimi, Product Manager

Since we launched Google Cast in 2013, we've been working to bring casting capabilities to more apps and devices. We have come a long way. Now, users can cast to many new devices, like TVs, speakers, smart displays, and even the latest Pixel Tablet. We are very excited to launch new features that make it more seamless to cast on Android.

Output Switcher

Moving image of output switcher showing various device categories
Figure 1: Output Switcher showing various device categories

With Output Switcher, Android makes it easy to move media between devices: from phones to TVs, tablets, speakers, and smart displays. Output Switcher is easily accessible from the Android System UI and aims to bring cross-device transfer and control for different technical protocols together in one place. With Output Switcher 2.0 on Android U, you also get improved volume control, device categories, and support for devices with custom protocols.

More information can be found in the Google Cast developer guide and the MediaRouter documentation.

    • Enable Output Switcher in AndroidManifest.xml
<application>
    ...
    <receiver
         android:name="androidx.mediarouter.media.MediaTransferReceiver"
         android:exported="true">
    </receiver>
    ...
</application>

    • Update SessionManagerListener for background casting
class MyService : Service() {
    private var castContext: CastContext? = null
    // sessionManagerListener is a SessionManagerListener<CastSession>
    // implementation defined elsewhere in the service.

    override fun onCreate() {
        super.onCreate()
        castContext = CastContext.getSharedInstance(this)
        castContext
            ?.sessionManager
            ?.addSessionManagerListener(sessionManagerListener, CastSession::class.java)
    }

    override fun onDestroy() {
        castContext
            ?.sessionManager
            ?.removeSessionManagerListener(sessionManagerListener, CastSession::class.java)
        super.onDestroy()
    }

    override fun onBind(intent: Intent?): IBinder? = null
}

    • Support Remote-to-Local playback
class MySessionTransferCallback : SessionTransferCallback() {
    override fun onTransferring(@SessionTransferCallback.TransferType transferType: Int) {
        // Perform necessary steps prior to onTransferred.
    }

    override fun onTransferred(
        @SessionTransferCallback.TransferType transferType: Int,
        sessionState: SessionState?
    ) {
        if (transferType == SessionTransferCallback.TRANSFER_TYPE_FROM_REMOTE_TO_LOCAL) {
            // The remote stream has been transferred to the local device.
            // Retrieve information from the SessionState to continue playback
            // on the local player.
        }
    }

    override fun onTransferFailed(
        @SessionTransferCallback.TransferType transferType: Int,
        @SessionTransferCallback.TransferFailedReason transferFailedReason: Int
    ) {
        // Handle transfer failure.
    }
}

Cast to devices nearby

Figure 2: Bring your Android phone close to the docked Pixel Tablet to transfer media

It will soon be possible to cast to nearby devices in a whole new way when you have a Pixel Pro phone and a docked Pixel Tablet. Users can transfer music that's playing on their Pixel Pro phone to a docked Pixel Tablet just by bringing the phone close to the docked tablet. Similarly, they can transfer the music back to their phone from a docked Pixel Tablet just by bringing the phone close to the tablet. This feature requires Output Switcher integration as a prerequisite.

Cast from short-form video apps

Figure 3: Enabling and disabling autoplay for short-form content (autoplay is enabled by default)

Short-form content is extremely popular and growing in use. Google Cast can make it easy for users to watch their favorite short-form content on TVs or other Cast-enabled devices, and you can now easily extend Google Cast support to your apps. We've put together the following guidelines to help you provide a great experience for your users.

Cast from your phone

Ensure that the Google Cast icon is prominently displayed in the top right corner of every screen with playable content. Users automatically understand they can cast media to a TV just by seeing the Cast icon.

Cast with autoplay

When autoplay is enabled, playback automatically transitions to the next video without any user intervention. Users also have the option to disable autoplay in order to cast a specific video.

Persistent Cast icon

Figure 4: Cast icon is shown even if the sender device is not connected to Wi-Fi, showcasing an error message for users to troubleshoot if no devices are found.

We've heard feedback that when users don't see the Cast icon, they assume their Chromecast built-in devices haven't been discovered. To improve user experience and discovery, we have introduced the "persistent Cast icon". With this support, users will see the Cast icon whenever they need it, and can get better help and guidance on why a specific device isn't showing up. In addition, we've updated when device discovery starts. More information can be found in the Google Cast Developer Guide.

Shaka Player

For any Web Receiver applications streaming HLS content, we recommend migrating to Shaka Player for playback. The current player (MPL) will no longer receive feature updates. As a result, the Web Receiver SDK has increased support for HLS playback using Shaka Player across device targets and has introduced an opt-in flag to enable it. Refer to the Shaka Player migration guide hosted on the DevSite for more information and implementation details.

To opt in to Shaka Player for HLS content, use the following snippet in your Google Cast receiver application:

const context = cast.framework.CastReceiverContext.getInstance();

const castReceiverOptions = new cast.framework.CastReceiverOptions();
castReceiverOptions.useShakaForHls = true;

context.start(castReceiverOptions);

Cast to new devices

Figure 5: Casting to LG TVs as a first-time user

We have been continuously working with various OEMs to bring Chromecast built-in to new devices. Last year, we launched Chromecast built-in to new speakers, while also introducing the receiver support on docked Pixel Tablets.

As always, Google TVs come with Chromecast built-in, including the new Hisense ULED and ULED X Series, latest TCL Q Class models, and new TCL QM7 line. In fact, there are now over 220 million monthly active Google TV and other Android TV OS devices, and we’re just getting started. More devices are launching with Chromecast built-in, like the 2024 LG TV series.

Thank you for creating excellent apps across devices in 2023!

Posted by Anirudh Dewani, Director of Android Developer Relations

Hello Android Developers,

As we approach the end of 2023, I wanted to take a moment to reflect on all that we've accomplished together as a community, and send a huge *thank you* for all of your work!

It's been an incredible year for Android, with many new features and improvements released as part of the platform as well as many new delightful app experiences crafted and delivered by you, all for the benefit of our users across the world. Here are just a few of the highlights:

    • The release of the feature-packed and highly performant Android 14, our most ambitious release to date.
    • The incredible momentum on large screens and Wear OS, fueled by the hardware innovations of device makers and by the great app experiences you build for users.
    • The growth of Compose from a mobile developer toolkit to Compose Everywhere, helping you build excellent apps for mobile, tablets, Wear, and TV.
    • And the growth of the entire Android developer community around the world, and the millions of amazing apps you build for users!

I'm so proud of everything we've achieved together this year!

Your hard work and dedication continue to make Android the best mobile platform in the world, and I want to thank you for being a part of this community. Your contributions are invaluable, and I'm grateful for your continued support.

Thanks again for all that you do, and we can’t wait to see what you build next year!

Best,
Anirudh Dewani
Director, Android Developer Relations


Increase your app’s availability across device types

Posted by Alex Vanyo – Developer Relations Engineer

TL;DR: Remove unnecessary feature requirements that prevent users from downloading your app on devices that don’t support the features. Automate tracking feature requirements and maximize app availability with badging!

Required features reduce app availability

<uses-feature> is an app manifest element that specifies whether your app depends on a hardware or software feature. By default, <uses-feature> specifies that a feature is required. To indicate that the feature is optional, you must add the android:required="false" attribute.

Google Play filters which apps are available to download based on required features. If the user’s device doesn’t support some hardware or software feature, then an app that requires that feature won’t be available for the user to download.

<uses-permission>, another app manifest element, complicates things by implicitly requiring features for permissions such as CAMERA or BLUETOOTH (see Permissions that imply feature requirements). The initial declared orientations for your activities can also implicitly require hardware features.
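For example, declaring the CAMERA permission implicitly requires the android.hardware.camera and android.hardware.camera.autofocus features. A minimal manifest sketch of declaring the permission while explicitly marking the implied features as optional:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- Declaring CAMERA implicitly requires the camera features below... -->
    <uses-permission android:name="android.permission.CAMERA" />

    <!-- ...so explicitly mark them optional to keep the app available
         on devices without a camera. -->
    <uses-feature
        android:name="android.hardware.camera"
        android:required="false" />
    <uses-feature
        android:name="android.hardware.camera.autofocus"
        android:required="false" />
</manifest>
```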

The system determines implicitly required features after merging all modules and dependencies, so it may not be clear to you which features your app ultimately requires. You might not even be aware when the list of required features has changed. For example, integrating a new dependency into your app might introduce a new required feature. Or the integration might request additional permissions, and the permissions could introduce new, implicitly required features.

This behavior has been around for a while, but Android has changed a lot over the years. Android apps now run on phones, foldables, tablets, laptops, cars, TVs and watches, and these devices are more varied than ever. Some devices don’t have telephony services, some don’t have touchscreens, some don’t have cameras.

Expectations based on permissions have also changed. With runtime permissions, a <uses-permission> declaration in the manifest no longer guarantees that your app will be granted that permission. Users can choose to deny access to hardware in favor of other ways to interact with the app. For example, instead of giving an app permission to access the device’s location, a user may prefer to always search for a particular location instead.

Banking apps shouldn’t require the device to have an autofocusing camera for check scanning. They shouldn’t specify that the camera must be a front or rear camera or that the device has a camera at all! It should be enough to allow the user to upload a picture of a check from another source.

Apps should support keyboard navigation and mouse input for accessibility and usability reasons, so strictly requiring a hardware touchscreen should not be necessary.

Apps should support both landscape and portrait orientations, so they shouldn't require that the screen be landscape-oriented or portrait-oriented. For example, screens built in to cars may be in a fixed landscape orientation. Even if an app supports both orientations, it could be unnecessarily requiring that the device support portrait use, which would exclude those cars.

Determine your app’s required features

You can use aapt2 to output information about your APK, including the explicitly and implicitly required features. The logic matches how the Play Store filters app availability.

aapt2 dump badging <path_to_.apk>

In the Play Console, you can also check which devices are being excluded from accessing your app.

Increase app availability by making features optional

Most apps should not strictly require hardware and software features. There are few guarantees that the user will allow using that feature in the first place, and users expect to be able to use all parts of your app in the way they see fit. To increase your app’s availability across form factors:

    • Provide alternatives in case the feature is not available, ensuring your app doesn’t need the feature to function.
    • Add android:required="false" to existing <uses-feature> tags to mark the feature as not required (or remove the tag entirely if the app no longer uses a feature).
    • Add a <uses-feature> tag with android:required="false" for any features implicitly required by declared permissions that imply feature requirements.

Prevent regressions with CI and badging

To guard against regressions caused by inadvertently adding a new feature requirement that reduces device availability, automate the task of determining your app's features as part of your build system. By storing the badging output of the aapt2 tool in a text file and checking the file into version control, you can track all declared permissions and explicitly and implicitly required features from your final universal APK. This includes all features and permissions included by transitive dependencies, in addition to your own.

You can automate badging as part of your continuous integration setup by setting up three Gradle tasks for each variant of your app you want to validate. Using release as an example, create these three tasks:

    • generateReleaseBadging – Generates the badging file from the universal APK using the aapt2 executable. The output of this task (the badging information) is used for the following two tasks.
    • updateReleaseBadging – Copies the generated badging file into the main project directory. The file is checked into source control as a golden badging file.
    • checkReleaseBadging – Validates the generated badging file against the golden badging file.
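The three tasks above can be sketched in Gradle's Kotlin DSL. This is a minimal sketch, not the exact Now in Android setup: the APK path, file names, and the releaseUniversalApk dependency are assumptions about your build configuration, and checkReleaseBadging here simply compares the two files:

```kotlin
// build.gradle.kts (illustrative sketch; assumes a releaseUniversalApk task
// producing app-release-universal.apk, and aapt2 available on the PATH)
val generateReleaseBadging by tasks.registering(Exec::class) {
    dependsOn("releaseUniversalApk")
    val apk = layout.buildDirectory
        .file("outputs/universal_apk/release/app-release-universal.apk")
    val badging = layout.buildDirectory.file("badging/release-badging.txt")
    commandLine("aapt2", "dump", "badging", apk.get().asFile.path)
    doFirst {
        // Capture the badging dump into the generated badging file.
        standardOutput = badging.get().asFile
            .apply { parentFile.mkdirs() }
            .outputStream()
    }
}

val updateReleaseBadging by tasks.registering(Copy::class) {
    dependsOn(generateReleaseBadging)
    from(layout.buildDirectory.file("badging/release-badging.txt"))
    into(projectDir) // golden badging file, checked into source control
}

val checkReleaseBadging by tasks.registering {
    dependsOn(generateReleaseBadging)
    doLast {
        val generated = layout.buildDirectory
            .file("badging/release-badging.txt").get().asFile.readText()
        val golden = file("release-badging.txt").readText()
        check(generated == golden) {
            "Badging changed; run updateReleaseBadging and review the diff."
        }
    }
}
```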

CI should run checkReleaseBadging to verify that the checked-in golden badging file still matches the generated badging file for the current code. If code changes or dependency updates have caused the badging file to change in any way, CI fails.

Failing CI due to adding a new permission and required feature without updating the badging file.

When changes are intentional, run updateReleaseBadging to update the golden badging file and check it into source control again. The change will then surface in code review, where reviewers can validate that the badging changes are expected.

Updated golden badging file for review with additional permission and implied required feature.

CI-automated badging guards against changes inadvertently causing a new feature to be required, which would reduce availability of the app.

For a complete working example of a CI system verifying the badging file, check out the setup in the Now in Android app.

Keep features optional

Android devices are continually becoming more varied, with users expecting a great experience from your Android app regardless of the type of device they’re using. While some software or hardware features might be essential to your app’s function, in many cases they should not be strictly required, needlessly preventing some users from downloading your app.

Use the badging output from aapt2 to check which features your app requires, and use the Play Console to verify which devices the requirements are preventing from downloading your app. You can automatically check your app’s badging in CI and catch regressions.

Bottom line: If you don’t absolutely need a feature for your entire app to function, make the feature optional to ensure your app’s availability to the greatest number of devices and users.

Learn more by checking out our developer guide.