
How to optimize your Android app for large screens (And what NOT to do!)

Posted by the Android team

Large foldables, tablets, and desktop devices like Chromebooks – with more active large screen Android devices each year, it’s more important than ever for apps to provide their users with a seamless experience on large screens. These devices offer more screen space, and users expect apps to do more with that space. We’ve also seen that apps enjoy better business metrics on these devices when they do the work to support them.

These devices can also be used in different places and in different ways than we might expect on a handset. For example, foldables can be used in tabletop mode, users may sit further away from a desktop display, and many large screen devices may be used with a mouse and keyboard.

These differences introduce new things to consider. For example:
  • Can a user reach the most important controls when using your app with two hands on a tablet?
  • Does all of your app’s functionality work with a keyboard and mouse?
  • Does your app’s camera preview have the right orientation regardless of the position of the device?
[Image: differentiated experiences across large screen devices]

Large Screen Design and Quality

Defining great design and quality on large screens can be difficult, because different apps will have different ways to excel. You know your product best, so it’s important to use your app on large screen devices and reflect on what will provide the best experience. If you don’t have access to a large screen device, try one of the foldable, desktop, or tablet virtual devices.

Google also provides resources throughout the development process to help as you optimize your app. If you’re looking for design guidance, there are thoughtful design resources like the large screen Material Design guidance and ready-to-use compositions like the Canonical layouts. For inspiration, there are great examples of a variety of different apps in the large screens gallery. If you’re looking for a structured way to approach large screen quality, the Large screen app quality guidelines provide a straightforward checklist and a set of tests to give you confidence that your app is ready for large screens.


Dos and Don’ts of Optimizing for Large Screens

Whether you already have beautiful large screen designs or not, we want to highlight some helpful tips and common mistakes to avoid when optimizing your app for large screens.

Don’t: assume exclusive access to resources

  • Don’t assume you have exclusive access to hardware resources like the camera. Large screens commonly have more than one app active at a time, and those other apps may try to access the same resources.
  • This means you should test your app running side by side with other apps, and never assume a resource is available at any given time.

Do: handle hardware access gracefully

  • Check for hardware resources like the camera before trying to use them. Remember that hardware peripherals can be added and removed at any time via USB.
  • Fail gracefully when access to a given resource isn’t available at runtime.
try {
    // Attempt to use the camera
    ...
} catch (e: CameraAccessException) {
    // Fail gracefully if the camera isn't currently available
    e.message?.let { Log.e(TAG, it) }
}

Do: respond appropriately to lifecycle events

  • Your app may still be visible during onPause(), especially when multiple apps are onscreen, so you need to keep media playing and your UI fresh until onStop() is called.

Don’t: stop your app’s UI in onPause()

override fun onPause() {
    // DON'T clean up resources here.
    // Your app can still be visible.
    super.onPause()
}
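Conversely, cleanup belongs in onStop(), once your app is no longer visible. A minimal sketch (the player object here is a hypothetical media player owned by the Activity, not code from the post):

override fun onStop() {
    // DO pause playback and release resources here,
    // once your app is no longer visible.
    player.pause()
    super.onStop()
}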

Don’t: rely on device-type booleans like “isTablet”

  • In the past, a common pattern was to use the screen width to compute a boolean like “isTablet” and change behavior based on the kind of device the app is running on, but this approach is fragile. The core problem is that it relies on a proxy to determine the device type, and those proxies are error-prone. For example, if you decide a device is a tablet because it has a large display when your app launches, your app can behave incorrectly when its window is resized to not take up the full screen. Even if your device-type boolean responds to configuration changes, unfolding a foldable would change your experience in a way that couldn’t be undone until another configuration change occurs, such as refolding the device.

Do: work to replace existing uses of device-type booleans with the right approach

Query for the information about the device that’s necessary for what you’re trying to accomplish. For example:

  • If you’re using device-type booleans to adapt your layout, use WindowSizeClasses instead. The library has support for both Views and for Jetpack Compose, and it makes it clear and easy to adapt your UI to pre-defined breakpoints.
// androidx.compose.material3.windowsizeclass.WindowSizeClass
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            val windowSizeClass = calculateWindowSizeClass(this)
            WindowSizeClassDisplay(windowSizeClass)
        }
    }
}

@Composable
fun WindowSizeClassDisplay(windowSizeClass: WindowSizeClass) {
    when (windowSizeClass.widthSizeClass) {
        WindowWidthSizeClass.Compact -> compactLayout()
        WindowWidthSizeClass.Medium -> mediumLayout()
        WindowWidthSizeClass.Expanded -> expandedLayout()
    }
}
  • If you’re using isTablet for changing user-facing strings like “your tablet”, you might not need any more information. The solution can be as simple as using more general phrasing such as “your Android device”.
  • If you’re using a device-type boolean to predict the presence of a hardware feature or resource (e.g., telephony, Bluetooth, etc.), check for the desired capabilities directly at runtime before trying to use them, and fail gracefully when they become unavailable. This feature-based approach ensures that your app can respond appropriately to peripheral devices that can be attached or removed. It also avoids cases where a feature is missing even though it could be supported by the device.
val packageManager: PackageManager = context.packageManager
val hasTelephony = packageManager.hasSystemFeature(PackageManager.FEATURE_TELEPHONY)

Do: use Jetpack CameraX when possible

  • There can be a surprising amount of complexity in showing camera previews – orientation, aspect ratio, and more. When you use Jetpack CameraX, the library will handle as many of these details as possible for you.
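For illustration, a minimal sketch of binding a CameraX preview (previewView, lifecycleOwner, and context are assumed to exist in your layout and host component; this isn't code from the post):

val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
cameraProviderFuture.addListener({
    val cameraProvider = cameraProviderFuture.get()
    // CameraX handles rotation and aspect ratio details for the preview use case
    val preview = Preview.Builder().build().also {
        it.setSurfaceProvider(previewView.surfaceProvider)
    }
    cameraProvider.unbindAll()
    cameraProvider.bindToLifecycle(
        lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview
    )
}, ContextCompat.getMainExecutor(context))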

Don’t: assume that your camera preview will align with device orientation

  • There are several kinds of orientation to consider when implementing a camera preview in your app - natural orientation, device orientation, and display orientation. Proper implementation of a camera preview requires accounting for the various kinds of orientation and adapting as the device’s conditions change.

Don’t: assume that aspect ratios are static

Do: declare hardware feature requirements correctly

  • When you’re declaring your app’s feature requirements, refer to the guidance in the Large Screens Cookbook. To ensure that you aren’t unnecessarily limiting your app’s reach, be sure to use the most inclusive manifest entries that work with your app.
<uses-feature android:name="android.hardware.camera.any" android:required="false" />
<uses-feature android:name="android.hardware.camera" android:required="false" />
<uses-feature android:name="android.hardware.camera.autofocus" android:required="false" />
<uses-feature android:name="android.hardware.camera.flash" android:required="false" />

Don’t: assume window insets are static

  • Large screens can change frequently, and that includes their WindowInsets. This means you can’t just check the insets when your app launches and assume they will never change.

Do: use the WindowInsetsListener APIs for Views

  • The WindowInsetsListener APIs notify your app when insets change.
ViewCompat.setOnApplyWindowInsetsListener(view) { view, windowInsets ->
    val insets = windowInsets.getInsets(WindowInsetsCompat.Type.systemBars())
    view.updateLayoutParams<MarginLayoutParams> {
        leftMargin = insets.left
        bottomMargin = insets.bottom
        rightMargin = insets.right
    }
    WindowInsetsCompat.CONSUMED
}

    Do: use the windowInsetsPadding Modifier for Jetpack Compose

    • The windowInsetsPadding Modifier will dynamically pad based on the given type of window insets. Additionally, multiple instances of the Modifier can communicate with each other to avoid adding duplicate padding, and they’re automatically animated.
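For instance, a minimal sketch of the Modifier in use (the composable name is illustrative):

// androidx.compose.foundation.layout.windowInsetsPadding
@Composable
fun SystemBarAwareScreen() {
    Box(
        modifier = Modifier
            .fillMaxSize()
            // Dynamically pads as the system bar insets change
            .windowInsetsPadding(WindowInsets.systemBars)
    ) {
        // Screen content stays clear of the system bars
    }
}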

    Don’t: assume the device has a touch screen

    Do: test your app on large screens

    • The most important thing you can do to ensure your app’s experience is great on large screens is to test it yourself. If you want a rigorous test plan that’s already prepared for you, try out the large screen compatibility tests.

    Do: leverage the large screen tools in Android Studio

    • Android Studio provides tools to use during development that make it much easier to optimize for large screens. For example, multipreview annotations allow you to visualize your app in many conditions at the same time. There’s also a wide variety of tablet, desktop, and foldable AVDs available in the Android Virtual Device Manager to help you test your app on large screens today.
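As a sketch of how a multipreview annotation can be defined (LargeScreenPreviews and HomeScreen are illustrative names; the Devices constants come from androidx.compose.ui.tooling.preview):

// androidx.compose.ui.tooling.preview.Preview / Devices
@Preview(name = "Phone", device = Devices.PHONE)
@Preview(name = "Foldable", device = Devices.FOLDABLE)
@Preview(name = "Tablet", device = Devices.TABLET)
@Preview(name = "Desktop", device = Devices.DESKTOP)
annotation class LargeScreenPreviews

// Apply it to any composable to render all four previews at once
@LargeScreenPreviews
@Composable
fun HomeScreenPreview() {
    HomeScreen()
}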

    Stay tuned for Google I/O

    These tips are a great starting point as you optimize your app for large screens, and there are even more updates to come at Google I/O on May 10th. Tune in to watch the latest news and innovations from Google, with live streamed keynotes and helpful product updates on demand.

    Get ready for I/O ‘23: start planning your sessions, and take a look at some of Android’s favorite moments!

    Posted by Maru Ahues Bouza, Director, Android Developer Relations

    Google I/O 2023 is just a week away, kicking off on Wednesday May 10 at 10AM PT with the Google Keynote and followed at 12:15PM PT by the Developer Keynote. The program schedule launched last week, allowing you to save sessions to your calendar and start previewing content.

    To help you get ready for this year's Google I/O, we’re taking a look back at some of Android’s favorite moments from past Google I/Os, as well as a playlist of developer content to help you prepare. Take a look below, and start getting ready!


    Modern Android Development

Modern Android Development is Android’s set of tools and APIs for helping you stay productive and create better apps, and many of them were born across past Google I/Os. Tor Norbye, Director of Engineering for Android, reflects on how Android development tools, APIs, and best practices have evolved over the years, starting in 2013 when he and the team announced Android Studio. Here are some of the developer productivity talks we’re excited about at this year’s Google I/O:



    Building for a multi-device world

From the launch of Android Auto and Android Wear in 2014 to last year’s preview of the Google Pixel Tablet, Google I/O has always been an important moment for seeing the new form factors that Android is extending to. Sara Hamilton, Developer Relations Engineer for Android, discusses how we are continuing to invest in multi-device experiences and making it easier for you to build for the entire Android device ecosystem. Sara shares her excitement for developers continuing to bring unique experiences to all screen sizes and types, from tablets and foldables, to watches and TVs. Some of our favorite talks at this year’s Google I/O in the multi-device world include:




    The platform and app quality

From playing a smaller part in early Google I/O keynotes to announcing 3 billion monthly active users in 2021, the Android platform has seen tremendous growth; Dan Sandler, Software Engineer for Android, looks back at that growth and how the platform is continuing to evolve. With a focus on helping you make quality apps, here are some of our favorite Android platform talks this year:




    We can’t wait to show you all that’s new across Android in just under a week. Be sure to tune in on the Google I/O website on May 10 to catch the latest Android updates and announcements this year!

    Get ready for Google I/O

    Posted by Timothy Jordan, Director, Developer Relations & Open Source

    I/O is just a few days away and we couldn’t be more excited to share the latest updates across Google’s developer products, solutions, and technologies. From keynotes to technical sessions and hands-on workshops, these announcements aim to help you build smarter and ship faster.

    Here are some helpful tips to maximize your experience online.


    Start building your personal I/O agenda

    Starting now, you can save the Google and developer keynotes to your calendar and explore the program to preview content. Here are just a few noteworthy examples of what you’ll find this year:

    What's new in Android
    Get the latest news in Android development: Android 14, form factors, Jetpack + Compose libraries, Android Studio, and performance.
    What’s new in Web
    Explore new features and APIs that became stable across browsers on the Web Platform this year.
    What’s new in Generative AI
    Discover a new suite of tools that make it easy for developers to leverage and build on top of Google's large language models.
    What’s new in Google Cloud
    Learn how Google Cloud and generative AI will help you develop faster and more efficiently.

    For the best experience, create or connect a developer profile and start saving content to My I/O to build your personal agenda. With over 200 sessions and other learning material, there’s a lot to cover, so we hope this will help you get organized.

    This year we’ve introduced development focus filters to help you navigate content faster across mobile, web, AI, and cloud technologies. You can also peruse content by topic, type, or experience level so you can find what you’re interested in, faster.


    Connect with the community

    After the keynotes, you can talk to Google experts and other developers online in I/O Adventure chat. Here you can ask questions about new releases and learn best practices from the global developer community.

    If you’re craving community now, visit the Community page to meet people with similar interests in your area or find a watch party to attend.

    We hope these updates are useful, and we can’t wait to connect online in May!

    U-NEXT sees 1.5X increase in tablet installations after boosting support for large screens

    Posted by the Android team

    As the largest domestic streaming and digital content service in Japan, U-NEXT is always looking for new ways to connect its users to their favorite content. With just a single application, the platform hosts an extensive library of over 840,000 titles, ranging from movies, anime, and live streams to manga, magazines, and e-books.

    Always looking for ways to improve its UX for its expanding user base, U-NEXT recently turned to the growing market of large screens and foldables, which includes devices like tablets and Chromebooks. Here, U-NEXT engineers saw an opportunity to create a better way to view content by focusing on what makes these devices special. For example, better multi-window support on larger screens could offer a more visually rich UX, while an improved foldable UX might better mimic the experience readers get with a traditional paperback.

    But some users bumped into bugs while using the U-NEXT app with these larger and foldable viewing formats. For instance, the app would often hide important buttons when users opened U-NEXT on larger screens, forcing them to search the page for those navigation tools.

To organize the UX overhaul supporting these formats, the U-NEXT team tackled the project in two phases: first, remove any existing bugs; then, add the features that its large screen users would benefit from the most.

[Image: headshot of Tomoya Miwa, principal engineer at U-NEXT, with the quote 'We wanted to provide a better user experience using the advantages of large screens and foldables']

    Clearing out the bugs

    To fix the visibility issue for important in-app navigation buttons, U-NEXT engineers used a ConstraintLayout to set constraint barriers. These barriers prevented UI elements from being pushed off-screen while ensuring they’re always oriented correctly, no matter the screen size.

What’s more, U-NEXT’s application didn’t always display properly on larger screens. For example, pages displaying browsable video lists typically consist of a header and a curated list of content, and the list is supposed to occupy most of the space on the page. On larger screens, however, the header occupied most of the on-screen real estate, making video content harder to navigate. The U-NEXT team resolved this issue by restricting the width of the header image on larger screens, giving the list more space and making browsing easier for large screen users.

    When users view books on the U-NEXT application, they can tap the screen to reveal a horizontal, scrollable wheel that lets them quickly and easily navigate their place in the text. But when users tried to access this navigation tool on Chromebooks, it wouldn’t appear on the page.

“Originally, we used SystemUiVisibility to determine whether a Chromebook was full-screen when a user tapped it,” said Tomoya Miwa, principal engineer at U-NEXT. “If SystemUiVisibility detected it wasn’t full screen, it’s supposed to display the controller. However, this listener isn’t called when SystemUiVisibility is changed on Chromebooks, so the controller couldn’t be displayed.”

This meant U-NEXT had to change how it manages the visibility of the controller when SystemUiVisibility changes on Chromebooks. After this bug fix, the application hides and displays the controller as expected when the screen is tapped on a Chromebook, resolving the issue for these users.

    The last bug U-NEXT devs tackled was one that temporarily disrupted video when users folded their device during viewing. Switching device orientation while viewing content is supposed to be seamless, but the automatic deletion and recreation of the Activity during orientation changes caused videos to momentarily cut out.

    Instead of letting Android handle these configuration changes automatically, U-NEXT developers changed the app to handle them manually. Using onConfigurationChanged(), the team overrode the change and prevented the UI elements from automatically being deleted and recreated, letting the app preserve them and prevent any viewing interruptions.
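A minimal sketch of this pattern (assuming the Activity declares the relevant android:configChanges values, such as orientation|screenSize|smallestScreenSize|screenLayout, in the manifest):

override fun onConfigurationChanged(newConfig: Configuration) {
    super.onConfigurationChanged(newConfig)
    // The Activity is not destroyed and recreated, so the player and UI survive;
    // update only what actually depends on the new configuration.
}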

    Making the most with more form factors

As part of its feature overhaul, U-NEXT replaced the traditional navigation bar with a navigation rail, a change U-NEXT engineers anticipated would significantly improve the user experience. U-NEXT made this change in line with Android’s Do’s and Don’ts for Large Screens presentation from the recent Android Developer Summit, which provided best practices for developers optimizing for large screens.

    “Reachability is an important factor when it comes to curating comfortable user experiences,” said Tomoya. “With a traditional, horizontal navigation bar, it makes it difficult to reach the buttons in the middle. With a navigation rail, it becomes much easier to reach these buttons.”

[Image: side-by-side rendering of the UI before the navigation rail (left) and after (right)]
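U-NEXT's implementation isn't shown in the post, but a Material 3 navigation rail in Compose looks roughly like this (the destination labels and star icon are placeholders):

// androidx.compose.material3.NavigationRail
@Composable
fun AppNavigationRail(selectedIndex: Int, onSelect: (Int) -> Unit) {
    NavigationRail {
        listOf("Home", "Search", "Library").forEachIndexed { index, label ->
            NavigationRailItem(
                selected = selectedIndex == index,
                onClick = { onSelect(index) },
                icon = { Icon(Icons.Filled.Star, contentDescription = label) },
                label = { Text(label) }
            )
        }
    }
}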

Next, the team enhanced support for two-page spreads when users view e-book content on large screens. Apps typically display a single page when devices are oriented vertically on large screens and foldables. But because most large screens and foldables offer plenty of room for a double-page view, U-NEXT developers wanted to ensure users would always see a double-page spread, whether in portrait or landscape orientation, even when the device was slightly folded.
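The post doesn't include U-NEXT's code, but fold state can be observed with Jetpack WindowManager along these lines (ReaderActivity and updatePageLayout() are hypothetical):

// androidx.window.layout.WindowInfoTracker
lifecycleScope.launch {
    WindowInfoTracker.getOrCreate(this@ReaderActivity)
        .windowLayoutInfo(this@ReaderActivity)
        .collect { layoutInfo ->
            val fold = layoutInfo.displayFeatures
                .filterIsInstance<FoldingFeature>()
                .firstOrNull()
            // e.g., keep the two-page spread even when the device is slightly folded
            updatePageLayout(isHalfOpened = fold?.state == FoldingFeature.State.HALF_OPENED)
        }
}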

    The U-NEXT team also included some smaller, quality-of-life updates to make the user experience for large screens and foldables even better. These included enhancing the app’s compatibility with Compose by ensuring the Navigation component was consistent on every screen size, adding better support for Google Play in-app billing on large screens, and optimizing picture-in-picture viewing.

    'The number of installations on tablets increased by more than 1.5x following the update for large screen devices.' — Tomoya Miwa, principal engineer at U-NEXT

    Android support makes optimization easy

    The U-NEXT team was surprised by how easy it was to optimize its app for large screens and foldable devices. Thanks to Android’s developer resources, U-NEXT was able to improve content viewing on its app, across devices, while also minimizing time and effort.

    “It’s not that difficult,” said Tomoya. “Introducing the navigation was relatively easy, and foldable support in general is not hard as long as your app is compatible with basic screen rotation.”

    Since updating the U-NEXT app to better support large screens, tablet installations have increased by 1.5X. Additionally, the watch time from users on large screen devices jumped by more than 10%.

    Looking forward, the U-NEXT team plans to keep expanding its app’s large screen capabilities by enhancing mouse and keyboard compatibility, introducing list detail view to improve search functionality, adding greater support for tabletop mode, and implementing drag-and-drop features to make content sharing easier.

    U-NEXT is excited to see Android add more resources to its large and expanding list of documentation, including the recently updated Material 3 library, which will further help support the growing number of users with large screen and foldable devices.

    Start optimizing for large screens today

    More people are using large screens, foldables, and other up-and-coming form factors. Learn how you can better support your users on these devices with examples from Android’s Large Screen Gallery.

    #WeArePlay | Meet Maria, AnnMaria and Dennis from the USA. More stories from around the world

    Posted by Leticia Lago, Developer Marketing

From underserved communities needing more support with kids' education to families struggling to preserve the memories of lost loved ones, our latest release of #WeArePlay stories celebrates the inspiring founders who identified problems around them and made apps or games to solve them.


Starting with Maria, AnnMaria and Dennis from Minnesota, USA - founders of 7 Generation Games. Growing up as a Latina in rural North Dakota, Maria wanted to build something inspired by her experiences and help support the education gap in underserved communities. She teamed up with her mom AnnMaria, a teacher and computer programmer, and software developer Dennis, to set up 7 Generation Games. They make educational games – in English, Spanish and indigenous languages – to improve math skills of Hispanic and Native American children. Making Camp Ojibwe is a village-building simulation where players earn points by answering math and social studies questions. Now with multiple titles, their games are proven to improve children’s school results.


[Image: #WeArePlay card for David, Arman & Hayk, Zoomerang, Yerevan, Armenia]
    Next, David, Arman & Hayk from Armenia - founders of Zoomerang. After uploading his music online, David got limited views because his video editing wasn’t engaging. It was his passion for music that led him to start Zoomerang with co-founders Arman and Hayk. They created a platform where content creators could get editing templates for their videos, allowing thousands to grow their brand and vivify their content.


[Image: #WeArePlay card for Rama, Little Thinking Minds, Amman, Jordan]
Next, Rama from Jordan - founder of Little Thinking Minds. When she and her friend and co-founder Lamia had their first boys, they struggled to find resources to teach their children Arabic. So, they utilized their background in film production and started making children’s videos in Arabic in their backyards. When they held a screening at a local cinema, over 500 parents and children came to watch, and they had to screen it multiple times. A few years later, the content is digitized in a series of apps used in schools across 10 countries. The most popular, I Read Arabic, has educational videos, books, games, and a dashboard for teachers to track students' progress.


[Image: #WeArePlay card for Prakash, ForKeeps, Cape Town, South Africa]
    Last but not least, Prakash from South Africa - founder of ForKeeps. When Prakash’s sister passed away, his nieces longed to hear her voice again and keep her memory alive. When his father died, he felt the same and regretted not having all his photos and messages in one place. This inspired Prakash and his co-founders to create ForKeeps: a platform for preserving a person’s legacy with photo albums, stories, and voice messages. Through the app, people can feel their loved one’s presence after they're gone. The Forever Album tool also allows the audience to share and celebrate special occasions in real time. Now Prakash’s goal is to help more people across different cultures around the world record memories for their loved ones.

    Check out their stories now at g.co/play/weareplay and keep an eye out for more stories coming soon.



    Photo Picker Everywhere

    Posted by Yacine Rezgui - Developer Relations Engineer

    Improving privacy remains a top priority on Android. We've been investing in the platform to give users more control, increase transparency, and reduce the scope of access to private data.

    Last year, we launched a new feature to emphasize this strategy: the Android photo picker. The photo picker is a browsable interface that presents the user with their media library, sorted by date from newest to oldest, and integrates nicely with your app’s experience without requiring media storage permissions!

[Animated image: screengrab of the photo picker on a mobile device]

    It allows users to browse their photo gallery and grant access to specific items to an app. It’s a powerful tool allowing you to quickly add a photo selection feature to your apps without having to develop a complex in-house picker from scratch. It also eliminates the need to maintain complex logic for handling permissions and querying MediaStore, enabling you to save time and effort that would otherwise be spent on coding and debugging.

    The photo picker is easy to implement, as you only need to include a few lines of code with the support library. Furthermore, it’s highly configurable, so you can customize the user experience according to your app’s specific needs.

    What’s new?

    Availability across all Android versions

    One key piece of feedback we’ve heard from developers is the lack of support for older devices, making maintenance costly in terms of development. We are pleased to announce that, as part of the ActivityX 1.7.0 release, the Photo Picker support library will use a backported version provided by Google Play services on devices running Android KitKat (4.4) and later!

    To enable the backported photo picker:

• Update the ActivityX dependency to version 1.7.0
• Add the following snippet, which declares the Google Play services module dependency, to your AndroidManifest.xml. It instructs Google Play services to set up the backported photo picker module while installing or updating your application (you can read more in the documentation).

<!-- Prompt Google Play services to install the backported photo picker module -->
<service android:name="com.google.android.gms.metadata.ModuleDependencies"
         android:enabled="false"
         android:exported="false">
    <intent-filter>
        <action android:name="com.google.android.gms.metadata.MODULE_DEPENDENCIES" />
    </intent-filter>
    <meta-data android:name="photopicker_activity:0:required" android:value="" />
</service>

• Register an activity result with PickVisualMedia or PickMultipleVisualMedia and launch the photo picker.

// Register a variant of the photo picker where the user can select at most 5 items
val pickMultipleVisualMedia = registerForActivityResult(PickMultipleVisualMedia(5)) { uris ->
    // Process URIs
    Log.d("Photo Picker URIs count", uris.size.toString())
}

// Launch the photo picker (photos & video included)
pickMultipleVisualMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageAndVideo))

    And that’s it! In less than 10 lines of code, you have a permission-less photo picker with a nice UX that blends well into your application, and you have a single code path for maintaining the feature’s functionality for all Android versions running KitKat and above.

    GET_CONTENT takeover

    Since our last blog post, we started rolling out support for the GET_CONTENT intent in the Android photo picker whenever the specified MIME type filter matches image/* and/or video/*. As the rollout will continue in the upcoming months, make sure to test your app once your device has the feature enabled:

    adb shell device_config put storage_native_boot take_over_get_content true

Later this year, the photo picker will seamlessly support cloud storage providers like Google Photos, allowing users to select their remote content without having to leave your app, and without any code change on the developer's side.

    If you have any feedback or suggestions, submit tickets to our issue tracker.

    Com2uS brings a seamless multi-platform gameplay experience with Google Play Games on PC

    Posted by Arjun Dayal, Director, Google Play Games

    Reaching a motivated audience to play on PC

    Summoners War: Chronicles is a mobile MMORPG from South Korean game developer Com2uS, released globally in March 2023. To date, Summoners War has earned over $2.7 billion with more than 180 million downloads worldwide. Set in a fantasy world where players must collect and train various monsters to battle against other players, Summoners War is one of the most popular mobile games in the world.

Nearly a decade later, the game continues to grow its large and active community of players, in part because Com2uS continues to release new content and updates to keep the game fresh and exciting. As part of Com2uS’s effort to offer their players the best way to enjoy the games wherever they want to play, they decided to expand their game to PC. They chose Google Play Games on PC to reach new users and offer a high-end, immersive gaming experience to an already motivated audience ready to play mobile games on PC.

[Animated image: gameplay across four devices with different screen sizes]
    Seamless sync across signed-in devices for Summoners War: Chronicles
    Subject to game availability and PC compatibility

    Optimizing for PC with the same Android build

Google Play Games on PC offered a quick and smooth integration process from mobile to PC using the same Android build. With many developer tools at hand, only a few optimization steps were required to create a differentiated PC gaming experience. You can watch our Google Play Games on PC playlist to see more details on the integration process.

The Com2uS team added input support, an essential feature that enables players to enjoy the game on large screens, including tablets and foldables. Summoners War: Chronicles currently supports keyboard, mouse, and game controller, which has been one of the top requests from users. It was also important to optimize the user experience with cross-platform play in mind. For example, the game's interface needed to be easily navigable and intuitive for players on all platforms, and the UI had to be adjusted for different screen sizes while clearly explaining the game controls.

    To optimize the graphics settings for each platform, Com2uS chose Vulkan as their primary graphics API for its high performance and multi-platform ability. Even on high-end mobile devices, some players preferred flexible quality settings to avoid overheating and to preserve battery life. Vulkan allowed Com2uS to provide the best possible graphics quality for their key user demographic on both PC and mobile devices.

Using the Google Play Games on PC Emulator, Com2uS could test and debug the build in various player configurations, identical to the user environment. The emulator supports streamlined testing across multiple aspect ratios, is directly accessible from the Play Store, and enables ADB access for developing and debugging. Having developed the game with Unity, the team was able to automatically detect the emulator and deploy the game directly. The Google Play Games on PC Emulator is coming soon for all developers, with support for the most popular game engines, including Unity, Unreal, Cocos, and more.

    Providing the best user experience with simultaneous release

    In November 2022, Com2uS simultaneously released Summoners War: Chronicles on both PC and mobile. Players can now enjoy the game on mobile or PC via Google Play Games. The release of the game on PC offers players a new level of gaming experience, allowing them to enjoy Summoners War on a larger screen and with more advanced hardware.

    With Play Games Services, players can sync their progress automatically whenever they launch the game on a new device and can continue to collect rewards and Play Points no matter where they choose to play.

    Get started with Google Play Games on PC

    Starting today, we’re excited to announce that Google Play Games beta is opening up sign-ups for all players in Japan! Enable your players to experience immersive and seamless multi-platform gameplay with Google Play Games on PC. To join, express interest in our beta program today.


    Google Play Games on PC is available to download in 14 countries as of April 19, 2023.
    Please see g.co/googleplaygames for more information. Game titles may vary by region.




    Delivering an immersive sound experience with Spatial Audio

    Posted by Nevin Mital - Developer Relations Engineer, Android Media

    In Android 13 (API level 33), we introduced a new standardized platform architecture for spatial audio, a premium and more engaging sound experience. With spatial audio, your content sounds more realistic to users by making it sound as though they are in the middle of the action. The individual instruments from a band can be separated and “placed” around the user, or the sound from a whale might grow as it approaches from behind and taper off as it swims away. Read on to learn more about Android’s support for spatial audio and how to implement the feature in your app.

    Spatial audio on Android

There are two main kinds of spatial audio:

    • With static spatial audio, the sounds are anchored to the user and move with them. A bird chirping to their left will always be on their left, no matter how they turn or move their head.
    • With spatial audio with head tracking, the sounds are positioned in the environment around the user. By turning their head to the left, the user will now hear the bird chirping in front of them.

    On Android, only multi-channel audio configured with the right AudioAttributes and AudioFormat is spatialized by default, though OEMs can customize this behavior. On devices where the OEM has integrated a spatializer effect, static spatial audio will work when any headset is connected to the device, though head-tracked spatial audio requires a headset with compatible head tracking sensors. OEMs like Pixel, OnePlus, and Xiaomi have already made these experiences available to their users.

    Implementing & testing spatial audio

    The easiest way to integrate with this feature is to use ExoPlayer! If you use ExoPlayer from Jetpack Media3 release 1.0.0 or newer, ExoPlayer will configure the decoder to prevent multi-channel audio from being downmixed to stereo and the default track selection behavior will take into account whether or not spatialization is possible. This means your content just needs to include a multi-channel audio track that ExoPlayer can select. ExoPlayer will monitor the device’s state and select a multi-channel track when spatialization is possible, or switch to a stereo track if not.
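Getting those defaults is as simple as creating the player; a minimal sketch with Media3 (context and contentUri are placeholders, not code from the post):

// androidx.media3.exoplayer.ExoPlayer
val player = ExoPlayer.Builder(context).build()
// Default track selection prefers a multi-channel track when spatialization is possible
player.setMediaItem(MediaItem.fromUri(contentUri))
player.prepare()
player.play()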

    Android 12L (API level 32) added the new Spatializer class to allow you to query the spatialization capabilities of the device. There are four conditions that must all be true for the device to output spatialized audio:

// Get an instance of the Spatializer from the AudioManager
val audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager
val spatializer = audioManager.spatializer
if (
    // Does the device have a spatializer effect?
    spatializer.immersiveAudioLevel != Spatializer.SPATIALIZER_IMMERSIVE_LEVEL_NONE
    // Is spatial audio enabled in the settings?
    && spatializer.isEnabled
    // Is spatialization available, for example for the current audio output routing?
    && spatializer.isAvailable
    // Can audio with the given parameters be spatialized?
    && spatializer.canBeSpatialized(audioAttributes, audioFormat)
) {
    // Spatialization is possible, so you can select a multi-channel track for playback
    // with spatial audio.
} else {
    // Spatialization is not possible, so you may choose to select a stereo track for
    // playback to preserve bandwidth.
}

ExoPlayer performs these checks when deciding which audio track to select. To further check if head tracking is available, you can call the isHeadTrackerAvailable() method. The Spatializer class also includes the following listeners to be able to react to changes in the device’s state:

  • OnSpatializerStateChangedListener: for changes in whether the spatializer is enabled or available.
  • OnHeadTrackerAvailableListener: for changes in whether head tracking is available.

    With these signals, you can manually adjust your playback for spatial audio. Note that if you are not using ExoPlayer, you should make sure to configure the decoder to output multi-channel audio when possible by setting the max channel count to a large number with MediaFormat.setInteger(MediaFormat.KEY_MAX_OUTPUT_CHANNEL_COUNT, ##). See how ExoPlayer does this on GitHub. There are two ways to prevent spatialization depending on your use-case. If your audio is already spatialized, call setIsContentSpatialized(true) when configuring the AudioAttributes for your audio stream to prevent the audio from being double-processed. In all other cases, you can instead call setSpatializationBehavior(AudioAttributes.SPATIALIZATION_BEHAVIOR_NEVER) to disable spatialization altogether.
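For example, reacting to those signals might look like this (updateTrackSelection() is a hypothetical app function):

// android.media.Spatializer (API level 32+)
spatializer.addOnSpatializerStateChangedListener(
    context.mainExecutor,
    object : Spatializer.OnSpatializerStateChangedListener {
        override fun onSpatializerEnabledChanged(spat: Spatializer, enabled: Boolean) {
            updateTrackSelection()
        }
        override fun onSpatializerAvailableChanged(spat: Spatializer, available: Boolean) {
            updateTrackSelection()
        }
    }
)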

    As mentioned previously, using spatial audio requires a supported device (that is, getImmersiveAudioLevel() does not return SPATIALIZER_IMMERSIVE_LEVEL_NONE) and a connected headset. To test spatial audio, start by making sure the feature is enabled in settings:

    • For wired headsets, go to System settings > Sound & vibration > Spatial audio.
    • For wireless headsets, go to System settings > Connected devices > Gear icon for your wireless device > Spatial audio.

    Note that for spatial audio with head tracking, the headset must have head tracking sensors that are compatible with the device, such as Pixel Buds Pro with a Pixel phone, and head tracking must also be enabled in settings.

    Next steps

    Hearing is believing, so we highly recommend trying out spatial audio for yourself! You can see an example implementation in our sample app, Universal Android Music Player. And for more details on everything discussed here, check out our spatial audio developer guide.

    Automatic Update Prompts for Crashing Apps

    Posted by Kurt Williams, Product Manager, Google Play

    We are excited to announce a new feature that will help you to increase the rollout velocity of app updates and meet Play’s quality bar. On phones and tablets running Android 7.0 (SDK level 24) and above, the Play Store will prompt users to update your app if it crashes in the foreground and a more stable version is available. This will reduce your user-perceived crash rate.

[Image: "Update the app to fix crashes" dialog]
The Play Store can now automatically prompt users to update your app after a user-visible crash.


    What you need to do: nothing!

    These new prompts don’t require any integration by you and are enabled automatically when Play determines that a newer version of your app has a statistically relevant, lower crash rate. Furthermore, since the dialog is shown by the Play Store and not your app, the update prompt can be shown even if your app crashes on startup.

    Data driven improvement over time

    There are a few thresholds we take into account, which we will tune over time to achieve the best outcome. These include:

    1. User activity level of an app version according to Vitals to ensure we have statistical relevance
    2. Foreground crash rate of an app version and of its newer version
    3. Number of times a prompt can be shown for each version of your app on a device, if the user doesn’t choose to update

    We believe that this new feature will help your users update to the best available version of your app, and help you deliver the best experience to your users.




    Voice controlled workouts with Google Assistant

    Posted by John Richardson, Partner Engineer

With tens of millions of installs of the adidas Running app, users turn to adidas every day as part of their health and fitness routine. Like many in the industry, adidas recognized that in this ever-evolving market, it's important to make it as easy as possible for users to achieve their fitness goals, and making their app available on Wear was a natural fit. adidas didn’t stop at bringing their running app to the watch, however; they also realized that in a situation such as a workout, the ability to engage with the application hands-free, or even eyes-free, further simplifies how users engage with the app.

    Integrating Google Assistant

To enable hands-free control, adidas looked to Google Assistant and App Actions, which lets users control apps with their voice using built-in intents (BIIs). Users can perform specific tasks by voice, such as starting a run or a swim.

    Integrating Health and Fitness BIIs was a simple addition that adidas’ staff Android developer made in their IDE by declaring <capability> tags in their shortcuts.xml file in order to create a consistent experience between the mobile app and a watch surface. It’s a process that looks like this:

    1. First, Assistant parses the user’s request using natural language understanding and identifies the appropriate BII. For example, START_EXERCISE to begin a workout.
    2. Second, Assistant will then fulfill the user’s intent by launching the application to a specified content or action. Besides START_EXERCISE, users can also stop (STOP_EXERCISE), pause (PAUSE_EXERCISE), or resume (RESUME_EXERCISE) their workouts. Haptic feedback or dings can also be added here to show whether a user request was successful or not.
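Fulfillment ultimately reaches the app as an ordinary intent. A hypothetical handler for a START_EXERCISE deep link (the URL scheme, parameter name, and startWorkout() are illustrative, not adidas' code):

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // Suppose the capability in shortcuts.xml maps the BII to a link like
    // adidasrunning://workout?exerciseType=running (hypothetical)
    intent.data?.getQueryParameter("exerciseType")?.let { exerciseType ->
        startWorkout(exerciseType)
    }
}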

With App Actions being built on Android, the development team was able to deploy quickly. And when partnered with the Health Services and Health Connect APIs, which respectively support real-time sensor and health data, end users can have a cohesive and secure experience across mobile and Wear OS devices.

[Animated image: the adidas Running app launching via Google Assistant on a wearable device]

    “What’s exciting about Assistant and Wear is that the combination really helps our users reach their fitness goals. The ability for a user to leverage their voice to track their workout makes for a unique and very accessible experience,” says Robert Hellwagner, Director of Product Innovation for adidas Runtastic. “We are excited by the possibility of what can be done by enabling voice based interactions and experiences for our users through App Actions.”

    Learn more

Enabling voice controls to unlock hands-free and eyes-free contexts is an easy way to create a more seamless app experience for your users. To bring natural, conversational interactions to your app, read our documentation today, explore how to build with one of our codelabs, or subscribe to our App Actions YouTube playlist for more information. You can also sign up to develop for Android Health Connect if you are interested in joining our Google Health and Fitness EAP. To jump right into how this integration was built, learn more about integrating Wear OS and App Actions.