
Watch out for Wear OS at Android Dev Summit 2021

Posted by Jeremy Walker, Developer Relations Engineer

image of 4 watch faces against dark blue background.

This year’s Android Dev Summit had many exciting announcements for Android developers, including some major updates for the Wear OS platform. At Google I/O, we announced the launch of the new Wear OS. Since then, Wear OS Powered by Samsung has launched on the Galaxy Watch4 series. Many developers, such as Strava, Spotify, and Calm, have already created helpful experiences for the latest version of Wear OS, and we’re looking forward to seeing what new experiences developers will help bring to the watch. To create better apps for the wrist, read on for updates to our APIs, design tools, and the Play Store.


Compose for Wear OS

The Jetpack Compose library simplifies and accelerates UI development, and we’re bringing Compose support to Wear OS. You can design your app with familiar UI components, adapted for the watch. These components include Material You, so you can create beautiful apps with less code.

Compose for Wear OS is now in developer preview. Try it out and share your feedback here, or join the #compose-wear channel on the JetBrains Slack and let us know there. Make sure you do so before we finalize the APIs during beta!


Watch Face Studio

image of clock face in editing software

Watch faces are one of the most visible ways that users can express themselves on their smartwatches. Creating a watch face is a great way to showcase your brand for users on Wear OS. We’ve partnered with Samsung to provide better tools for watch face creation and make it easier to design watch faces for the Wear OS ecosystem.

Watch Face Studio is a design tool created by Samsung that allows you to produce and distribute your own watch faces without any coding. It includes intuitive graphics tools that allow you to easily design watch faces. You can create watch faces for your personal use, or upload them in Google Play Console to share with users on Wear OS devices that support API level 28 and above.


Library updates

We recently released a number of Android Jetpack Wear OS libraries to help you follow best practices, reduce boilerplate, and create performant, glanceable experiences for your users.

Tiles are now enabled for most devices in the market, providing predictable, glanceable access to information and quick actions. The API is now in beta; check it out!
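
To give a feel for the API, here is a minimal sketch of a tile service, assuming the androidx.wear.tiles beta artifacts; the class name and text are illustrative, and the service must also be declared in your manifest:

import androidx.wear.tiles.LayoutElementBuilders
import androidx.wear.tiles.RequestBuilders
import androidx.wear.tiles.ResourceBuilders
import androidx.wear.tiles.TileBuilders
import androidx.wear.tiles.TileService
import androidx.wear.tiles.TimelineBuilders
import com.google.common.util.concurrent.Futures
import com.google.common.util.concurrent.ListenableFuture

// Hypothetical tile that renders a single line of text.
class HelloTileService : TileService() {

    override fun onTileRequest(
        requestParams: RequestBuilders.TileRequest
    ): ListenableFuture<TileBuilders.Tile> = Futures.immediateFuture(
        TileBuilders.Tile.Builder()
            .setResourcesVersion("1")
            .setTimeline(
                TimelineBuilders.Timeline.Builder().addTimelineEntry(
                    TimelineBuilders.TimelineEntry.Builder().setLayout(
                        LayoutElementBuilders.Layout.Builder().setRoot(
                            LayoutElementBuilders.Text.Builder()
                                .setText("Hello, Tiles!")
                                .build()
                        ).build()
                    ).build()
                ).build()
            ).build()
    )

    // A text-only tile can return an empty resources bundle.
    override fun onResourcesRequest(
        requestParams: RequestBuilders.ResourcesRequest
    ): ListenableFuture<ResourceBuilders.Resources> = Futures.immediateFuture(
        ResourceBuilders.Resources.Builder().setVersion("1").build()
    )
}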

For developers who want more fine-grained control of their watch faces (outside of Watch Face Studio), we've launched the new Jetpack Watch Face APIs beta, built from the ground up in Kotlin.

The new API offers a number of new features:

  • Watch face styling which persists across both the watch and phone (no need for your own database).
  • Support for a WYSIWYG watch face configuration UI on the phone.
  • Smaller, separate libraries (only include what you need).
  • Battery improvements by encouraging good battery usage patterns out of the box; for example, reducing the interactive frame rate when battery is low.
  • New Screenshot APIs so users can see their watch face changes in real time.
  • And many more...

This is a great time to start moving from the older Watch Face Support Library to this new version.


Play Store updates

We’re making it easier for people to discover your Wear OS apps in the Google Play Store. Earlier this year, we enabled searching for watch faces and made it easier for people to find your apps in the Wear category. We also launched the capability for people to download apps onto their watches directly from the mobile Play Store. You can read more about these changes here.

We’ve also released updated Wear OS quality guidelines to help you meet your users’ expectations, as well as new screenshot guidelines to help users better understand what your app will look like. To help people understand how your app works on their device and in their location, we will launch form factor and location-specific ratings in 2022.

To learn more about developing for Wear OS, check out the developer website.

What’s New in Scalable Automated Testing

Posted by Arif Sukoco, Android Studio Engineering Manager (@GoogArif) & Jolanda Verhoef, Developer Relations Engineer (@Lojanda)

dark blue background with three different devices showing the same screen: phone, tablet, and watch

We know it can be challenging to run Android instrumented tests at scale, especially when you have a big test suite that you want to run against a variety of Android device profiles.

At I/O 2021 we first introduced the Unified Test Platform (UTP). UTP allows us to build testing features for Android instrumented tests, such as running instrumented tests from Android Studio through Gradle, and Gradle Managed Devices (GMD). GMD allows you to define a set of virtual devices in build.gradle and let Gradle manage them: spinning them up before each instrumented test run and tearing them down afterwards. In Android Gradle Plugin 7.2.0, we are introducing more features on top of GMD to help you scale tests across multiple Android virtual devices in parallel.


Sharding

The first feature we are introducing is sharding on top of GMD. Sharding is a common technique used in test runners where the test runner splits up the tests into multiple groups, or shards, and runs them in parallel. With the ability to spin up multiple emulator instances in GMD, sharding is an obvious next step to make GMD a more scalable solution for large test suites.

When you enable sharding for GMD and specify the desired number of shards, it will automatically spin up that number of managed devices for you. For example, the sample below configures a Gradle Managed Device called pixel2 in your build.gradle:


android {
  testOptions {
    devices {
      // Defines a Gradle Managed Device named "pixel2".
      pixel2 (com.android.build.api.dsl.ManagedVirtualDevice) {
        device = "Pixel 2"           // hardware profile to emulate
        apiLevel = 30                // Android 11 system image
        systemImageSource = "google" // image with Google APIs
        abi = "x86"
      }
    }
  }
}

Let’s say you have 4 instrumented tests in your test suite. You can pass an experimental property to Gradle to specify how many shards to split your tests into. The following command splits the test run into two shards:


./gradlew -Pandroid.experimental.androidTest.numManagedDeviceShards=2 pixel2DebugAndroidTest

Invoking Gradle this way will tell GMD to spin up 2 instances of pixel2, and split the running of your 4 instrumented tests between those 2 emulated devices. In the Gradle output, you will see "Starting 2 tests on pixel2_0", and "Starting 2 tests on pixel2_1".

As seen in this example, sharding through GMD spins up multiple identical virtual devices. If you apply sharding and have more than one device defined in build.gradle, GMD will spin up multiple instances of each virtual device.

The HTML format output of your test run report will be generated in app/build/reports/androidTests/managedDevice/pixel2. This report will contain the combined test results from all the shards.

You can also load the test results from each shard to Android Studio by selecting Run > Import Tests From File from the menu and loading the protobuf output files app/build/outputs/androidTest-results/managedDevice/pixel2/shard_1/test-result.pb and app/build/outputs/androidTest-results/managedDevice/pixel2/shard_2/test-result.pb.

It’s worth remembering that when sharding your tests, there is always a tradeoff between the extra resources and time required to spin up additional emulator instances, and the savings in test running time. As such, it is more useful when you have larger test suites to run.

Also, please note that GMD doesn’t yet support running tests for test-only modules, and there are known flakiness issues when running on cloud-hosted CI servers.


Slimmer Emulator System Images

When running multiple emulator instances at the same time, your server’s limited computing resources could become an issue.

One of the ways to improve this is by slimming down the Android emulator system image to create a new type of device that’s optimized for running automated tests. The Automated Test Device (ATD) system image is designed to consume less CPU and memory by removing components that normally do not affect your app’s instrumented tests, such as the SystemUI, the Settings app, and bundled apps like Gmail and Google Maps. Please read the release notes for more information about the ATD system image.

The ATD system images have hardware rendering disabled by default. This helps with another common source of slow-running test suites. Often, when running instrumented tests on an emulator, access to the host’s GPU for graphics hardware acceleration is not available. In this case, the emulator falls back to software graphics acceleration, which is much more CPU intensive. Nearly all functionality still works as expected with hardware rendering off, with the notable exception of screenshots. If you need to take screenshots in your test, we recommend taking a look at the new AndroidX Test Screenshot APIs, which dynamically enable hardware rendering in order to take a screenshot. Please take a look at the examples for how to use these APIs.

To use ATD, first make sure you have downloaded the latest version of the Android emulator from the Canary channel (version 30.9.2 or newer). To download this emulator, go to Appearance & Behavior > System Settings > Updates and set the IDE updates dropdown to “Canary Channel”.

Next, you need to specify an ATD system image in your GMD configuration:


android {
  testOptions {
    devices {
      pixel2 (com.android.build.api.dsl.ManagedVirtualDevice) {
        device = "Pixel 2"
        apiLevel = 30
        systemImageSource = "aosp-atd" // Or "google-atd" if you need
                                       // access to Google APIs
        abi = "x86" // Or "arm64-v8a" if you are on an Apple M1 machine
      }
    }
  }
}

You can now run tests from the Gradle command line just like you would with GMD as before, including with sharding enabled. The only thing you need to add for now is to let Gradle know you are referring to a system image in the Canary channel.


./gradlew -Pandroid.sdk.channel=3 \
  -Pandroid.experimental.androidTest.numManagedDeviceShards=2 \
  pixel2DebugAndroidTest

Test running time improvements using ATD will vary depending on your machine configuration. In our tests, comparing ATD and non-ATD system images on a Linux machine with an Intel Xeon CPU and 64GB of RAM, we saw 33% shorter test running time when using ATD, while on a 2020 MacBook Pro with an Intel i9 processor and 32GB of RAM, we saw a 55% improvement.

We’re really excited about these new features, and we hope they help you scale out your instrumented tests. Please try them out and let us know what you think! Follow us, the Android Studio development team, on Twitter and on Medium.

Here’s how to watch the 2021 Android Dev Summit!

Posted by The Android Team

We’re less than 24 hours away from kicking off the 2021 Android Dev Summit, broadcasting live online on October 27 & 28. The summit kicks off on October 27 at 10AM PDT with a 50-minute technical keynote, The Android Show. You can tune in at developer.android.com/dev-summit, or watch on YouTube.

After the show, we’ll be posting 30+ technical sessions to the site as well as YouTube for you to watch at your own pace, from Material You in Jetpack Compose to Kotlin Flows in practice.

Two days of live, technical Android content

Over the two day event, we have a number of ways for you to tune in and hear your favorite Android development topics discussed live from the team who built Android. Got questions about Modern Android Development, Large Screens, or Material You? Ask them on Twitter now using #AskAndroid to get them answered live on the air. We’ll also host live Android Code-Alongs. Tune in to watch Android experts as they code, tackle programming challenges, and answer your questions live across Jetpack Compose and Compose for Wear OS.

screenshot of conference agenda

For the full agenda with timings, check out the Android Dev Summit page. And of course, don’t forget: if you run into the bugs of chaos before then, let them know that together with Team Jetpack, we’re coming for them at Android Dev Summit…

Evolving our business model to address developer needs

Posted by Sameer Samat, Vice President, Product Management

When we started Android and Google Play more than a decade ago, we made a bet that a free and open mobile ecosystem could compete with the proprietary walled gardens that dominated the industry. It wasn’t yet clear what kinds of businesses would move to mobile or what apps would be successful. To keep things simple, we went with an easy-to-understand business model: The vast majority of developers could distribute their apps on Google Play for free (currently 97% do so at no charge). For the developers who offered a paid app or sold in-app digital goods (currently just 3% of developers), the flat service fee was 30%. This model helped apps to become one of the fastest-growing software segments. And instead of charging licensing fees for our OS, our service fee allowed us to continually invest in Android and Play while making them available for free to device makers all over the world.

The creativity and innovation from developers around the world spurred amazing new app experiences we could have never imagined when we first introduced Android. As the ecosystem evolved, a wider range of business models emerged to support these different types of apps. We've made important changes along the way, including moving beyond a “one size fits all” service fee model to ensure all types of businesses can be successful. Instead of a single service fee, we now have multiple programs designed to support and encourage our diverse app ecosystem.

The result is that 99% of developers qualify for a service fee of 15% or less. And after learning from and listening to developers across many industries and regions, including developers like Anghami, AWA, Bumble, Calm, Duolingo, KADOKAWA, KKBOX, Picsart, and Smule, we're announcing additional changes to further support our ecosystem of partners and help them build sustainable businesses, and ensure Play continues to lead in the mobile app ecosystem.

Decreasing service fees on subscriptions to 15%

Digital subscriptions have become one of the fastest growing models for developers but we know that subscription businesses face specific challenges in customer acquisition and retention. We’ve worked with our partners in dating, fitness, education and other sectors to understand the nuances of their businesses. Our current service fee drops from 30% to 15% after 12 months of a recurring subscription. But we’ve heard that customer churn makes it challenging for subscription businesses to benefit from that reduced rate. So, we’re simplifying things to ensure they can.

To help support the specific needs of developers offering subscriptions, starting on January 1, 2022, we're decreasing the service fee for all subscriptions on Google Play from 30% to 15%, starting from day one.

For developers offering subscriptions, this means that first-year subscription fees will be cut in half. We’ve already gotten positive feedback from our developer partners on this change:

“Our partnership with Google has been a powerful one for our business, helping us to scale and ultimately playing a key role in advancing our mission to empower women globally. The pricing change they’ve announced will allow us to better invest in our products and further empower users to confidently connect online.”
– Whitney Wolfe Herd, Founder and CEO, Bumble Inc.
"Just as every person learns in different ways, every developer is different as well. We're excited to see Google continuing to collaborate with the ecosystem to find models that work for both the developer and platform. This reduction in subscription fees will help Duolingo accelerate our mission of universally available language learning."
– Luis von Ahn, Co-Founder and CEO of Duolingo

Going further with cross platform experiences

While apps remain incredibly important for mobile phones, great services must now also span TVs, cars, watches, tablets and more. And we recognize that developers need to invest in building for those platforms now more than ever.

Earlier this year we launched the Play Media Experience program to encourage video, audio and book developers alike to help grow the Android platform by building amazing cross-device experiences. This helped developers invest in these multi-screen experiences with a service fee as low as 15%.

Today, we’re also making changes to the service fee in the Media Experience program, to better accommodate differences in these categories. Ebooks and on-demand music streaming services, where content costs account for the majority of sales, will now be eligible for a service fee as low as 10%. The new rates recognize industry economics of media content verticals and make Google Play work better for developers and the communities of artists, musicians and authors they represent. You can go here for more information.

We will continue to engage with developers to understand their challenges and opportunities — and how we can best support them in building sustainable businesses. It’s a theme that will be front and center at the Android Developer Summit on October 27-28, where you’ll hear more about our latest tools, application programming interfaces (APIs) and technologies designed to help developers be more productive and create better apps.

If you’re looking for more information about Google Play and its service fees, we've answered some common questions here.

Android Devs assemble: help Team Jetpack fight the bugs of chaos at #AndroidDevSummit + agenda now live!

Posted by The Android Team

Image shows Jetpack superhero avatar

Excited for Android Dev Summit on October 27-28? Us too! But, before we get there, we need your help. Team Jetpack is in a brutal fight against the bugs of chaos… they are outnumbered and they need you to join their forces, defeat the bugs, and help Android restore order to the universe. Will you answer the call?



Create your own Team Jetpack superhero, with a custom look and feel, and add your own mix of Android coding power boosts to unlock magical superpowers. Once you’re done, you’ll get a digital trading card for your superhero to share on Twitter, and you’ll be all set to join us at #AndroidDevSummit and help restore order to the universe. Go to goo.gle/ads21 to make yours!



#AndroidDevSummit agenda + sessions announced!

We just posted the livestream agenda, released the full technical talk details, and added additional speakers to the lineup for Android Dev Summit. Take a look and start planning your days. Android Dev Summit kicks off with a 50-minute technical keynote, The Android Show. After the show, we’ll be posting 30+ technical sessions for you to watch at your own pace, from Material You in Jetpack Compose to Kotlin Flows in practice.

Photo of ADS21 session schedule

Over the two day event, we have a number of ways for you to tune in and hear your favorite Android development topics discussed live from the team who built Android. Got questions about Modern Android Development, Large Screens, or Material You? Ask them on Twitter now using #AskAndroid to get them answered live on the air. We’ll also host live Android Code-Alongs. Tune in to watch Android experts as they code, tackle programming challenges, and answer your questions live across Jetpack Compose and Compose for Wear OS.

We can’t wait to connect with you in just over a week! For the full agenda with timings, check out the Android Dev Summit page. And of course, don’t forget: if you run into the bugs of chaos before then, let them know that together with Team Jetpack, we’re coming for them at Android Dev Summit…

Launching Data safety in Play Console: Elevating Privacy and Security for your users

Posted by Krish Vitaldevara, Director, Product Management

Illustration of a phone with a security symbol

We know that a big part of feeling safe online is having control over your data. That’s why every day we’re committed to empowering users with advanced security and privacy controls and increased agency with respect to data practices. With the new Data safety section, developers will now have a transparent way to show users if and how they collect, share, and protect user data, before users install an app.

Starting today, we’re rolling out the Data safety form in Google Play Console. We’ve also listened to your feedback, so to provide developers with additional guidance, we’re sharing helpful information in our Help Center, developer guide, Play Academy course, and more. Following our common protocols, we'll begin gradual rollout today and expect to expand access to everyone within a couple of weeks.


How to submit your app information in Play Console

Starting today, you can go to App content in your Play Console and look for a new section called “Data safety.” We recommend that you review the guidance and submit your form early so you can get review feedback and make changes before an unapproved form prevents you from publishing new app updates. Developers have told us that early feedback would help them fill out the form correctly before users see the Data safety section in February 2022. Enforcement on apps without approved forms starts in April 2022.

We understand that completing the form may require a meaningful amount of work, so we built the product and timeline based on developer feedback to make this process as streamlined as possible. Also, developers have asked for a way to more easily import information when they have multiple apps. Therefore, we’ve added an option for developers to import a pre-populated file.


How to get prepared


What your users will see in your app's store listing starting February

Image of app store data privacy and security section. Text reads Developers can showcase key privacy and security practices at a glance

Users will first see the Data safety summary in your store listing. Your app profile will show what data an app collects or shares and highlight safety details, such as whether:

  • The app has security practices, like data encryption in transit
  • The app has committed to follow our Families policy
  • The app has been independently reviewed for conformance with a global security standard
Image of phone data privacy and security. Text reads Developers can share what their app collects and why, so users can download with confidence
GIF of location settings. text reads developers can explain how the data is used

Users can tap the summary to see more details like:

  • What type of data is collected and shared, such as location, contacts, personal information (e.g., name, email address), financial information, and more
  • How the data is used, such as for app functionality, personalization, and more
  • Whether data collection is optional or required in order to use an app

Users have shared that seeing this information helps them understand how some apps may handle their information and feel more trusting about certain apps.


What to expect

Image shows timeline. May '21: pre-announcement. July '21: policy available. October '21: developers can start declaring information in Google Play Console. February '22: users start seeing the section on Google Play. April '22: deadline for developers to declare information.

Timeline dates subject to change.


You can submit your Data safety form in the Play Console now for early review feedback. You are not required to submit an app update in order to submit your safety profile.

In February 2022, we will launch this feature in the Play store. If your information is approved, your store listing will automatically update with your data safety information. If your information has not been submitted or has been rejected, your users will see “No information available.”

image of data privacy and security settings

By April 2022, all your apps must have their Data safety section approved. While we want as many apps as possible to be ready for the February 2022 consumer experience, we know that some developers will need more time to assess their apps and coordinate with multiple teams.

By April 2022, all apps must also provide a privacy policy. Previously, only apps that collected personal and sensitive user data needed to share a privacy policy. Without an approved Data safety section or privacy policy, your new app submissions or app updates may be rejected. Non-compliant apps may face additional enforcement actions in the future.

Thank you for your continued partnership in building this feature alongside us and in making Google Play a safe and trustworthy platform for everyone.

Announcing the Android Basics in Kotlin Course

Posted by Murat Yener, Developer Advocate

image with phone showing the different Android Basics in Kotlin units

We are always looking for ways to make learning Android development accessible for all. In 2020, we announced the launch of Android Basics in Kotlin, a free self-paced programming course. Since then, over 100,000 beginners have completed their first milestone in the course.

Android Basics in Kotlin teaches people with no programming experience how to build simple Android apps. Along the way, students learn the fundamentals of programming and the basics of the Kotlin programming language. Today, we’re excited to share that the final unit has been released, and the full Android Basics in Kotlin course is now available.

This course is organized into units, where each unit is made up of a series of pathways. At the end of each pathway, there is a quiz to assess what you’ve learned so far. If you complete the quiz, you earn a badge that can be saved to your Google Developer Profile.

The course is free for anyone to take. Basic computer literacy and basic math skills are recommended prerequisites, along with access to a computer that can run Android Studio. If you’ve never built an app before but want to learn how, check out the Android Basics in Kotlin course.

Compose for Wear OS now in Developer Preview!

Posted by Jeremy Walker, Developer Relations Engineer

Blue background with illustration of watch

At this year’s Google I/O, we announced we are bringing the best of Jetpack Compose to Wear OS. Well, today, Compose for Wear OS is in Developer Preview after a number of successful alpha releases.

Compose simplifies and accelerates UI development, and the same is true of Compose for Wear OS, with built-in support for Material You to help you create beautiful apps with less code.

In addition, what you’ve learned building mobile apps with Jetpack Compose translates directly to the Wear OS version. Just like mobile, you’re welcome to start testing it out right away, and we want to incorporate your feedback into the early iterations of the libraries before the beta release.

This article will review the main composables we've built and point you towards resources to get started using them.

Let's get started!


Dependencies

Most of the Wear-related changes you make will be at the top architectural layers.

Flow chart showing the top two boxes circled in red. Boxes order reads: Material, Foundation, UI, Runtime

That means many of the dependencies you already use with Jetpack Compose don't change when targeting Wear OS. For example, the UI, Runtime, Compiler, and Animation dependencies will remain the same.

However, you will need to use the proper Wear OS Material, Navigation, and Foundation libraries which are different from the libraries you have used before in your mobile app.

Below is a comparison to help clarify the differences:


Wear OS Dependency (androidx.wear.*) vs. Mobile Dependency (androidx.*):

  • androidx.wear.compose:compose-material (1) instead of androidx.compose.material:material
  • androidx.wear.compose:compose-navigation instead of androidx.navigation:navigation-compose
  • androidx.wear.compose:compose-foundation in addition to androidx.compose.foundation:foundation

(1) Developers can continue to use other material-related libraries, like material ripple and material icons extended, with the Wear Compose Material library.


While it's technically possible to use the mobile dependencies on Wear OS, we always recommend using the wear-specific versions for the best experience.

Note: We will be adding more wear composables with future releases. If you feel any are missing, please let us know.


Here's an example build.gradle file:

// Example project in app/build.gradle file
dependencies {
    // Standard Compose dependencies...

    // Wear specific Compose Dependencies
    // Developer Preview starts with Alpha 07, with new releases coming soon.
    def wear_version = "1.0.0-alpha07"
    implementation "androidx.wear.compose:compose-material:$wear_version"
    implementation "androidx.wear.compose:compose-foundation:$wear_version"

    // For navigation within your app...
    implementation "androidx.wear.compose:compose-navigation:$wear_version"

    // Other dependencies...
}

After you've added the right Wear Material, Foundation, and Navigation dependencies, you are ready to get started.


Composables

Let's explore some composables you can start using today.

As a general rule, many of the Wear composables that are equivalent to the mobile versions can use the same code. The code for styling color, typography, and shapes with MaterialTheme is identical to mobile as well.
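
For instance, theming uses the same MaterialTheme pattern; here is a minimal sketch, assuming hypothetical brand colors (WearAppColors and WearAppTheme are illustrative, not library APIs):

import androidx.compose.runtime.Composable
import androidx.compose.ui.graphics.Color
import androidx.wear.compose.material.Colors
import androidx.wear.compose.material.MaterialTheme

// Hypothetical brand colors; every other slot keeps the library default.
private val WearAppColors = Colors(
    primary = Color(0xFFAECBFA),
    secondary = Color(0xFFFDE293)
)

@Composable
fun WearAppTheme(content: @Composable () -> Unit) {
    // Same theming pattern as mobile Compose, but using the
    // Wear OS MaterialTheme from androidx.wear.compose.material.
    MaterialTheme(colors = WearAppColors, content = content)
}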

For example, to create a Wear OS button your code looks like this:

Button(
    modifier = Modifier.size(ButtonDefaults.LargeButtonSize),
    onClick = { /*...*/ },
    enabled = enabledState
) {
    Icon(
        painter = painterResource(id = R.drawable.ic_airplane),
        contentDescription = "phone",
        modifier = Modifier
            .size(24.dp)
            .wrapContentSize(align = Alignment.Center),
    )
}

The code above is very similar to the mobile version, but the library creates a Wear OS-optimized version of the button: circular in shape and sized by ButtonDefaults to follow the Wear OS Material guidelines.

Blue circle with a black airplane logo in the middle

The library provides Wear OS versions of many familiar composables, and also introduces new composables designed to improve the Wear experience.

We also offer a wear optimized composable for lists, ScalingLazyColumn, which extends LazyColumn and adds scaling and transparency changes to better support round surfaces. You can see in the app below, the content shrinks and fades at the top and bottom of the screen to help readability:

GIF showing watch face scrolling though calendar

If you look at the code, you can see it's the same as LazyColumn, just with a different name.

val scalingLazyListState: ScalingLazyListState = 
    rememberScalingLazyListState()

ScalingLazyColumn(
    modifier = Modifier.fillMaxSize(),
    verticalArrangement = Arrangement.spacedBy(6.dp),
    state = scalingLazyListState,
) {
    items(messageList.size) { index ->
        // items(count) passes an index; look up the message, e.g. messageList[index].
        Card(/*...*/) { /*...*/ }
    }

    item {
        Card(/*...*/) { /*...*/ }
    }
}

Swipe to Dismiss

Wear has its own version of Box, SwipeToDismissBox, which adds support for the swipe-to-dismiss gesture (similar to the back button/gesture on mobile) out of the box.

Here's a simple example of the code:

// Requires state (different from Box).
val state = rememberSwipeToDismissBoxState()

SwipeToDismissBox(
    modifier = Modifier.fillMaxSize(),
    state = state
) { swipeBackgroundScreen ->

    // Can render a different composable in the background during swipe.
    if (swipeBackgroundScreen) {
        /* ... */
        Text(text = "Swiping Back Content")
    } else {
        /* ... */
        Text( text = "Main Content")
    }
}

Here's a more complex example of the behavior:

GIF of watch face showing calendar agenda

Navigation

Finally, we also offer a Navigation composable, SwipeDismissableNavHost, which works just like NavHost on mobile but also supports the swipe-to-dismiss gesture out of the box (it actually uses SwipeToDismissBox under the hood).

Here's an example (code):

GIF showing watch face alarm
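
The linked example shows the full app; as a rough sketch (the route names and screens here are hypothetical):

import androidx.compose.runtime.Composable
import androidx.wear.compose.material.Text
import androidx.wear.compose.navigation.SwipeDismissableNavHost
import androidx.wear.compose.navigation.composable
import androidx.wear.compose.navigation.rememberSwipeDismissableNavController

@Composable
fun AlarmNavGraph() {
    val navController = rememberSwipeDismissableNavController()

    // Works like NavHost on mobile; swiping right pops the back stack.
    SwipeDismissableNavHost(
        navController = navController,
        startDestination = "alarm_list" // hypothetical route names
    ) {
        composable("alarm_list") {
            // A real screen would navigate on click:
            // navController.navigate("alarm_detail")
            Text(text = "Alarms")
        }
        composable("alarm_detail") {
            Text(text = "Alarm detail")
        }
    }
}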

Scaffold

Scaffold provides a layout structure to help you arrange screens in common patterns, just like mobile, but instead of an App Bar, FAB, or Drawer, it supports Wear-specific layouts with top-level components like Time, Vignette, and the scroll/position indicator.

The code is very similar to what you would write on mobile.
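
As a rough sketch, assuming the preview APIs (the list content and Chip labels here are illustrative):

import androidx.compose.runtime.Composable
import androidx.wear.compose.material.Chip
import androidx.wear.compose.material.PositionIndicator
import androidx.wear.compose.material.Scaffold
import androidx.wear.compose.material.ScalingLazyColumn
import androidx.wear.compose.material.Text
import androidx.wear.compose.material.TimeText
import androidx.wear.compose.material.Vignette
import androidx.wear.compose.material.VignettePosition
import androidx.wear.compose.material.rememberScalingLazyListState

@Composable
fun WearApp() {
    val listState = rememberScalingLazyListState()

    Scaffold(
        timeText = { TimeText() }, // time shown at the top of the round screen
        vignette = { Vignette(vignettePosition = VignettePosition.TopAndBottom) },
        positionIndicator = { PositionIndicator(scalingLazyListState = listState) }
    ) {
        ScalingLazyColumn(state = listState) {
            items(10) { index ->
                Chip(onClick = { /* ... */ }, label = { Text("Item $index") })
            }
        }
    }
}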


Get Started

We're excited to bring Jetpack Compose to Wear OS and make watch development faster and easier. To dive right in and create an app, check out our quick start guide. To see working examples (both simple and complex), have a look at our sample repo.

The Developer Preview is your opportunity to influence the APIs, so please share your feedback here or join the Slack #compose-wear channel and let us know there!

Answering your top questions on Android Game Development Kit

Posted by Wayne Lu, Technical Lead Manager, Android DevRel

hand holding a phone with game and chat

We launched the Android Game Development Kit (AGDK) in July, and have collected some top questions from developers, ranging from AGDK libraries and tools to optimizing memory in Android and implementing graphics.


AGDK and game engines

Firstly, we’ve heard questions from new and aspiring game developers on how to use our set of AGDK libraries and tools. We have the following recommendations depending on your setup:

  1. For game developers using popular game engines such as Defold, Godot, Unity, or Unreal: you can follow our guides to learn how to develop apps on Android. Using these game engines lets you focus on building gameplay instead of the entire technology stack.
  2. If you're using Unreal Engine and targeting multiple platforms such as PC or consoles, Android Game Development Extension (AGDE) may be a great addition to your workflow.
  3. We also support developers who want to customize and write their own game engine; you can learn more about this with our C or C++ documentation.

After choosing your game engine and workflow, you should look into our tools, such as the Android Studio Profiler to inspect your game, Android GPU Inspector to profile graphics, and Android Performance Tuner to optimize frame rates and loading times.


Game Mode API and Interventions

Following this, we’ve received questions on developing for Android 12. While you don’t have to do anything special for your game to run on Android 12, we’ve introduced the Game Mode API and interventions to help players customize their gaming experience.

  1. Read more about the Game Mode API, and find out how to optimize your game for the best performance or longest battery life when the user selects the corresponding game mode; a short query sketch follows this list.
  2. Learn about the Game Mode interventions: these are set by original equipment manufacturers (OEMs) to improve the performance of games that are no longer being updated by developers. For example, a WindowManager backbuffer resize to reduce a device's GPU load.
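
As an illustrative sketch (the function and the reactions to each mode are hypothetical), a game can query the player's selected mode through GameManager on Android 12:

import android.app.GameManager
import android.content.Context
import android.os.Build

// Query the game mode the player selected so the game can trade
// visual quality for battery life, or vice versa.
fun applyGameMode(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
        val gameManager = context.getSystemService(GameManager::class.java)
        when (gameManager?.gameMode) {
            GameManager.GAME_MODE_PERFORMANCE -> { /* raise frame-rate target */ }
            GameManager.GAME_MODE_BATTERY -> { /* lower resolution or cap frame rate */ }
            else -> { /* GAME_MODE_STANDARD or GAME_MODE_UNSUPPORTED: use defaults */ }
        }
    }
}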

Memory Access in Android

Secondly, you’ve asked us how memory access in Android game development compares to Windows. In short, here are a few pointers:

  1. Games need to share memory with the system. Some devices have less available memory than others, so testing is needed to check for low memory issues on a range of supported devices. Testing should be done on devices with typical apps that a user would have installed (i.e. not a clean device).
  2. The amount of memory a game can allocate depends on various factors, such as the amount of physical memory, the number of dirty pages, and the total amount of zRAM (for compressed swapping).
  3. Symptoms of low memory can include onTrimMemory() calls, memory thrashing, or termination of the game by the Low Memory Killer. Use bugreport logs to check whether the game was killed by the Low Memory Killer, or, on Android 11 and later, check ApplicationExitInfo to see if the game was terminated because of REASON_LOW_MEMORY (see the sketch after this list).
  4. Avoid memory thrashing: this occurs when memory is low, but not yet low enough for the Low Memory Killer to terminate the game. You can detect this via system tracing, and you should reduce the overall memory footprint to avoid the issue.
  5. Use the Android Profiler and other tools to inspect your memory usage.
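
For point 3, here is a minimal sketch of the ApplicationExitInfo check on Android 11 and later; the tag and handling are illustrative:

import android.app.ActivityManager
import android.app.ApplicationExitInfo
import android.content.Context
import android.os.Build
import android.util.Log

fun logLastExitReason(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
        val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
        // null package = this app; pid 0 = any; maxNum 1 = most recent exit only.
        val lastExit = am.getHistoricalProcessExitReasons(null, 0, 1).firstOrNull() ?: return
        if (lastExit.reason == ApplicationExitInfo.REASON_LOW_MEMORY) {
            Log.w("GameExit", "Previous run was terminated due to low memory")
        }
    }
}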

Implementing Graphics in Android

Thirdly, we’ve received questions about implementing graphics in Android. You have two options, the OpenGL ES and Vulkan graphics APIs:

  1. Learn how to configure OpenGL ES graphics for your C++ game engine by initializing variables, rendering with the game loop, scenes, and objects; a simplified render-loop sketch follows this list.
  2. Read our Vulkan guides to learn how to draw a cube, compile shaders, setup validation layers, and other best practices.
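
For context, here is what that loop structure looks like from Kotlin with GLSurfaceView; this is a simplified sketch rather than the C++ setup the guide covers, where a native engine would drive EGL directly:

import android.opengl.GLSurfaceView
import javax.microedition.khronos.egl.EGLConfig
import javax.microedition.khronos.opengles.GL10

class GameRenderer : GLSurfaceView.Renderer {
    override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
        // One-time GL state setup: compile shaders, create buffers.
    }

    override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
        // Update the viewport and projection when the surface resizes.
    }

    override fun onDrawFrame(gl: GL10?) {
        // Per-frame game loop: update the scene and issue draw calls.
    }
}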

Check out the Q&A video to view the top questions on AGDK and visit g.co/android/AGDK for our latest resources for Android game development.

Mindful architecture: Headspace’s refactor to scale

Posted by Manuel Vicente Vivo, Android Developer Relations Engineer

Contributors: Mauricio Vergara, Product Marketing Manager, Developer Marketing, Marialaura Garcia, Associate Product Marketing Manager, Developer Marketing

Headspace Technical case study graphic


Executive Summary

Headspace was ready to launch new wellness and fitness features, but their app architecture wasn’t. They spent eight months refactoring to a Model-View-ViewModel architecture, rewriting in Kotlin, and improving test coverage from 15% to 80%. The improved app experience increased MAU by 15% and increased review scores from 3.5 to 4.7 between Q2 and Q4 of 2020. To learn more about how Headspace’s focus on Android Excellence impacted their business, read the accompanying case study here.


Introduction

Headspace has grown into a leader in mindfulness by creating an app which helps millions of people to meditate daily. Mindfulness goes far beyond meditation; it connects to all aspects of a person’s life. That idea prompted the most recent stage in Headspace’s evolution. In 2019, they decided to expand beyond meditation and add new fitness and wellness features to their Android app. Headspace realized that they would need a cross-functional team of engineers and designers to deliver on the new product vision and create an excellent app experience for users. It was an exciting new phase for the company: their design team started the process by creating prototypes for the new experience, with fresh new designs.

With designs in hand, the only thing stopping Headspace from expanding their app and broadening users’ horizons was their existing Android software architecture. It wasn’t well structured to support all these new features. Headspace’s development team made the case to their leadership that building on the existing code would take longer than a complete rewrite. After sharing the vision and getting everyone on board, the team set out on a collective journey to write a new Android app in pursuit of app excellence.


The Android Rewrite

Headspace’s Android development team first needed a convenient way to standardize how they built and implemented features. "Before we wrote a single line of code, our team spent a week evaluating some important implementation choices for the foundation of our app,” Aram Sheroyan, an Android developer at Headspace, explains:

“This was crucial pre-work so that we were all on the same page when we actually started to build."

Immersing themselves in Google’s literature on the latest best practices for Android development and app architecture, the team found a solution they could all confidently agree on. Google recommended refactoring their app using a new base architecture: Model-View-ViewModel (MVVM). MVVM is a widely supported software pattern that is progressively becoming an industry standard, because it allows developers to create a clear separation of concerns, helping streamline an app’s architecture. “It allowed us to nicely separate our view logic,” Sheroyan explained.

With MVVM as the base architecture, they identified Android’s Jetpack libraries, including Dagger and Hilt for dependency injection. The new tools made boilerplate code smaller and easier to structure, not to mention more predictable and efficient. Combined with MVVM, the libraries provided them with a more detailed understanding of how new features should be implemented. The team was also able to improve quality in passing arguments between functions. The app had previously suffered from crashes due to NullPointerException errors and incorrect arguments. Adopting the safeArgs library helped to eliminate errors when passing arguments.
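
To illustrate the pattern (a generic sketch, not Headspace's actual code; MeditationRepository and its binding are hypothetical):

import androidx.lifecycle.ViewModel
import dagger.hilt.android.lifecycle.HiltViewModel
import javax.inject.Inject

// Hypothetical data source, bound elsewhere via a Hilt module.
interface MeditationRepository {
    fun todaysMeditation(): String
}

// The ViewModel layer of MVVM: Hilt constructs it and injects the
// repository, keeping view logic separate from data logic.
@HiltViewModel
class MeditationViewModel @Inject constructor(
    private val repository: MeditationRepository
) : ViewModel() {
    val title: String get() = repository.todaysMeditation()
}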

In rewriting the app, the team further made sure to follow the Repository pattern to support a clearer separation of concerns. For example, instead of having one huge class that saves data in shared preferences, they decided that each repository’s local data source should handle the respective logic. This separation of data sources enables the team to test and reproduce business code outside of the live app for unit testing without having to change production code. Separating concerns in this way made the app more stable and the code more modular.

The team also took the opportunity to fully translate their app into the Kotlin programming language, which offered useful helper functions, sealed classes, and extension functions. Removing legacy code and replacing the mix of Java and Kotlin with pure Kotlin code decreased build time for the app. The new architecture also made it easier to write tests and allowed them to increase test coverage from around 15% to more than 80%. This resulted in faster deployments, higher quality code, and fewer crashes.

To capture the new user experience in the app’s reviews, Headspace implemented the Google Play In-App Review API. The new API allowed them to encourage all users to share reviews from within the app. The implementation increased review scores by 24%, and — as store listing reviews are tied to visibility on Google Play — helped draw attention to the app’s recent improvements.


Achieving App Excellence

The rewrite took eight months, and with it came new confidence in the code. Now that the codebase had over 80% unit test coverage, they could develop and test new features with confidence rather than worry. The new architecture made this possible thanks to its improved logic separation and more reusable code, making it easier to plan and implement new features.

The build time for the app decreased dramatically and development velocity picked up. The team’s new clarity around best practices and architecture also reduced friction for onboarding new developers, since it was now based on Android industry standards. They could communicate more clearly with potential candidates during the interview process, as they now had a shared architectural language for discussing problem sets and potential solutions.

With velocity came faster implementation of features and an improved retention flow. They could now optimize their upsell process, which led to a 20% increase in the number of paid Android subscribers relative to other platforms where the app is published. The combination of a new app experience and the implementation of the new In-App Review API led to their review scores improving from 3.5 to 4.7 stars between Q2 and Q4 of 2020! Overall, the new focus on Android App Excellence and the improved ratings earned Headspace a 15% increase in MAU globally.

These were just a few of the payoffs from the significant investment Headspace made in app excellence. Their laser focus on quality paid off across the board, enabling them to continue to grow their community of users and lay a solid foundation for the future evolution of their app experience.


Get your own team on board

If you’re interested in getting your team on board for your own App Excellence journey, check out our condensed case study for product owners and executives linked here. To learn more about how consistent, intuitive app user experiences can grow your business, visit the App Excellence landing page.