Tag Archives: mobile

Refine emails faster with updates to Help me write in Gmail

What’s changing 

Building upon our popular Help me write feature in Gmail and the recent launch of the summarization feature in the Gmail mobile app, we’re excited to introduce two new Gemini in Gmail updates to help you draft emails even faster: 
  • A new option for Help me write that polishes email drafts on web and mobile devices 
  • Help me write and Refine my draft shortcuts on Android and iOS devices 
When using Gemini to refine emails, users can choose from the following options: Formalize, Elaborate, and Shorten. We recently added the Polish option to web and mobile, which can effortlessly refine your emails, saving you time. For example, if you enter rough notes into a draft, Gemini can turn the content into a polished, complete draft, ready for you to review, in one click. 
polish draft using Gemini in Gmail

On mobile, when an email draft is empty, the “Help me write” shortcut now appears in the body of the email; when selected, it opens the full Help me write experience. When 12 or more words are present in an email draft, the “Refine my draft” shortcut is shown below the email content to indicate that there are options available to Polish, Formalize, Elaborate, or Shorten your draft, or Write a new draft. The menu can be triggered simply by swiping right on “Refine my draft”. 

refine my email draft on Gmail using Gemini

Getting started 

Rollout pace 

  • The option for Help me write to polish email drafts is available now on web, Android and iOS. 
  • The Help me write and Refine my draft shortcuts are available now on Android and iOS. 

Availability 

Available for Google Workspace customers with: 
  • Gemini Business and Enterprise add-on 
  • Gemini Education and Education Premium add-on 
  • Google One AI Premium 

Resources 

Enhancing your productivity on Android devices with new features in Gmail and Google Chat apps

What’s changing

We’re introducing numerous improvements across the Gmail and Google Chat apps on Android foldables and tablets to enhance your productivity when using these devices. 

In the Gmail app, you’ll notice a new formatting bar located on the email compose screen. This now includes additional formatting options like the ability to change the font type and make a bulleted list.
additional formatting options in bar



Next, you’ll be able to view a list of helpful keyboard shortcuts in the Gmail app and in the Chat app by pressing “?” when you plug an external keyboard into your Android device. 
list of helpful keyboard shortcuts in the Gmail app and in the Chat app

Lastly, we’re enabling Smart Compose on Android tablets and foldables, a feature originally introduced on Gmail web that intelligently autocompletes your emails. Similar to the mobile experience, Smart Compose suggests text as you type that can be accepted by swiping across the gray text or pressing tab on a physical keyboard. 
Smart Compose on Android tablets and foldables



Getting started 

Rollout pace 

Availability 

  • Available to all Google Workspace customers, Workspace Individual Subscribers, and users with personal Google accounts 

Resources 

Prepare your app for the new Samsung Galaxy foldables and watches!

Posted by Maru Ahues Bouza – Product Management Director, Android Developer

Yesterday’s Galaxy Unpacked event from Samsung debuted the latest in foldables, wearables, and more! The event introduced the Galaxy Z Fold6 and Z Flip6 and the Galaxy Watch7 and Watch Ultra - and it has never been easier to build apps that look great across all these screen sizes and types. To help you get your apps ready for the latest Android devices, we’re sharing how you can prepare your app for Wear OS 5 and how to build adaptive apps that scale across mobile, tablets, foldables and more!

Get your app ready for Wear OS 5

Samsung’s new Galaxy Watch lineup, including the Watch Ultra and Watch7, will be the first smartwatches powered by Wear OS 5, the latest version of the Wear OS platform. As Wear OS 5 is based on Android 14, this new platform version brings with it a number of developer-facing changes. To ensure your app is ready for the next generation of devices, start by testing your app on the Wear OS 5 Emulator!

Galaxy Watch Ultra (left) and Galaxy Watch7 (right)

Wear OS 5 brings the next iteration of the Watch Face Format, providing more features to create expressive, efficient and individual watch faces for your users. New watches launched with Wear OS 5 will only support third-party watch faces built with Watch Face Format, prioritizing the user experience. For more information on watch face compatibility, see this Help Center article.

As we gather momentum behind the Watch Face Format, we’re changing requirements for publishing watch faces on Google Play. Check out the watch face page for the latest guidance.

Build adaptive to scale across screen sizes and types

The latest in large screens and foldables are here, with the new Galaxy Z Fold6 and Z Flip6, so there is even more reason to ensure your app looks great across whatever screen size or folded state your users are engaging with. The best way to do that is to make your app adaptive - meaning your users get an optimal experience on all their devices. By building an adaptive app, you scale across mobile, tablets, foldables, desktop and more.

Galaxy Z Fold6

A great place to start when building adaptive apps is with the new Compose adaptive layout libraries. These libraries are designed to help you make your UI look good across window sizes. From navigation UI to list/detail and supporting pane layouts, we're providing composables to make building an adaptive app easier than ever.

Additionally, window size classes are the best way to scale your UI, with opinionated breakpoints that help you design, develop, and test responsive/adaptive layouts across various window sizes. Window size classes enable you to change your app layout as the display space available to your app changes, for example, when a device folds or unfolds, the device orientation changes, or the app window is resized in multi‑window mode.
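
For illustration, here is a minimal Compose sketch (not official sample code; it assumes the material3 window size class artifact, and the two screen composables are hypothetical placeholders) that switches layouts based on the width size class:

import androidx.compose.material3.windowsizeclass.WindowSizeClass
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.runtime.Composable

// Picks a layout based on the width size class of the current window.
@Composable
fun AdaptiveHome(windowSizeClass: WindowSizeClass) {
    when (windowSizeClass.widthSizeClass) {
        // Phones and folded foldables: single-pane layout.
        WindowWidthSizeClass.Compact -> CompactHome()
        // Unfolded foldables, tablets, and resized windows: two-pane layout.
        else -> ExpandedHome()
    }
}

// Hypothetical placeholders standing in for your real screens.
@Composable
fun CompactHome() { /* single-pane UI */ }

@Composable
fun ExpandedHome() { /* list/detail UI */ }

In an activity, the window size class is typically obtained with calculateWindowSizeClass(this) inside setContent, so the layout updates automatically when the device folds, unfolds, or the window is resized.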

Discover everything you need to know about building adaptive apps with the adaptive apps documentation; it will be continually updated with the latest and greatest tools and APIs to enable you to scale across screens!

Get started with Adaptive Apps and Wear OS

With these new devices, from the smallest to the largest, there are opportunities to build apps that excite your users on all their favorite Android screens. Apps like SoundCloud, Peloton, and more are already building experiences that scale across their users' favorite screens!

Get building for Wear OS today by checking out the Wear OS developer site and visiting the Wear OS gallery for inspiration. And scale your app across even more screens by building adaptive with the latest from Compose!

Use the Apple Volume Purchasing Program (VPP) to distribute apps for device enrollment and company owned devices

What’s changing

In November 2023, we announced the ability to purchase and distribute iOS apps to user-enrolled devices through Apple’s Volume Purchase Program. Beginning today, we’re expanding this functionality to include device enrollment and company-owned iOS devices.




Who’s impacted

Admins and end users


Why you’d use it 

Admins can use the Volume Purchasing Program to efficiently curate a suite of work-related apps—both free and paid—for their team. This streamlined process not only simplifies the deployment of essential business apps but also ensures that employees have access to the right apps they need to be productive and efficient, all within the secure perimeter of our MDM platform. To further streamline the enrollment and app distribution process, we’re automatically installing mandatory apps during enrollment for company-owned devices. This latest update makes it easier for admins to deploy apps across various device types in their organization.


Additional details

Please note that Apple ID sign-in won't be needed in the company-owned iOS devices flow after configuring apps with VPP.


The automatic installation of mandatory apps during onboarding applies to all enrollment types, and devices that violate mandatory app compliance will be blocked immediately until the required app(s) are installed. 


Getting started


Rollout pace


Availability

Available to Google Workspace:
  • Business Plus
  • Enterprise Essentials and Enterprise Essentials Plus
  • Enterprise Standard and Plus
  • Education Standard and Plus, and the Endpoint Education Upgrade add-on
  • Frontline Starter and Standard
  • Cloud Identity Premium

Home APIs: Enabling all developers to build for the home

Posted by Matt Van Der Staay – Engineering Director, Google Home


This blog was originally posted on Google for Developers.

As the saying goes, “home is where the heart is.” It’s where we spend the most time; it’s your space to be comfortable, where you can truly relax, connect and make memories. Our homes have gotten more helpful with connected products, such as a smart door lock or Nest thermostat. Despite this momentum, it's still too hard to develop for the home.

We are changing all of that. Building on the foundation of Matter, we've re-envisioned Google Home as a platform for developers - all developers, not just those that build smart home devices. Google Home is the destination to create innovative experiences for the home.

Today, we’re announcing the Home APIs and Home runtime. With the Home APIs, app developers can access over 600M devices, Google’s hubs and Matter infrastructure, and an automation engine powered by Google intelligence - all available on both Android and iOS. Here are five things to know:

1. Any developer can now build an experience that works with Google Home.

The home offers a unique opportunity for developers to create seamless and deeper relationships with users, but developing for the smart home is harder than it needs to be. Building for the smart home means integrating with many device makers, operating hubs and Matter fabrics, and running automation engines driven by intelligent signals.

Whether you build an app specifically for smart home devices or an app that has nothing to do with the smart home – like a fitness or delivery app – the Home APIs let you create delightful and differentiated experiences for your customers on both Android and iOS.

2. Access 600 million connected devices from your app

The new Device and Structure APIs let you access over 600M devices with a single integration. Control and manage the devices already connected to Google Home, such as Matter light bulbs or the Nest Learning Thermostat, whether at home or on the go. You can build a complex app to manage any aspect of a smart home, or simply integrate with a smart device to solve pain points - like turning on the lights automatically before the food delivery driver arrives.

The Home APIs have been designed with privacy and security in mind, leveraging industry-standard best practices. Users are always in control and need to explicitly grant access to their structure and smart home devices before an app can access them. And they can easily revoke access at any time from the Google Home app. To ensure quality experiences, developers who adopt the Home APIs must pass certification before launching their app.

The Device and Structure APIs provide all of the foundational building blocks to create a smart home experience.

The new Commissioning API lets you set up Matter devices in your app, in the Home app, or directly with Fast Pair on Android, without the need to create a new Matter fabric, saving you time and resources.

The Commissioning API provides all of the customer experience to set up a Matter device.

3. Automate with Google’s unique intelligence about the home

As people add more devices to their home, it becomes challenging to make them all work in unison. Over the past year, we have added new signals and allowed those with advanced skills to script their home using generative AI. With the new Automation API, you can create and manage home automations in your app, using Google Home’s new automation engine and intelligent signals.

Automations can be triggered by device signals from the home such as occupancy events from motion sensors, mode changes from appliances, or media events from a smart TV. For example, Yale is using the Automation API to turn on the foyer lights when the front door is unlocked at night. Automations can also use Google’s intelligence signals like home and away, which fuses together signals from devices across the home to create a more accurate presence detection.

The Automations API provides all of the tools for creating and managing automations.

4. Expanding hubs for Google Home to the TV

A hub for Google Home is a device that enables remote access and local control of a user’s Matter devices over Wi-Fi and Thread. The Home APIs use the network of hubs for Google Home to control Matter devices whether the user is at home or away.

Later this year, we’re upgrading our hubs and introducing the Home runtime, so other devices, including Chromecast with Google TV, select panel TVs with Google TV running Android 14 or higher, and eligible LG brand TVs, will also become hubs for Google Home.

Home APIs make controlling lights and switches locally over a hub feel snappy. We are adopting these APIs in the Google Home app, and our early tests show device control operating up to three times faster than before. Developers using the Home APIs can see faster and more responsive local control in their apps as well.

5. Delightful new experiences from a diverse set of apps

We are working with a broad range of brands across lighting, security, automotive, energy, and entertainment to build seamless smart home experiences that help get more usefulness from the smart home.

Partners from every major smart home category are building on the Home APIs.

Here’s how some of our first partners are using the Home APIs:

ADT’s new Trusted Neighbor will revolutionize the universal practice of “giving a trusted neighbor a key to your home,” enabling users to easily grant secure and temporary access to their homes for neighbors, friends or helpers.

ADT Trusted Neighbor Program

LG will enable millions of TVs to be hubs for Google Home, allowing seamless control of devices from any app built using Home APIs. You will also be able to use the ThinQ mobile app or the Home Hub on the LG TV to control devices.

Home APIs on LG TVs for Google Home

Eve Systems will bring their experience to Android for the first time and build helpful automations like lowering the blinds when the temperature drops at night.

Eve Systems using Home APIs

Google Pixel is bridging the digital and physical worlds so that bedtime mode can not only dim your screen, but can also automatically dim your bedroom lights, lower the shades and lock the front door.

Google Pixel using Home APIs

And this is just the beginning. With the Home APIs, a workout app could keep you cool while you are burning calories by turning on the fan before you begin working out. Or a vacation rental app could make sure that the lights are on and the temperature is just right when a guest arrives. With the Home APIs, now anyone can bridge digital experiences and physical devices.


Sign Up to Build with the Home APIs

Do you have a great idea or feature that you'd like to build into your app with the Home APIs? Tell us about it and join the waitlist for access to the Home APIs or Home runtime. We will expand access on a rolling basis and the first apps built on the Home APIs will come to the Play Store and App Store starting this fall. Learn more about what’s included in the Home APIs from our I/O session on the Google Home Developer Center.

Android Support for Kotlin Multiplatform to Share Business Logic Across Mobile, Web, Server, and Desktop Platforms

Posted by Maru Ahues Bouza – Director, Product Management, and Jeffrey van Gogh – Director, Engineering

Traditionally, developers must either write code individually for each platform they want to target, or make a number of compromises in order to reuse code across platforms. Android has been actively supporting Kotlin since 2017, and today we are excited to announce we are supporting Kotlin Multiplatform on Android, which enables sharing code across mobile, web, server, and desktop platforms. This helps increase productivity for developers, and fits great with Android's Kotlin-first approach, resulting in higher quality Android apps. Our focus is to support sharing business logic (the parts that are most agnostic to the user interfaces) because we've seen Android developers get the most value in not having to maintain duplicate copies of this code.
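
As a minimal sketch of how that sharing typically looks (the names below are hypothetical, not taken from any Google codebase), the business logic lives in commonMain while thin platform pieces are supplied per target through Kotlin's expect/actual mechanism:

// commonMain: shared business logic, compiled for Android, iOS, web, server, and desktop.
data class Invoice(val amountCents: Long, val issuedAtMillis: Long)

// Small platform-specific piece, implemented once per target.
expect fun currentTimeMillis(): Long

// Pure business logic shared unchanged by every platform.
fun isOverdue(invoice: Invoice, gracePeriodMillis: Long): Boolean =
    currentTimeMillis() - invoice.issuedAtMillis > gracePeriodMillis

// androidMain / jvmMain would provide:
// actual fun currentTimeMillis(): Long = System.currentTimeMillis()

// iosMain, calling into Foundation through Kotlin/Native interop, would provide:
// actual fun currentTimeMillis(): Long =
//     (platform.Foundation.NSDate().timeIntervalSince1970 * 1000).toLong()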

Kotlin Multiplatform (KMP) has been a long-standing investment for the team behind Google Workspace, allowing for flexibility and speed in delivering valuable cross-platform experiences. The Google Workspace team is enthusiastic about KMP's potential as the direction for its multi-platform architecture investment, confident in its ability to meet performance expectations for various workloads.

The initial step in this journey is the rollout of the Google Docs app for Android, iOS, and Web, which leverages KMP for shared business logic, validating its readiness for production use at Google scale. The Google Workspace team is thrilled to continue exploring the possibilities of KMP across its product suite, aiming to enhance productivity and deliver seamless experiences to users on all platforms.

We see a lot of companies successfully leveraging Kotlin Multiplatform for cross-platform development of their apps; learn how they apply different code-sharing strategies here.

Kotlin Multiplatform, developed by JetBrains, provides a novel approach to sharing code across platforms by compiling Kotlin to platform-native binaries. Kotlin is able to bring the full, modern, memory-managed language to native platforms, enabling native interoperability and incremental adoption. Kotlin on Android, combined with Kotlin Multiplatform on other platforms, provides a great way to increase productivity and quality without compromising on performance or interoperability.

Architecture overview for Kotlin Multiplatform (KMP)

Current Status of Support

Many widely-used libraries offer built-in support for Kotlin Multiplatform, streamlining your cross-platform development experience. These libraries work seamlessly together. For example, Ktor simplifies networking tasks by handling REST service consumption, while kotlinx.serialization converts data to formats like JSON, and Okio manages essential file I/O. Additionally, SKIE facilitates the use of modern types and coroutines on iOS, and CocoaPods integration enables the use of iOS-specific dependencies.
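
As a small, hedged illustration of shared code built on these libraries (the payload shape is made up for the example), the snippet below parses JSON in commonMain with kotlinx.serialization; the input string could come from a Ktor response body or a file read with Okio:

import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json

// Shared model and parsing logic; runs unchanged on every KMP target.
@Serializable
data class Profile(val id: String, val displayName: String)

private val json = Json { ignoreUnknownKeys = true }

fun parseProfile(payload: String): Profile =
    json.decodeFromString(Profile.serializer(), payload)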

We've worked with JetBrains and the Kotlin developer community to add Kotlin Multiplatform support to a number of Jetpack libraries; in some cases we provide the iOS platform targets, while in others JetBrains and the community provide the multiplatform distributions.

Today, the Annotations, Collections, and DataStore libraries all have support for Kotlin Multiplatform in stable versions. We are also adding support to validate binary compatibility for the iOS platform targets, bringing them on a par with the quality standards for Android. In addition to the libraries above, we've also begun working on Kotlin Multiplatform support for Room, Lifecycle, and ViewModels with alpha versions now available. To better understand which classes and functions are available where, the library reference documentation now indicates "common" and platform support.

Indication of Common, Native and Android support in documentation

Android engineers have collaborated with JetBrains on the Kotlin compiler to improve runtime performance in Kotlin/Native (for iOS and native desktop operating systems), showing 18% runtime performance improvements in compiler benchmarks. In addition, the Android team contributed build-time performance improvements of up to 2x speedups for the Kotlin/Native compiler.

The Android Gradle Plugin now has official support for Kotlin Multiplatform, enabling a concise build definition for setting up Android as a platform target for shared code as shown below:

plugins {
    id("org.jetbrains.kotlin.multiplatform")
    id("com.android.library")
}

kotlin {
    // Android as a platform target for the shared code.
    androidTarget {
        compilations.all {
            kotlinOptions {
                jvmTarget = "11"
            }
        }
    }
    // iOS targets, each packaged as a static framework named "Shared".
    listOf(
        iosX64(),
        iosArm64(),
        iosSimulatorArm64()
    ).forEach { iosTarget ->
        iosTarget.binaries.framework {
            baseName = "Shared"
            isStatic = true
        }
    }
    sourceSets {
        commonMain.dependencies {
            // put your Multiplatform dependencies here
        }
    }
}
KMP Support in the Android Gradle Plugin DSL

As Android Studio is based on the IntelliJ Platform from JetBrains, it inherits support for Kotlin Multiplatform code editing and many other development features. Other Android development tools like Android Lint and Kotlin Symbol Processing (KSP) are also beginning to add more Kotlin Multiplatform support as well.

Google Chrome now has official support for WasmGC which is used by Kotlin Multiplatform's WebAssembly platform target to enable code sharing with the browser in an efficient and performant way.

Latest details on these projects are available on the updated Android Kotlin Multiplatform page.

Future Areas of Work

We've heard from many Android developers and Google engineering teams that they want expanded support for Kotlin Multiplatform so they can more easily share code with other platforms. Android plans to continue collaborating with JetBrains, Google engineering teams, and the community on a variety of projects, including:

    • Expanding and stabilizing Jetpack libraries with Kotlin Multiplatform support
    • Wasm platform target support in Jetpack libraries
    • Kotlin/Native build performance
    • Kotlin/Native debugging
    • Expanding Kotlin Multiplatform support in Android Studio

Learn More and Try It Out

Sharing code with Kotlin Multiplatform between Android and other platforms enables higher developer productivity and quality so we hope you will give it a try! You can use the Kotlin Multiplatform wizard to create a new KMP project. Learn more in the documentation.

Alternatively, explore one of these sample projects showcasing how to use some of the Jetpack libraries with Kotlin Multiplatform:

If there are additional areas you would like Android to work on, let us know, and also be a part of our vibrant Android Developer community on LinkedIn, Medium, YouTube, and X.

Get ready for Google I/O: Program lineup revealed

Posted by Timothy Jordan – Director, Developer Relations and Open Source

Developers, get ready! Google I/O is just around the corner, kicking off live from Mountain View with the Google keynote on Tuesday, May 14 at 10 am PT, followed by the Developer keynote at 1:30 pm PT.

But the learning doesn’t stop there. Mark your calendars for May 16 at 8 am PT when we’ll be releasing over 150 technical deep dives, demos, codelabs, and more on-demand. If you register online, you can start building your 'My I/O' agenda today.

Here's a sneak peek at some of the exciting highlights from the I/O program preview:

Unlocking the power of AI: The Gemini era unlocks a new frontier for developers. We'll showcase the newest features in the Gemini API, Google AI Studio, and Gemma. Discover cutting-edge pre-trained models from Kaggle, and delve into Google's open-source libraries like Keras and JAX.

Android: A developer's playground: Get the latest updates on everything Android! We'll cover groundbreaking advancements in generative AI, the highly anticipated Android 15, innovative form factors, and the latest tools and libraries in the Jetpack and Compose ecosystem. Plus, discover how to optimize performance and streamline your development workflow.

Building beautiful and functional web experiences: We’ll cover Baseline updates, a revolutionary tool that empowers developers with a clear understanding of web features and API interoperability. With Baseline, you'll have access to real-time information on popular developer resource sites like MDN, Can I Use, and web.dev.

The future of ChromeOS: Get a glimpse into the exciting future of ChromeOS. We'll discuss the developer-centric investments we're making in distribution, app capabilities, and operating system integrations. Discover how our partners are shaping the future of Chromebooks and delivering world-class user experiences.

This is just a taste of what's in store at Google I/O. Stay tuned for more updates, and get ready to be a part of the future.

Don't forget to mark your calendars and register for Google I/O today!


Configure managed iOS apps for your users using Google Mobile Device Management

What’s changing 

Directly from the Admin console, admins can use Google Mobile Device Management to remotely set custom configurations for managed iOS apps on end-user devices in their enterprise. Managed configurations are applied using XML property lists, and the same app can be configured differently across different domains, groups, or organizational units (OUs).

Creating the app configuration using XML information


Applying the configuration



Who’s impacted

Admins and end users


Why it’s important

Prior to this update, mobile app configuration was only available for managed Android devices. Beginning today, Workspace admins can use Managed App Configuration to set custom app configurations and deploy them to managed iOS devices across their organization. This gives admins the flexibility they need to create safety parameters that align with the various needs of users across their organization.




Getting started


Rollout pace


Availability


How to effectively A/B test power consumption for your Android app’s features

Posted by Mayank Jain - Product Manager, and Yasser Dbeis - Software Engineer; Android Studio

Android developers have been telling us they're looking for tools to help optimize power consumption for different devices on Android.

The new Power Profiler in Android Studio helps Android developers by showing the power consumption happening on the device as the app is being used. Understanding power consumption across Android devices can help Android developers identify and fix power consumption issues in their apps. They can run A/B tests to compare the power consumption of different algorithms, features, or even different versions of their app.

The new Power Profiler in Android Studio

Apps that are optimized for lower power consumption lead to improved battery life and thermal performance of the device, which means an improved user experience on Android.

This power consumption data is made available through the On Device Power Monitor (ODPM) on Pixel 6+ devices, segmented by sub-system into what are called “Power Rails”. See Profileable power rails for a list of supported sub-systems.

The Power Profiler can help app developers detect problems in several areas:

    • Detecting unoptimized code that is using more power than necessary.
    • Finding background tasks that are causing unnecessary CPU usage.
    • Identifying wakelocks that are keeping the device awake when they are not needed.

Once a power consumption issue has been identified, the Power Profiler can be used when testing different hypotheses to understand why the app could be consuming excessive power. For example, if the issue is caused by background tasks, the developer can try to stop the tasks from running unnecessarily or for longer periods. And if the issue is caused by wakelocks, the developer can try to release the wakelocks when the resource is not in use or use them more judiciously. Then compare the power consumption before/after the change using the Power Profiler.

In this blog post, we showcase a technique which uses A/B testing to understand how your app’s power consumption characteristics might change with different versions of the same feature - and how you can effectively measure them.

A real-life example of how the Power Profiler can be used to improve the battery life of an app.

Let’s assume you have an app through which users can purchase their favorite movies.

Sample app to demonstrate A/B testing for measuring power consumption
Video (c) copyright Blender Foundation | www.bigbuckbunny.org

As your app becomes popular and is used by more users, you realize that a high-quality 4K video takes a long time to load every time the app is started. Because of its large size, you want to understand its impact on power consumption on the device.

Originally, this video was provided in 4K with the best of intentions: to showcase the best possible movie highlights to your customers.

This makes you think…

    • Do you really need a 4K video banner on the home screen?
    • Does it make sense to load a 4K video over the network every time your app is run?
    • How will the power consumption characteristics of your app change if you replace the 4K video with something of lower quality (while still preserving the vivid look & feel of the video)?

This is a perfect scenario to perform an A/B test for power consumption.

With an A/B test, you can test two slightly different variations of the video banner feature and choose the one with the better power consumption characteristics.

Scenario A: Run the app with the 4K video banner on screen & measure power consumption

Scenario B: Run the app with the lower resolution video banner on screen & measure power consumption

A/B Test setup

Let's take a moment and set up our Android Studio profiler to run this A/B test. We need to start the app, attach the CPU profiler to it, and trigger a system trace (where the Power Profiler will be shown).

Step 1

Create a custom “Run configuration” by clicking the 3-dot menu > Edit

Custom run configuration

Step 2

Then select the “Profiling” tab and ensure that “Start this recording on startup” and CPU Activity > System Trace is selected. Then click “Apply”.

Edit configuration settings

Now simply run the “Profile app startup profiling with low overhead” configuration whenever you want to start the app from scratch with the CPU profiler attached.

Note on precision

The following example scenarios use the entire app startup to estimate power consumption for the purposes of this blog. However, you can use more advanced techniques to get even more precise power readings. Some techniques to try are:

    • Isolate and measure power consumption for video playback only after a tap event on the video player
    • Use the trace markers API to mark the start and stop times of the power measurement window, and then measure power consumption only within that marked window (see the sketch after this list)
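
For instance, one way to add such markers - assuming the androidx.tracing library here; the android.os.Trace APIs work similarly - is to wrap only the code under measurement in a named trace section, which then shows up in the recorded system trace so you can select exactly that window next to the power rails:

import androidx.tracing.trace

// Wraps only the work we want to measure in a named section. The section
// appears in the recorded system trace, making it easy to select exactly
// this window in the Power Profiler. `startPlayback` is a placeholder
// for the app's real playback call.
fun playBannerVideoTraced(startPlayback: () -> Unit) {
    trace("BannerVideoPlayback") {
        startPlayback()
    }
}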

Scenario A

In this scenario, we run the app with the 4K video playing and measure power consumption for the first 30 seconds. We can optionally also run Scenario A multiple times and average out the readings. Once the system trace is shown in Android Studio, select the 0-30 second time range from the timeline selection panel and record a screenshot for comparison against Scenario B.

Power consumption in scenario A - playing a 4k video

As you can see, the average power consumed by WLAN, CPU cores, and Memory combined is about 1,352 mW (milliwatts).

Now let's compare and contrast how this power consumption changes in Scenario B.

Scenario B

In this scenario, we run the app with the lower quality video playing and measure power consumption for the first 30 seconds. As before, we can optionally run Scenario B multiple times and average out the power consumption readings. Again, once the system trace is shown in Android Studio, select the 0-30 second time range from the timeline selection panel.

Power consumption in scenario B - playing a lower quality video

The total power consumed by WLAN, CPU Little, CPU Big, CPU Mid, and Memory is about 741 mW (milliwatts).

Conclusion

All else being equal, Scenario B (with the lower quality video) consumed about 741 mW, compared to Scenario A (with the 4K video), which required about 1,352 mW.

Scenario B (lower quality video) took about 45% less power than Scenario A (4K) - (1,352 − 741) / 1,352 ≈ 45% - while the lower quality video provides little to no visual difference in the perceived quality of the app’s screen.

As a result of this A/B test for power consumption, you conclude that replacing the 4K video with a lower quality video on your app’s home screen not only reduces power consumption by about 45%, but also reduces the required network bandwidth and can potentially improve the thermal performance of the devices.

If your app’s business logic still requires the 4K video to be shown on the app’s screen, you can explore strategies like:

    • Caching the 4K video across subsequent runs of the app.
    • Loading the video only on a user tap (see the sketch after this list).
    • Loading an image initially and loading the video only after the screen has fully rendered (delayed loading).
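
As a rough sketch of the "load on a user tap" strategy - assuming the androidx.media3 ExoPlayer artifacts; the class name and fields are illustrative, not from the sample app - the player below is only built and prepared after the tap, so no video bytes are fetched at app startup:

import android.content.Context
import androidx.media3.common.MediaItem
import androidx.media3.exoplayer.ExoPlayer

// Nothing is downloaded or decoded until the user actually taps the banner.
class LazyBannerPlayer(private val context: Context, private val videoUrl: String) {

    private var player: ExoPlayer? = null

    // Call this from the banner's click listener.
    fun onBannerTapped() {
        if (player == null) {
            player = ExoPlayer.Builder(context).build().apply {
                setMediaItem(MediaItem.fromUri(videoUrl))
                prepare()
            }
        }
        player?.playWhenReady = true
    }

    // Release the player when the screen is destroyed.
    fun release() {
        player?.release()
        player = null
    }
}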

The overall power consumption numbers presented in the above A/B test scenario might seem small, but they illustrate the technique that app developers can use to effectively A/B test power consumption for their app’s features using the Power Profiler in Android Studio.

Next Steps

The new Power Profiler is available from Android Studio Hedgehog onwards. To learn more, head over to the official documentation.