Tag Archives: problem solving

A Smoother Ride: Android Emulator Stability and Performance Updates

Posted by Neville Sicard-Gregory – Senior Product Manager, Android Studio


Looking for a more stable, reliable, and performant Emulator? Download the latest version of Android Studio or ensure your Emulator is up to date in the SDK Manager.

A split screen shows Kotlin code on the left and the corresponding Android app display on the right in Android Studio. The app displays the Google Play Store, Photos, YouTube, Gmail, and Chrome icons.

We know how critical the stability, reliability, and performance of the Android Emulator is to your everyday work as an Android developer. After listening to valuable feedback about stability, reliability, and performance, the Android Studio team took a step back from large feature work on the Android Emulator for six months and started an initiative called Project Quartz. This initiative was made up of several workstreams aimed at reducing crashes, speeding up startup time, closing out bugs, and setting up better ways to detect and prevent issues in the future.

Improved stability and reliability

A key goal of Project Quartz was to reduce Emulator crashes, which can frustrate and block developers and decrease their productivity. We focused on fixing issues that cause backend and UI crashes and freezes, updated the UI framework, the hypervisor framework, and our graphics libraries, and eliminated tech debt. This included:

    • Moving to a newer version of Qt, the cross-platform framework for building the graphical user interfaces of the Android Emulator, and making it stable on all platforms (as of version 34.2.13). This was also a required change to ensure things like Google Maps and the location settings UI continued to work in the Android Emulator.
    • Updating gfxstream, the graphics rendering system used in the Android Emulator, to improve our graphics layer.
    • Adding more than 600 end-to-end tests to the existing pytests test suite.

As a result, we have seen 30% fewer crashes in the latest stable version of Android Studio, as reported by developers who have opted in to sharing crash details with us. Along with the additional end-to-end testing, this means a more stable, reliable, and higher quality experience with fewer interruptions while using the Android Emulator to test your apps.

This chart illustrates the reduction in reported crashes by stable versions of the Android Emulator (newer versions are at the top and shorter is better).

We have also enhanced our opt-in telemetry and logging to better understand and identify the root causes of crashes, and added more testing to our pre-launch release process to improve our ability to detect potential issues prior to release.

Improved release quality

We also implemented several measures to improve release quality, including increasing the number and frequency of end-to-end, automated, and integration tests on macOS, Microsoft Windows, and Linux. Now, more than 1,100 end-to-end tests are run in postsubmit, up from 500 tests in the past implementation, on all supported operating system platforms. These tests cover various scenarios, including (among other features) different Android Emulator snapshot configurations, diverse graphics card considerations, networking and Bluetooth functionality, and performance benchmarks between Android Emulator system image versions.

This comprehensive testing ensures these critical components function correctly and translates to a more reliable testing environment for developers. As a result, Android app developers can accurately assess their app's behavior in a wider range of scenarios.

Reduced open issues and bugs

It was also important for us to reduce the number of open issues and bugs logged for the Android Emulator by addressing their root cause and ensuring we cover more of the use cases you run into in production. During Project Quartz, we reduced our open issues by 43.5%, from 4,605 to 2,605. 17% of these were actively fixed during Quartz, and the remaining were closed as either obsolete, previously fixed (e.g. in an earlier version of the Android Emulator), or duplicates of other issues.

Next Steps

While these improvements are exciting, it's not the end. We will continue to build on the quality improvements from Project Quartz to further enhance the Android Emulator experience for Android app developers.

As always, your feedback has and continues to be invaluable in helping us make the Android Emulator and Android Studio more robust and effective for your development needs. Sharing your metrics and crashdumps is crucial in helping us understand what specifically causes your crashes so we can prioritize fixes.

You can opt in by going to Settings, then Appearance and Behavior, then System Settings, then Data Sharing, and selecting the checkbox marked "Send usage statistics to Google."

The Android Studio settings menu displays the Data Sharing settings page, where 'Send usage statistics to Google' option is selected.

Be sure to download the latest version of the Android Emulator alongside Android Studio to experience these improvements.

As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Together, we can create incredible Android experiences for users worldwide!

My review is stuck – what now?

Troubleshooting your code reviews

Google engineers contribute thousands of open source pull requests every month! With that experience, we've learned that a stuck review isn't the end of the world. Here are a few techniques for moving a change along when the frustration starts to wear on you.

You've been working on a pull request for weeks - it implements a feature you're excited about, and sure, you've hit a few bumps along the review path, but you've been dealing with them and making changes to your code, and surely all this hard work will pay off. But now, you're not sure what to do: your pull request hasn't moved forward in some time, you dread working on another revision just to be pushed back on again, and you're not sure your reviewers are even on your side anymore. What happened? Your feature is so cool - how are you going to land it?

Reviews can stall for a number of reasons:

  • There may be multiple solutions to the problem you're trying to solve, but none of them are ideal - or, all of them are ideal, but it’s hard to tell which one is most ideal. Or, your reviewers don't agree on which solution is right.
  • You may not be getting any response from reviewers whatsoever. Either your previously active reviewers have disappeared, or you have had trouble attracting any in the first place.
  • The goalposts have moved; you thought you were just going to implement a feature, but then your reviewer pointed out that your tests are poor style, and in fact, the tests in that whole suite are poor style, and you need to refactor them all before moving forward. Or, you implemented it with one approach, and your reviewer asked for a different one; after a few rounds with that different approach, another reviewer suggests you try… the approach you started with in the first place!
  • The reviewer is asking you to make a change that you think is a bad idea, and you aren't convinced by their arguments. Or maybe it's the second reviewer who isn't convinced, and you aren't sure which side to take in their disagreement - and which approach to implement in your PR.
  • You think the reviewer's suggestion is a good one, but when you sit down to write the code, it's toilsome, or ugly, or downright impossible to make work.
  • The reviewer is pointing out what's wrong, but not explaining what they want instead; the reviews you are receiving aren't actionable, clear, or consistent.

These things happen all the time, especially in large projects where review consensus is a bit muddy. But you can't change the project overnight - and anyway, you just want your code to land! What can you do to get past these obstacles?


Step Back and Summarize the Discussion

As the author of your change, it's likely that you're thinking about this change all the time. You spent hours working on it; you spent hours looking through reviewer comments; you were personally impacted by the lack of this change in the codebase, because you ran into a bug or needed some functionality that didn't exist. It's easy to assume that your reviewers have the same context that you do - but they probably don't!

You can get a lot of mileage out of reminding your reviewers what the goal of your change is in the first place. It can be really helpful to take a step back and summarize the change and discussion so far for your reviewers. For the most impact, make sure you're covering these key points:


What is the goal of your pull request?

Avoid being too specific, as you can limit your options: the goal isn't to add a button that does X, the goal is to make users who need X feel less pain. If your goal is to solve a problem for your day job, make sure you explain that, too - don't just say "my company needs X," say "my company's employees use this project in this way with these scaling or resource constraints, and we need X in order to do this thing."

There will also be some goals inherent to the project. Most of the time, there is an unstated goal to keep the codebase maintainable and clean. You may need to ensure your new feature is accessible to users with disabilities. Maybe your new feature needs to perform well on low-end devices, or compile on a platform that you don't use.

Not all of these constraints will apply to every change - but some of them will be highly relevant to your work. Make sure you restate the constraints that need to be considered for this pull request.


What constraints have your reviewers stated?

During the life of the pull request until now, you've probably received feedback from at least one reviewer. What does it seem like the reviewer cares about? Beyond individual pieces of feedback, what are their core concerns? Is your reviewer asking whether your feature is at the appropriate level because they're in the middle of a long campaign to organize the project's architecture? Are they nitpicking your error messages because they care about translatability?

Try to summarize the root of the concerns your reviewers are discussing. By summarizing them in one place, it becomes pretty clear when there's a conflict; you can lay out constraints which mutually exclude each other here, and ask for help determining whether any may be worth discarding - or whether there's a way to meet both constraints that you hadn't considered yet.


What alternative approaches have you considered?

Before you wrote the code in the first place, you may have considered a couple different ways of approaching the problem. You probably discarded the paths not taken for a reason, but your reviewers may have never seen that reasoning. You may also have received feedback suggesting you reimplement your change with a different approach; you may or may not think that approach is better.

Summarize all the possible approaches that you considered, and some of their pros and cons. Even better, since you've already summarized the goals and constraints that have been uncovered so far during your review, you can weigh each approach against these criteria. Perhaps it turns out the approach you discarded early on is actually the most performant one, and performance is a higher priority than you had realized at first. The community may even chime in with an approach you hadn't considered at all, but which is much more architecturally acceptable to the project.


List assumptions and remaining open questions

Through the process of writing this recap, you may come to some conclusions. State those explicitly - don't rely on your reviewers to come to the same conclusions you did. Now you have all the reasoning out in the open to defend your point.

You may also be left with some open questions - like that two reviewers are in disagreement without a clear solution, or that you aren't sure how to meet a given constraint. List those questions, too, and ask for your co-contributors' help answering them.

Here's an example of a time the author did this on the Git project - it's not in a pull request, but the same techniques are used. Sometimes these summaries are so comprehensive that it's worth checking them into the project to remind future contributors how we arrived at certain decisions.


Pushing Back

Contributors - especially ones who are new to the project, but even long-time members - often feel like the reviewer has all the power and authority. Surely the wise veteran reviewing my code is infallible; whatever she says must be true and the best approach. But it's not so! Your reviewers are human, and they want to work together with you to come to a good solution.

Pushing back isn't always the answer. For example, if you disagree with the project's established style, or with a direction that's already been discussed and agreed upon (like avoiding adding new dependencies unless completely necessary), trying to push back against the will of the entire project is likely fruitless - especially if you're trying to get an exception for your PR. Or if the root of your disagreement is "it works on my machine, so who cares," you probably need to think a little more broadly before arguing with your reviewer who says the code isn't portable to another platform or maintainable in the long term. Reviewers and project maintainers have to live with your code for a long time; you may be only sending one patch before moving on to another project. So in matters of maintainability, it's often best to defer to the reviewer.

However, there are times when it could be the right move:

  • When your reviewer's top goal is avoiding code churn, but your top goal is a tidy end state.
  • When you try out the approach your reviewer recommended, but it is difficult for a reason they may not have anticipated.
  • When the reviewer's feedback truly doesn't have any improvements over your existing approach, and it's purely a matter of taste. (Use caution here - push back by asking for more justification, in case you missed something!)

Sometimes, after hearing pushback, your reviewer may reply pointing out something that you hadn't considered which converts you over to their way of thinking. This isn't a loss - you understand their goals better, and the quality of your PR goes up as a result. But sometimes your reviewer comes over to your side instead, and agrees that their feedback isn't actually necessary.


Lean on Reviewers for Help

Code reviews can feel a little bit like homework. You send your draft in; you get feedback on it; you make a bunch of changes on your own; rinse and repeat. But they don't have to be this way - it's much better to think of reviews as a team effort, where you and the reviewers are all pitching in to produce a high-quality code change. You're not being graded!

If implementing changes based on the feedback you received is difficult - like it seems that you'll need to do a large-scale refactor to make the function interface change you were asked for - or you're running into challenges that are hard to figure out on your own - like that one test failure doesn't make any sense - you can ask for more help! Just keep in mind the usual rules of asking for help online:

  • Use a lot of detail to describe where you're having trouble - cite the exact test that's failing or error case you're seeing.
  • Explain what you've tried so far, and what you think the issue is.
  • Include your in-progress code so that your reviewers can see how far you've gotten.
  • Be polite; you're asking for help, not demanding the author change their feedback or implement it for you.

Remember that if you need to, you can also ask your reviewer to switch to a different medium. It might be easier to debug a failing test over half an hour of instant messaging instead of over a week of GitHub comments; it might be easier to reason through a difficult API change with a video conference and a whiteboard. Your reviewer may be busy, but they're still a human, and they're invested in the success of your patch, too. (And if you don't think they're invested - did you try summarizing the issue, as described above? Make sure they understand where you're coming from!) Don't be afraid to ask for a bit more bandwidth if you think it'd be valuable.


Get a Second Opinion

When you're really, truly stuck - nobody agrees and your pushback isn't working and you met with your reviewer and they're stumped too - remember that most of the time, you and your reviewer aren't the only people working on the project. It's often possible to ask for a second opinion.

It may sound transactional, but it's okay to offer review-for-review: "Hi Alice, I know you're quite busy, but I see that you have a PR out now; I can make time to review it, but it'd help me out a lot if you can review mine, too. Bob and I are stuck because…." This can be a win for both you and Alice!

If you're stuck and can't come to a decision, you can also escalate. Escalation sounds scary, but it is a healthy part of a well-run project. You can talk to the project or subsystem maintainer, share a link to the conversation, and mention that you need help coming to a decision. Or, if your disagreement is with the maintainer, you can raise it with another trusted and experienced contributor the same way.

Sometimes you may also receive comments that feel particularly harsh or insulting, especially when they're coming from a contributor you've never spoken with face-to-face. In cases like this, it can be helpful to even just get someone else to read that comment and tell you if they find it rude, too. You can ask just about anybody - your friend, your teammate at your day job - but it's best if you ask someone who has some prior context with the project or with the person who made the comment. Or, if you'd rather dig, you can examine this same reviewer's comments on other contributions - you may find that there's a language barrier, or that they're incredibly active and are pressed for time to reply to your review, or that they're just an equal-opportunity jerk (or not-so-equal-opportunity), and your review isn't the exception.

If you can't find another person to help, you can read through past closed and merged PRs to see the comments - do any of them apply to your PR, and perhaps the maintainer just hasn't had time to give you the same feedback? Remember, this might be your first PR in the project, but the maintainer might think, "Oh, I've given this feedback a thousand times."


When to Give Up

All these techniques help put more agency back in your hands as the author of a pull request, but they're not a sure thing. You might find that no matter how many of these techniques you use, you're still stuck - or that none of these techniques seem applicable. How do you know when to quit? And when you do want to, how can you do it without burning any bridges?


What did I want out of this PR, anyway?

There are many reasons to contribute to open source, and not all of us have the same motivations. Make sure you know why you sent this change. Were you personally or professionally affected by the bug or lack of feature? Did you want to learn what it's like to contribute to open source software? Do you love the community and just feel a need to give back to it?

No matter what your goal is, there's usually a way to achieve it without landing your pull request. If you need the change, it's always an option to fork - more on that in a moment. If you're wanting to try out contribution for the first time, but this project just doesn't seem to be working for it, you can learn from the experience and try out a different project instead - most projects love new contributors! If you're trying to help the community, but they don't want this change, then continuing to push it through isn't actually all that helpful.

Like when you re-summarized the review, or when you pushed back on your reviewer's opinions, think deeply about your own goals - and think whether this PR is the best way to meet them. Above all, if you're finding that you dread coming back to the project, and nothing is making that feeling go away, there's absolutely nothing wrong with moving on instead.


Taking a break

It's not necessary for you to definitively decide to stop working on your PR. You can take a break from it! Although you'll need to rebase your work when you come back to it, your idea won't go stale from a few weeks or months in timeout, and if any strong emotions (yours or the reviewers') were playing a role, they'll have calmed down by then. You may even come back to an email from your long-lost disappeared reviewer, saying they've been out on parental leave for the last two months and they're happy to get back to your change now. And once the idea has been seeded in the project, it's not unusual for community members to say weeks or months later, talking about a different problem, "Yeah, this would be a lot easier if we picked up something like <your abandoned pr>, actually. Maybe we should revive that effort."

These breaks can be helpful for everyone - but it's best practice to make sure the project knows you're stepping away. Leave a comment saying that you'll be quiet for a while; this way, a reviewer who may not know you feel stuck isn't left hanging waiting for you to resume your work.


Forking

For smaller projects, you often have the option of forking - a huge benefit of using open source software! You can decide whether or not to share your forked change with the world (in accordance with the license); "forking" can even mean that you just compile the project from source and use it with your own modifications on your local machine.

And if you were working on this PR for your day job, you can maintain a soft fork of the project with those changes applied. Google even has a tool for doing this efficiently with monorepos called Copybara.

Bonus: if you use your change yourself (or give it to your users) for a period of time, and come back to the PR after a bit of a break, you now have information about your code's proven stability in the wild; this can strengthen the case for your change when you resume work on it.


Giving up responsibly

If you do decide you're done with the effort, it's helpful to the project for you to say so explicitly. That way, if someone wants to take over your code change and finish pushing it through, they don't need to worry about stepping on your toes in the process. If you're comfortable doing so, it can be helpful to say why:

"After 15 iterations, I've decided to instead invest my spare time in something that is moving more quickly. If anybody wants to pick up this series and run with it, you have my blessing."
"It seems like Alice and I have an intractable disagreement. Since we're not able to resolve it, I'm going to let this change drop for now."
"My obligations at home are ramping up for the summer and I won't have time to continue working on this until at least September. If anybody wants to take over, feel free; I'll be available to provide review, but please don't block submission on my behalf, as I'll be quite busy."

Avoid the temptation to flounce, though. The above samples explain your circumstances and reasoning in a way that's easy to accept; it's less productive to say something like, "It's obvious that this project has no respect for its developers. Further work on this effort is a waste of my time!" Think about how you would react to reading such an email. You'd probably dismiss it as someone who's easily frustrated, maybe feel a little guilty, but ultimately, it's unlikely that you'd make any changes.


You Are the Main Driver of Your PR

In the end, it's important to remember that the author of a pull request is usually the one most invested in its success. When you run into roadblocks, remember these techniques to overcome them, and know that you have the power to help your code keep moving forward.

By Emily Shaffer – Staff Software Engineer

Emily Shaffer is a Staff Software Engineer at Google. They are the tech lead of Google's development efforts on the first-party Git client, and help advise Google on participation in other open source projects in the version control space, including jj, JGit, and Copybara. They formerly worked as a subsystem maintainer within the OpenBMC project. You can recognize Emily on most social media under the handle nasamuffin.

Get ready for Google I/O: Program lineup revealed

Posted by Timothy Jordan – Director, Developer Relations and Open Source

Developers, get ready! Google I/O is just around the corner, kicking off live from Mountain View with the Google keynote on Tuesday, May 14 at 10 am PT, followed by the Developer keynote at 1:30 pm PT.

But the learning doesn’t stop there. Mark your calendars for May 16 at 8 am PT when we’ll be releasing over 150 technical deep dives, demos, codelabs, and more on-demand. If you register online, you can start building your 'My I/O' agenda today.

Here's a sneak peek at some of the exciting highlights from the I/O program preview:

Unlocking the power of AI: The Gemini era unlocks a new frontier for developers. We'll showcase the newest features in the Gemini API, Google AI Studio, and Gemma. Discover cutting-edge pre-trained models from Kaggle, and delve into Google's open-source libraries like Keras and JAX.

Android: A developer's playground: Get the latest updates on everything Android! We'll cover groundbreaking advancements in generative AI, the highly anticipated Android 15, innovative form factors, and the latest tools and libraries in the Jetpack and Compose ecosystem. Plus, discover how to optimize performance and streamline your development workflow.

Building beautiful and functional web experiences: We’ll cover Baseline updates, a revolutionary tool that empowers developers with a clear understanding of web features and API interoperability. With Baseline, you'll have access to real-time information on popular developer resource sites like MDN, Can I Use, and web.dev.

The future of ChromeOS: Get a glimpse into the exciting future of ChromeOS. We'll discuss the developer-centric investments we're making in distribution, app capabilities, and operating system integrations. Discover how our partners are shaping the future of Chromebooks and delivering world-class user experiences.

This is just a taste of what's in store at Google I/O. Stay tuned for more updates, and get ready to be a part of the future.

Don't forget to mark your calendars and register for Google I/O today!


How to effectively A/B test power consumption for your Android app’s features

Posted by Mayank Jain - Product Manager, and Yasser Dbeis - Software Engineer; Android Studio

Android developers have been telling us they're looking for tools to help optimize power consumption for different devices on Android.

The new Power Profiler in Android Studio helps Android developers by showing the power consumed on the device as the app is being used. Understanding power consumption across Android devices can help Android developers identify and fix power consumption issues in their apps. They can run A/B tests to compare the power consumption of different algorithms, features, or even different versions of their app.

The new Power Profiler in Android Studio

Apps that are optimized for lower power consumption lead to improved battery life and thermal performance of the device, which means an improved user experience on Android.

This power consumption data is made available through the On Device Power Monitor (ODPM) on Pixel 6+ devices, segmented by each sub-system, called "Power Rails". See Profileable power rails for a list of supported sub-systems.

The Power Profiler can help app developers detect problems in several areas:

    • Detecting unoptimized code that is using more power than necessary.
    • Finding background tasks that are causing unnecessary CPU usage.
    • Identifying wakelocks that are keeping the device awake when they are not needed.

Once a power consumption issue has been identified, the Power Profiler can be used when testing different hypotheses to understand why the app could be consuming excessive power. For example, if the issue is caused by background tasks, the developer can try to stop the tasks from running unnecessarily or for longer periods. And if the issue is caused by wakelocks, the developer can try to release the wakelocks when the resource is not in use or use them more judiciously. Then compare the power consumption before/after the change using the Power Profiler.

In this blog post, we showcase a technique which uses A/B testing to understand how your app’s power consumption characteristics might change with different versions of the same feature - and how you can effectively measure them.

A real-life example of how the Power Profiler can be used to improve the battery life of an app.

Let’s assume you have an app through which users can purchase their favorite movies.

Sample app to demonstrate A/B testing for measuring power consumption
Video (c) copyright Blender Foundation | www.bigbuckbunny.org

As your app becomes popular and is used by more users, you realize that the high quality 4K video takes a long time to load every time the app is started. Because of its large size, you want to understand its impact on power consumption on the device.

Originally, this video was included in 4K quality with the best of intentions, to showcase the best possible movie highlights to your customers.

This makes you think…

    • Do you really need a 4K video banner on the home screen?
    • Does it make sense to load a 4K video over the network every time your app is run?
    • How will the power consumption characteristics of your app change if you replace the 4K video with something of lower quality (while still preserving the vivid look & feel of the video)?

This is a perfect scenario to perform an A/B test for power consumption.

With an A/B test, you can test two slightly different variations of the video banner feature and choose the one with the better power consumption characteristics.

Scenario A : Run the app with 4K video banner on screen & measure power consumption

Scenario B : Run the app with lower resolution video banner on screen & measure power consumption

A/B Test setup

Let's take a moment and set up our Android Studio profiler to run this A/B test. We need to start the app and attach the CPU profiler to it and trigger a system trace (where the Power Profiler will be shown).

Step 1

Create a custom “Run configuration” by clicking the 3 dot menu > Edit

Custom run configuration

Step 2

Then select the "Profiling" tab and ensure that "Start this recording on startup" is checked and CPU Activity > System Trace is selected. Then click "Apply".

Edit configuration settings
Edit configuration settings

Now simply run the "Profile app startup profiling with low overhead" configuration whenever you want to run the app from the start and attach the CPU profiler to it.

Note on precision

The following example scenarios use the entire app startup for estimating power consumption for the purposes of this blog. However, you can use more advanced techniques to get even higher precision in the power readings. Some techniques to try are:

    • Isolate and measure power consumption for video playback only after a tap event on the video player
    • Use the trace markers API to mark the start and stop of the power measurement window on the timeline - and then only measure power consumption within that marked window

Scenario A

In this scenario, we run the app with the 4K video playing and measure power consumption for the first 30 seconds. We can optionally also run scenario A multiple times and average out the readings. Once the System Trace is shown in Android Studio, select the 0-30 second time range from the timeline selection panel and record a screenshot for comparing against scenario B.

Power consumption in scenario A - playing a 4k video

As you can see, the average power consumed by WLAN, CPU cores, and Memory combined is about 1,352 mW (milliwatts).

Now let's compare and contrast how this power consumption changes in Scenario B

Scenario B

In this scenario, we run the app with the lower quality video playing and measure power consumption for the first 30 seconds. As before, we can optionally run scenario B multiple times and average out the power consumption readings. Again, once the System Trace is shown in Android Studio, select the 0-30 second time range from the timeline selection panel.

Power consumption in scenario B - playing a lower quality video

The total power consumed by WLAN, CPU Little, CPU Big, CPU Mid, and Memory is about 741 mW (milliwatts).

Conclusion

All else being equal, Scenario B (with lower quality video) consumed 741 mW power as compared to Scenario A (with 4K video) which required 1,352 mW power.

Scenario B (lower quality video) used about 45% less power than Scenario A (4K), since (1,352 - 741) / 1,352 ≈ 45%, while the lower quality video provides little to no visual difference in the perceived quality of the app's screen.

As a result of this A/B test for power consumption, you conclude that replacing the 4K video with a lower quality video on your app's home screen not only reduces power consumption by about 45%, but also reduces the required network bandwidth and can potentially improve the thermal performance of the device.

If your app’s business logic still requires the 4K video to be shown on the app’s screen, you can explore strategies like:

    • Caching the 4K video across subsequent runs of the app.
    • Loading video on a user tap.
    • Loading an image initially and only loading the video after the screen has fully rendered (delayed loading).

The overall power consumption numbers presented in the above A/B test scenario might seem small, but they demonstrate the techniques that app developers can use to effectively A/B test power consumption for their app's features using the Power Profiler in Android Studio.

Next Steps

The new Power Profiler is available in Android Studio Hedgehog onwards. To know more, please head over to the official documentation.

Build with Google AI video series, Season 2: more AI patterns

Posted by Joe Fernandez – Google AI Developer Relations

We are off to another exciting year in Artificial Intelligence (AI) and it's time to build more applications with Google AI technology! The Build with Google AI video series is for developers looking to build helpful and practical applications with AI. We focus on useful code projects you can implement and extend in an afternoon to bring the power of artificial intelligence into your workflow or organization. Our first season received over 100,000 views in six weeks! We are glad to see that so many of you liked the series, and we are excited to bring you even more Google AI application projects.

Today, we are launching Season 2 of the Build with Google AI series, featuring projects built with Google's Gemini API technology. The launch of Gemini and the Gemini API has brought developers even more advanced AI capabilities, including advanced reasoning, content generation, information synthesis, and image interpretation. Our goal with this season is to help you put those capabilities to work for you and your organizations.


AI app patterns

The Build with Google AI series features practical application code projects created for you to use and customize. However, we know that you are the best judge of what you or your organization needs to solve day-to-day problems and get work done. That's why each application we feature in this series is also meant to be used as an AI pattern. You can extend the applications immediately to solve problems and provide value for your business, and these applications show you a general coding pattern for getting value out of AI technology.

For this second season of this series, we show how you can leverage Google's Gemini AI model capabilities for applications. Here's what's coming up:

  • AI Slides Reviewer with Google Workspace (3/20) - Image interpretation is one of the Gemini model's biggest new features. We show you how to make practical use of it with a presentation review app for Google Slides that you can customize with your organization's guidelines and recommendations. 
  • AI Flutter Code Agent with Gemini API (3/27) - Code generation was the most popular episode from last season, so we are digging deeper into this topic. Build a code generation extension to write Flutter code and explore user interface designs and looks with just a few words of description.
  • AI Data Agent with Google Cloud (4/3) - Why write code to extract data when you can just ask for it? Build a web application that uses Gemini API's Function Calling feature to translate questions into code calls and data into plain language answers.

Season 1 upgraded to Gemini API: We've upgraded Season 1 tutorials and code projects to use the Gemini API so you can take advantage of the latest in generative AI technology from Google. Check them out!


Learn from the developers

Just like last season, we'll go back to the studio to talk with coders who built these projects so they can share what they learned along the way. How do you make the Gemini model review an entire presentation? What's the most effective way to generate code with AI? How do you get a database to answer questions with the Gemini API? Get insights into coding with AI to jump start your own development project.


New home for AI developer content

Developers interested in Google's AI offerings now have a new home at ai.google.dev. There you'll find a wealth of resources for building with AI from Google, including the Build with Google AI tutorials. Stay tuned for much more content through the rest of the year.

We are excited to bring you the second season of Build with Google AI - check out Season 2 right now! Use the video comments to let us know what you think and tell us what you'd like to see in future episodes.

Keep learning! Keep building!

YouTube Ads Creative Analysis

Posted by Brian Craft, Satish Shreenivasa, Huikun Zhang, Manisha Arora and Paul Cubre – gTech Data Science Team


Introduction


Why analyze YouTube ads?

YouTube has billions of monthly logged-in users and every day people watch billions of hours of video and generate billions of views. Businesses can connect with YouTube users using YouTube ads, which are promotional videos that appear on YouTube's website and app, with a variety of video ad formats and goals.

A sample YouTube in-stream skippable video ad

The Challenge

An effective video ad focuses on the ABCDs.

  • Attention: Capturing the viewer's attention till the end.
  • Branding: Helping them hear or visualize the brand.
  • Connection: Making them feel something about the brand.
  • Direction: Encouraging them to take action.

But each YouTube ad has a varying number of components, for instance objects, background music, or a logo. Each of these components affects the view-through rate (referred to as VTR for the remainder of the post) of the video ad. Therefore, analyzing video ads through the lens of the components in the ad helps businesses understand what about the ad improves VTR. The insights from these analyses can be used to inform the creation of new creatives and to optimize existing creatives to improve VTR.


The Proposal

We propose a machine learning based approach for analyzing a company’s YouTube ads to assess which components affect VTR, for the purpose of optimizing a video ad’s performance. We illustrate how to:

  • Use Google Cloud Video Intelligence API to extract the components of each video ad, using the underlying video files.
  • Transform that extracted data to engineered features that map to actionable business questions.
  • Use a machine learning model to isolate the effect on VTR of each engineered feature.
  • Interpret and act on those insights to improve video ad performance, for instance by altering existing creatives or creating new creatives to be used in an A/B test.

Approach


The Process

The proposed analysis has 5 steps, discussed below.

1. Define Business Questions
Align on a list of business questions that are actionable, for instance "does having a logo in the opening shot affect VTR?" We suggest taking feasibility into account ahead of time; for instance, if a product disclaimer is required for legal reasons, there is no reason to assess the impact the disclaimer has on VTR.

2. Raw Component Extraction
Use Google Cloud technologies, such as the Google Cloud Video Intelligence API, and the underlying video files to extract raw components from each video ad - for instance (but not limited to) objects appearing in the video at a particular timestamp, the presence of text and its location on the screen, or the presence of specific sounds.

3. Feature Engineering
Using the raw components extracted in step 2, engineer features that align to the business questions defined in step 1. For example, if the business question is "does having a logo in the opening shot affect VTR?", create a feature that labels each video as either 1 (has a logo in the opening shot) or 0 (does not). Repeat this for each feature.

4. Modeling
Create an ML model using the engineered features from step 3, using VTR as the target in the model.

5. Interpretation
Extract statistically significant features from the ML model and interpret their effect on VTR. For example, “there is an xx% observed uplift in VTR when there is a logo in the opening shot.”


Feature Engineering


Data Extraction

Consider 2 different YouTube video ads for a web browser, each highlighting a different product feature. Ad A has text that says "Built In Virus Protection", while Ad B has text that says "Automatic Password Saving".

The raw text can be extracted from each video ad, allowing for the creation of tabular datasets such as the one below. For brevity and simplicity, the example carried forward will deal with text features only and forgo the timestamp dimension.

Ad   | Detected Raw Text
Ad A | Built In Virus Protection
Ad B | Automatic Password Saving
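To make the extraction step concrete, here is a minimal sketch of how the detected text could be collected into such a table with the Video Intelligence API's Python client. The bucket URIs and ad names are placeholders, and you would request additional features (objects, logos, audio) depending on your business questions.

```python
# pip install google-cloud-videointelligence pandas
# Sketch only: the bucket URIs and ad names below are placeholders.
import pandas as pd
from google.cloud import videointelligence

ADS = {
    "Ad A": "gs://my-ads-bucket/ad_a.mp4",
    "Ad B": "gs://my-ads-bucket/ad_b.mp4",
}

client = videointelligence.VideoIntelligenceServiceClient()
rows = []
for ad_name, uri in ADS.items():
    # Request text detection for the ad; other features (OBJECT_TRACKING,
    # LOGO_RECOGNITION, etc.) can be added to the same request.
    operation = client.annotate_video(
        request={
            "input_uri": uri,
            "features": [videointelligence.Feature.TEXT_DETECTION],
        }
    )
    result = operation.result(timeout=600)
    annotations = result.annotation_results[0]
    detected_text = " ".join(t.text for t in annotations.text_annotations)
    rows.append({"Ad": ad_name, "Detected Raw Text": detected_text})

raw_text_df = pd.DataFrame(rows)
print(raw_text_df)
```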


Preprocessing

After extracting the raw components in each ad, preprocessing may need to be applied, such as lowercasing the text and removing punctuation.

Ad   | Detected Raw Text         | Processed Text
Ad A | Built In Virus Protection | built in virus protection
Ad B | Automatic Password Saving | automatic password saving
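A small sketch of that preprocessing step; lowercasing and stripping punctuation are shown here, but the exact cleanup will depend on the raw components you extracted.

```python
import string
import pandas as pd

def preprocess(text: str) -> str:
    # Lowercase and strip punctuation so downstream matching is not
    # case- or punctuation-sensitive.
    lowered = text.lower()
    return lowered.translate(str.maketrans("", "", string.punctuation)).strip()

raw_text_df = pd.DataFrame({
    "Ad": ["Ad A", "Ad B"],
    "Detected Raw Text": ["Built In Virus Protection", "Automatic Password Saving"],
})
raw_text_df["Processed Text"] = raw_text_df["Detected Raw Text"].apply(preprocess)
print(raw_text_df)
```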


Manual Feature Engineering

Consider a scenario where the goal is to answer the business question, “does having a textual reference to a product feature affect VTR?”

This feature could be built manually by exploring all the text in all the videos in the sample and creating a list of tokens or phrases that indicate a textual reference to a product feature. However, this approach can be time consuming and limits scaling.

Pseudo code for manual feature engineering
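A minimal Python version of this manual approach might look like the following; the phrase list is purely illustrative and would need to be curated by reviewing the text detected across your sample of ads.

```python
# Illustrative phrase list; in practice this would be curated by reviewing
# all of the processed text across the sample of ads.
FEATURE_PHRASES = ["virus protection", "password saving", "ad blocker"]

def has_feature_reference(processed_text: str) -> int:
    # 1 if any curated phrase appears in the ad's processed text, else 0.
    return int(any(phrase in processed_text for phrase in FEATURE_PHRASES))

print(has_feature_reference("built in virus protection"))      # 1
print(has_feature_reference("fast and lightweight browsing"))  # 0
```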

AI Based Feature Engineering

Instead of manual feature engineering as described above, the text detected in each video ad creative can be passed to an LLM along with a prompt that performs the feature engineering automatically.

For example, if the goal is to explore the value of highlighting a product feature in a video ad, ask an LLM whether the text "built in virus protection" is a feature callout, followed by asking the LLM whether the text "automatic password saving" is a feature callout.

The answers can be extracted and transformed to a 0 or 1, to later be passed to a machine learning model.

Ad   | Raw Text                  | Processed Text            | Has Textual Reference to Feature
Ad A | Built In Virus Protection | built in virus protection | Yes
Ad B | Automatic Password Saving | automatic password saving | Yes
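As a sketch of how that prompt-and-extract loop could be wired up, the snippet below uses the Gemini API's Python client as one possible LLM backend; the model name, prompt wording, and yes/no parsing are assumptions rather than the exact setup used in this analysis.

```python
# pip install google-generativeai
# Sketch only: the post does not name a specific LLM; the Gemini API is used
# here as one option, and the model name and prompt wording are assumptions.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

def is_feature_callout(processed_text: str) -> int:
    prompt = (
        "Answer only 'yes' or 'no'. Is the following ad text a callout "
        f"of a product feature? Text: '{processed_text}'"
    )
    answer = model.generate_content(prompt).text.strip().lower()
    # Transform the LLM's answer into a 0/1 value for the downstream model.
    return int(answer.startswith("yes"))

print(is_feature_callout("built in virus protection"))  # expected: 1
```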



Modeling


Training Data

The result of the feature engineering step is a dataframe with columns that align to the initial business questions, which can be joined to a dataframe that has the VTR for each video ad in the sample.

Ad   | Has Textual Reference to Feature | VTR*
Ad A | Yes                              | 10%
Ad B | Yes                              | 50%


*Values are random and not to be interpreted in any way.

Modeling is done using fixed effects, bootstrapping, and ElasticNet. More information can be found in the post Introducing Discovery Ad Performance Analysis, written by Manisha Arora and Nithya Mahadevan.
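For illustration, here is a rough sketch of the bootstrapped ElasticNet step on toy data; the fixed-effects encoding is omitted for brevity, the feature names and values are made up, and a 95% bootstrap interval that excludes zero stands in for the significance test.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import ElasticNet

# Toy training data: engineered 0/1 features joined with each ad's VTR.
# The values are made up and not to be interpreted in any way.
df = pd.DataFrame({
    "has_feature_reference": [1, 1, 0, 0, 1, 0],
    "has_logo_in_opening_shot": [1, 0, 1, 0, 1, 1],
    "vtr": [0.10, 0.50, 0.20, 0.15, 0.45, 0.25],
})
features = ["has_feature_reference", "has_logo_in_opening_shot"]

# Bootstrap: refit an ElasticNet on resampled data and collect coefficient draws.
rng = np.random.default_rng(0)
coef_draws = []
for _ in range(1000):
    seed = int(rng.integers(1_000_000_000))
    sample = df.sample(frac=1.0, replace=True, random_state=seed)
    model = ElasticNet(alpha=0.01, l1_ratio=0.5)
    model.fit(sample[features], sample["vtr"])
    coef_draws.append(model.coef_)
coef_draws = np.array(coef_draws)

# Summarize each feature's effect: mean coefficient, spread, and whether the
# 95% bootstrap interval excludes zero (a simple stand-in for significance).
for i, name in enumerate(features):
    mean, std = coef_draws[:, i].mean(), coef_draws[:, i].std()
    lo, hi = np.percentile(coef_draws[:, i], [2.5, 97.5])
    significant = lo > 0 or hi < 0
    print(f"{name}: coefficient={mean:.4f}, std dev={std:.6f}, significant={significant}")
```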

Interpretation

The model output can be used to extract significant features, coefficient values, and standard deviation.

Coefficient Value (+/- X%)
Represents the absolute percentage uplift in VTR. A positive value indicates a positive impact on VTR and a negative value indicates a negative impact.

Significant Value (True/False)
Represents whether the feature has a statistically significant impact on VTR.

Feature                          | Coefficient* | Standard Deviation* | Significant?*
Has Textual Reference to Feature | 0.0222       | 0.000033            | True


*Values are random and not to be interpreted in any way.

In the above hypothetical example, the feature "Has Textual Reference to Feature" has a statistically significant, positive impact on VTR. This can be interpreted as "there is an observed 2.22% absolute uplift in VTR when an ad has a textual reference to a product feature."

Challenges

Challenges of the above approach are:

  • Interactions among the individual features input into the model are not considered. For example, if "has logo" and "has logo in the lower left" are individual features in the model, their interaction will not be assessed. However, a third feature can be engineered that combines the two, such as "has logo + has logo in the lower left".
  • Inferences are based on historical data and not necessarily representative of future ad creative performance. There is no guarantee that insights will improve VTR.
  • Dimensionality can be a concern given the number of components in a video ad.

Activation Strategies


Ads Creative Studio

Ads Creative Studio is an effective tool for businesses to create multiple versions of a video by quickly combining text, images, video clips or audio. Use this tool to create new videos quickly by adding/removing features in accordance with model output.

Sample video creation features in Ads Creative Studio

Video Experiments

Design a new creative, varying a component based on the insights from the analysis, and run an AB test. For example, change the size of the logo and set up an experiment using Video Experiments.


Summary


Identifying which components of a YouTube Ad affect VTR is difficult, due to the number of components contained in the ad, but there is an incentive for advertisers to optimize their creatives to improve VTR. Google Cloud technologies, GenAI models and ML can be used to answer creative centric business questions in a scalable and actionable way. The resulting insights can be used to optimize YouTube ads and achieve business outcomes.


Acknowledgements

We would like to thank our collaborators at Google, specifically Luyang Yu, Vijai Kasthuri Rangan, Ahmad Emad, Chuyi Wang, Kun Chang, Mike Anderson, Yan Sun, Nithya Mahadevan, Tommy Mulc, David Letts, Tony Coconate, Akash Roy Choudhury, Alex Pronin, Toby Yang, Felix Abreu and Anthony Lui.