Tag Archives: case study

Reddit improved app startup speed by over 50% using Baseline Profiles and R8

Posted by Ben Weiss – Developer Relations Engineer, and Lauren Darcey – Senior Engineering Manager, Reddit

Reddit is one of the world’s largest internet forums, bringing together countless communities looking for entertainment, answers to everyday questions, and so much more.

Recently, the team optimized its Android app to reduce startup time and improve rendering performance using Baseline Profiles. But the team didn’t stop there. Reddit app developers also enabled Android’s R8 compiler in full mode to maximize bytecode optimization and used Jetpack Compose to rewrite legacy UI, improving both the user and developer experience.

Maximizing optimization using Baseline Profiles and R8 full mode

The Reddit Android app has undergone countless performance upgrades over the years. Reddit developers have long since cleared the list of quick and easy tasks for optimization, but the team still wants to improve the app, bringing its performance to the next level and ensuring it runs well on every Android device.

“Reddit is looking for any strategic improvement to its app performance so we can make the app experience better for new and existing users,” said Rob McWhinnie, a staff engineer at Reddit. “Baseline Profiles fit this use case well since they are based on critical user journeys.”

Reddit’s platform engineering team used screen-specific performance metrics and observability to help its feature teams improve key metrics like time to interactive and scroll performance. Baseline Profiles were a natural fit to help improve these metrics and the user experience behind them, so the team integrated them to make tracking and optimizing easier, using insights from geodata and device classes.

The team has built Baseline Profiles for five critical user journeys so far: scrolling the home feed, logging in, launching the full-screen video player, navigating between subreddits and scrolling their feeds, and using the chat feature.

Simplifying Baseline Profile management in its continuous integration processes enabled Reddit to eliminate manual maintenance and streamline optimization. Now, Baseline Profiles are automatically regenerated for each release.
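Reddit’s journey definitions aren’t public, but a Baseline Profile generator for a journey like home-feed scrolling is typically a small Macrobenchmark test. Below is a minimal sketch using Jetpack Macrobenchmark’s BaselineProfileRule; the package name and the scrollable target are hypothetical stand-ins.

    import androidx.benchmark.macro.junit4.BaselineProfileRule
    import androidx.test.uiautomator.By
    import androidx.test.uiautomator.Direction
    import org.junit.Rule
    import org.junit.Test

    class HomeFeedBaselineProfileGenerator {
        @get:Rule
        val baselineProfileRule = BaselineProfileRule()

        @Test
        fun generate() = baselineProfileRule.collect(
            packageName = "com.example.reddit" // hypothetical package name
        ) {
            // Launch the app and wait for the first frame
            pressHome()
            startActivityAndWait()
            // Scroll the feed so its hot paths end up in the profile
            device.findObject(By.scrollable(true))?.fling(Direction.DOWN)
            device.waitForIdle()
        }
    }

Running a generator like this on a device produces a baseline-prof.txt that gets packaged with the app, which is the artifact a CI job can regenerate on every release.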

Enabling Android’s R8 optimization compiler in full mode was another area Reddit engineers worked on. The team had already used R8 in compatibility mode, but some of Reddit’s legacy code would’ve made implementing R8’s more aggressive features difficult. The team worked through the app’s existing technical debt first, making it easier to integrate R8's full mode capabilities and maximize Android app optimization.
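For reference, R8 runs on minified release builds, and full mode is the default from Android Gradle Plugin 8.0 onward; on older AGP versions it is an opt-in flag. A minimal sketch of the relevant configuration:

    // build.gradle.kts (app module)
    android {
        buildTypes {
            release {
                isMinifyEnabled = true   // run R8 code shrinking and optimization
                isShrinkResources = true // drop resources left unused after shrinking
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro" // keep rules for reflection-heavy legacy code
                )
            }
        }
    }

    // gradle.properties, only needed on AGP versions below 8.0:
    // android.enableR8.fullMode=true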

“It’s now trivial to work with a team to instrument Baseline Profiles for their critical user journeys. We turn them around in a couple of hours and see results in production a week later.” — Catherine Chi, Senior Engineer at Reddit

Improvements with Baseline Profiles and R8 full mode

Reddit's Baseline Profiles and R8 full mode optimizations led to multiple performance improvements across the app, with early benchmarks of the first Baseline Profile for feeds showing a 51% improvement in median startup time. Responses from Redditors confirmed the large startup improvements, while optimizations for less frequent journeys, like logging in, drew fewer user reports.

Baseline Profiles for the home feed delivered a 36% reduction in the 95th percentile of frozen frames. Baseline Profiles for the community feed also delivered strong screen load and scroll performance improvements: at the 90th percentile, screen time to interactive improved by 12% and time to first draw decreased by 22%. Reddit’s scrolling performance also saw a 12% reduction in P90 slow frames.

The upgrade to R8 full mode led to an increase in Google Play average ratings. The proportion of global positive ratings (fours and fives) increased by four percent, with a notable decrease in negative reports. R8 full mode also reduced total application-not-responding errors by almost 30%.

Overall, the app saw cold start improvements of 20%, scroll performance improvements of 15%, and widespread enhancements on lower-end devices and in emerging markets. Google Play vitals showed improvements in slow cold starts, a 10% reduction in excessive frozen frames, and a 30% reduction in excessive slow frames. Nearly 75% of screens refactored using Jetpack Compose experienced performance gains.

“When you find a feature that users love and engage with, taking the time to refine and optimize it can be the difference between a good and a great experience for your users.” — Lauren Darcey, Senior Engineering Manager at Reddit

Further optimizations using Jetpack Compose

Reddit adopted Jetpack Compose years ago and has since rebuilt much of its UI with the toolkit, benefitting both the app and its design system. According to the Reddit team, Google’s ongoing support for Compose’s stability and performance made it a strong fit as Reddit scaled its app, allowing for more efficient feature development and better performance.

One major example is Reddit’s feed rewrite using Compose, which resulted in more maintainable code and an improved developer experience. Compose enabled teams to focus on future work instead of being bogged down by legacy code, allowing them to fix bugs quickly and improve overall app stability.

“The R8 and Compose upgrades were important to deploy in relative isolation and stabilize,” said Drew Heavner, a staff engineer at Reddit. “We feel like we got great outcomes from this work for all teams adopting our modern tech stack and Compose.”

After upgrading to the September 2024 release of Compose, the latest iteration, Reddit saw significant performance gains across the board. Cold start times improved by 13%, excessive slow frames decreased by 25%, and frozen frames dropped by 10%. Low- and mid-tier devices saw even greater improvements where app start times improved by up to 40%, especially in markets with lower-performing devices.

Screens using Reddit’s modern Compose-powered design stack showed substantial improvements in both slow and frozen frame rates. For example, the home feed saw a 23% reduction in frozen frames, and scrolling performance visibly improved according to internal reviews. These updates were well received by users and were reflected in a 17% increase in the app’s Google Play average rating.

“Compose continues to deliver great new features for a more responsive user experience. It also provides stability and performance improvements we get to take advantage of.” — Eric Kuck, Principal Engineer at Reddit

Up-leveling UX through optimization

Adding value to an app isn’t just about introducing new features—it's about refining and optimizing the ones users already love. Investing in performance improvements made Reddit’s key features faster and more reliable, enhancing the overall user experience. These optimizations not only improved app startup and runtime performance but also simplified development workflows, increasing both developer satisfaction and app stability.

The focus on high-traffic features, such as feeds, has demonstrated the power of performance tuning, with substantial gains in user engagement and satisfaction. As the app has become more efficient, both users and developers have benefitted from a cleaner codebase and faster performance.

Looking ahead, Reddit plans to extend the usage of Baseline Profiles to other critical user journeys, including Reddit’s post and comment experiences, ensuring even more users benefit from these ongoing performance improvements.

Reddit’s platform engineers also want to continue collaborating with feature teams to integrate performance improvements across the app. These efforts will ensure that as the app evolves, it remains a smooth, fast, and engaging experience for all Redditors.

“Adding new features isn’t the only way to add value to an experience for users,” said Lauren Darcey, a senior engineering manager at Reddit. “When you find a feature that users love and engage with, taking the time to refine and optimize it can be the difference between a good and a great experience for your users.”

Get started

Improve your app performance using Baseline Profiles, R8 full mode, and Jetpack Compose.

X improved login success rate by 2x after adopting passkeys

Posted by Niharika Arora – Developer Relations Engineer

From breaking news and entertainment to sports and politics, X is a social media app that aims to help nearly 500 million users worldwide get the full story with all the live commentary. Recently, X developers revamped the Android app’s login process so users never miss out on the conversations they’re interested in. Using the Credential Manager API, the team implemented new passkey authentication for quicker, easier, and safer access to the app.

Simplifying login with passkeys

Today, traditional password-based authentication systems are insecure and prone to cyber attacks. Many users choose easy-to-guess passwords, which bad actors can crack using brute force attacks. Users also reuse the same passwords across multiple accounts, so if one password is compromised, every account that shares it is too.

Passkeys address the growing concern of account security from weak passwords and phishing attacks by eliminating the need for passwords entirely. The feature provides a safer, more seamless sign-in experience, freeing users from having to remember their usernames or passwords.

“Passkeys are a simpler, more secure way to log in, replacing passwords with pins or biometric data like fingerprints or facial recognition,” said Kylie McRoberts, head of safety at X. “We explored using passkeys to make signing in easier and safer for users, helping protect their accounts without the hassle of remembering passwords.”

Since implementing passkeys, the X team has seen a substantial reduction in login times and metrics showing improved login flow. With passkeys, the app’s successful login rate has doubled compared to when it only relied on passwords. The team has also seen a decline in password reset requests from users who have enabled passkeys.

According to X developers, adopting passkeys even came with benefits beyond enhanced security and a simplified login experience, like lower costs and improved UX.

“Passkeys allowed us to cut down on expenses related to SMS-based two-factor authentication because they offer strong, inherent authentication,” said Kylie. “And with the ease of login, users are more likely to engage with our platform since there’s less friction to remember or reset passwords.”

Passkeys rely on public-key cryptography: each passkey is a key pair, and the website or app sees and stores only the public key, never the private key, which is encrypted and stored by the user’s credential provider. Because each key pair is unique and tied to a specific website or app, passkeys cannot be phished, further enhancing security.

“We achieved an 80% code reduction in the authentication module, a 90% resolution of legacy edge case bugs, and an 85% decrease in GIS, One Tap, and Smart Lock code using passkeys.” — Saurabh Arora, Staff Engineer at X

Seamless integration using the Credential Manager API

To integrate passkeys, X developers used Android’s Credential Manager API, which made the process “extremely smooth,” according to Kylie. The API unifies Smart Lock, One Tap, and Google Sign-In into a single, streamlined workflow, which let developers remove hundreds of lines of code, speeding up implementation and reducing maintenance overhead.

In the end, the migration to Credential Manager took X developers only two weeks to complete, followed by an additional two weeks to fully support passkeys. This was a “very fast migration” and the team “didn’t expect it to be that simple and straightforward,” said Saurabh Arora, a staff engineer at X. Thanks to Credential Manager’s simple, coroutine-powered API, the complexities of handling multiple authentication options were essentially removed, reducing code, the likelihood of bugs, and overall developer efforts.

X developers saw a significant improvement in developer velocity by integrating the Credential Manager API. With their shift to passkey adoption through the Credential Manager API, they achieved:

    • 80% code reduction in the authentication module
    • 90% resolution of legacy edge case bugs
    • 85% decrease in GIS, One Tap, and Smart Lock handling code

Using the Credential Manager API's top-level methods, like createCredential and getCredential, simplified integration by removing custom logic complexities surrounding individual protocols. This uniform approach also meant X developers could use a single, consistent interface to handle various authentication types, such as passkeys, passwords, and federated sign-ins like Sign in with Google.

“With Credential Manager’s simple API methods, we could retrieve passkeys, passwords, and federated tokens with a single call, cutting down on branching logic and making response handling cleaner,” said Saurabh. “Using different API methods, like createCredential() and getCredential(), also simplified credential storage, letting us handle passwords and passkeys in one place.”
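To illustrate the single-call pattern Saurabh describes, here is a minimal sign-in sketch against the Jetpack Credential Manager API. The requestJson is assumed to come from the server’s WebAuthn challenge, and the two backend hooks at the bottom are hypothetical.

    import android.app.Activity
    import androidx.credentials.CredentialManager
    import androidx.credentials.GetCredentialRequest
    import androidx.credentials.GetPasswordOption
    import androidx.credentials.GetPublicKeyCredentialOption
    import androidx.credentials.PasswordCredential
    import androidx.credentials.PublicKeyCredential
    import androidx.credentials.exceptions.GetCredentialException

    suspend fun signIn(activity: Activity, requestJson: String) {
        val credentialManager = CredentialManager.create(activity)
        // One request covers passkeys and saved passwords alike
        val request = GetCredentialRequest(
            credentialOptions = listOf(
                GetPublicKeyCredentialOption(requestJson = requestJson),
                GetPasswordOption()
            )
        )
        try {
            val result = credentialManager.getCredential(activity, request)
            when (val credential = result.credential) {
                is PublicKeyCredential ->
                    // Passkey assertion: hand the JSON to the server for verification
                    sendAssertionToServer(credential.authenticationResponseJson)
                is PasswordCredential ->
                    verifyPassword(credential.id, credential.password)
                else -> { /* other credential types, e.g. federated sign-in */ }
            }
        } catch (e: GetCredentialException) {
            // The user cancelled or no credential was available
        }
    }

    // Hypothetical backend hooks, not part of the Credential Manager API
    fun sendAssertionToServer(responseJson: String) { /* ... */ }
    fun verifyPassword(id: String, password: String) { /* ... */ }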

X developers didn’t face many challenges when adopting Sign in with Google using the Credential Manager API. Replacing X’s previous Google Sign-In, One Tap, and Smart Lock code with a simpler Credential Manager implementation meant developers no longer had to handle connection or disconnection statuses and activity results, reducing the margin of error.

A UI example of passkeys on X

A future with passkeys

X's integration of passkeys shows that a more secure and user-friendly authentication experience is achievable. By leveraging the Credential Manager API, X developers simplified the integration process, reduced potential bugs, and improved both security and developer velocity, all while sharpening the user experience.

“Our advice for developers considering passkey integration would be to take advantage of the Credential Manager API,” said Saurabh. “It really simplifies the process and reduces code you need to write and maintain, making implementation better for developers.”

Looking ahead, X plans to further enhance the user experience by allowing sign-ups with passkeys alone and providing a dedicated passkey management screen.

Get started

Learn how to improve your app’s login UX using passkeys and the Credential Manager API.

FlipaClip optimizes for large screens and sees a 54% increase in tablet users

Posted by Miguel Montemayor – Developer Relations Engineer

FlipaClip is an app for creating dynamic and engaging 2D animations. Its powerful toolkit allows animators of all levels to bring their ideas to life, and its developers are always searching for new ways to help its users create anything they can imagine.

Increasing tablet support was pivotal in expanding FlipaClip users’ creative options, giving them more space and new ways of animating the stories they want to tell. Now, users on these devices can bring their visions to life more naturally thanks to Android’s intuitive features, like stylus compatibility and unique large screen menu interfaces.

Large screens are a natural canvas for animation

FlipaClip initially launched as a phone app, but as tablets became more mainstream, the team knew it needed to adapt the app to take full advantage of larger screens, which offer a more natural canvas for animation. After updating the app, tablet users quickly became a core revenue-generating audience for FlipaClip, representing more than 40% of the app’s total revenue.

“We knew we needed to prioritize the large screen experience,” said Tim Meson, the lead software engineer and co-founder of FlipaClip. “We believe the tablet experience is the ideal way to use FlipaClip because it gives users more space and precision to create.”

The FlipaClip team received numerous user requests for better stylus support on tablets, like pressure sensitivity, tilt, and new brush types. So the team gave users exactly what they wanted. Not only did it implement stylus support, but it also redesigned the large screen drawing area, allowing for more customization with movable tool menus and the ability to hide extra tools.

Now, unique menu interfaces and stylus support provide a more immersive and powerful creative experience for FlipaClip’s large screen users. By implementing many of the features its users requested and optimizing existing workspaces, FlipaClip increased its US tablet users by 54% in just four months. The quality of the animations made by FlipaClip artists also visibly increased, according to the team.


“We knew we needed to prioritize the large screen experience... because it gives users more space and precision to create.” — Tim Meson, Lead Software Engineer and Co-founder of FlipaClip

Improving large screen performance

One of the key areas the FlipaClip team focused on was achieving low-latency drawing, which is critical for a smooth and responsive experience, especially with a stylus. To help with this, the team built an entire drawing engine from the ground up using the Android NDK. The engine also improved overall app responsiveness, regardless of the input method.

“Focusing on GPU optimizations helped create more responsive brushes, a greater variety of brushes, and a drawing stage better suited for tablet users with more customization and more on-screen real estate,” said Tim.

Previously, FlipaClip drawings were rendered using CPU-backed surfaces, resulting in suboptimal performance, especially on lower-end devices. By utilizing the GPU for rendering and consolidating touch input with the app’s historical touch data, the FlipaClip team significantly improved responsiveness and fluidity across a range of devices.
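FlipaClip’s NDK engine is proprietary, but the input side of the technique is visible in the platform API: each MotionEvent can carry the intermediate samples batched since the last frame, and folding them into the stroke avoids losing detail at high stylus speeds. A minimal sketch, with addStrokePoint as a hypothetical hook into the renderer:

    import android.content.Context
    import android.view.MotionEvent
    import android.view.View

    class DrawingView(context: Context) : View(context) {

        override fun onTouchEvent(event: MotionEvent): Boolean {
            // Consume the intermediate samples batched since the last frame first...
            for (i in 0 until event.historySize) {
                addStrokePoint(
                    event.getHistoricalX(i),
                    event.getHistoricalY(i),
                    event.getHistoricalPressure(i)
                )
            }
            // ...then the current sample
            addStrokePoint(event.x, event.y, event.pressure)
            invalidate()
            return true
        }

        private fun addStrokePoint(x: Float, y: Float, pressure: Float) {
            // Hypothetical hook: hand the point to the GPU-backed stroke renderer
        }
    }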

“The improved performance enabled us to raise canvas size limits closer to 2K resolution,” said Tim. “It also resolved several reported application-not-responding errors by preventing excessive drawing attempts on the screen.”

After optimizing for large screens and reducing their crash rate across device types, FlipaClip’s user satisfaction improved, with a 15% improvement in their Play Store rating for large screen devices. The performance enhancements to the drawing engine were particularly well received among users, leading to better engagement and overall positive feedback.

Android Vitals, a tool in the Google Play Console for monitoring the technical quality of Android apps, was invaluable for identifying performance issues across the devices FlipaClip users own. It helped engineers pinpoint specific devices with lagging drawing performance and provided critical data to guide their optimizations.

FlipaClip UI examples across large screen devices

Listening to user feedback

Large screen users are Android’s fastest-growing audience, reaching over 300 million users worldwide. Letting users enjoy their favorite apps across device types, while making full use of the larger screens on tablets, means a more engaging experience for users to love.

“One key takeaway for us was always to take the time to review user feedback and app stability reports,” said Tim. “From addressing user requests for additional stylus support to pinpointing specific devices to improve drawing performance, these insights have been invaluable for improving the app and addressing pain points of large screen users.”

The FlipaClip team noted that developing for Android stood out in several ways compared to other platforms. One key difference is the libraries provided by the Android team, which are continuously updated and improved, allowing its engineers to seamlessly address and resolve any issues without requiring users to upgrade their Android OS.

“Libraries like Jetpack Compose can be updated independently of the device's system version, which is incredibly efficient,” said Tim. “Plus, Android documentation has gotten a lot better over the years. The documentation for large screens is a great example. The instructions are more thorough, and all the code examples and codelabs make it so much easier to understand.”

FlipaClip engineers plan to continue optimizing the app’s UI for larger screens and improve its unique drawing tools. The team also wants to introduce more groundbreaking animation tools, seamless cross-device syncing, and tablet-specific gestures to improve the overall animation experience on large screen devices.

Get started

Learn how to improve your UX by optimizing for large screens.

AllTrails gains over 1 million downloads after implementing its Wear OS app

Posted by Kseniia Shumelchyk – Developer Relations Engineer

With more than 65 million global users, AllTrails is one of the world’s most popular and trusted platforms for outdoor exploration. The app is designed to be the ultimate adventure companion, so the AllTrails team always works to improve users’ outdoor experience using the latest technology. Recently, its developers created a new Wear OS application. Now, users can access their favorite AllTrails features using their favorite Android wearables.

Growing the AllTrails ecosystem

AllTrails has had a great deal of growth from its Android users, and the app’s developers wanted to meet the needs of this growing segment by delivering new ways to get outside. That meant creating an ecosystem of connected experiences, and Wear OS was the perfect starting point. The team started by building essential functions for controlling the app, like pausing, resuming, and finishing hikes, straight from wearables.

“We know that the last thing you want as you’re pulling into the trailhead is to fumble with your phone and look for the trail, so we wanted to bring the trails to your fingertips,” said Sydney Cho, director of product management at AllTrails. “There’s so much cool stuff we want to do with our Wear OS app, but we decided to start by focusing on the fundamentals.”

After implementing core controls, AllTrails developers added more features to take advantage of the watch screen, like a circular progress ring to show users how far they are on their current route. Implementing new user interfaces is efficient since Compose for Wear OS provides built-in Material components for developers, like a CircularProgressIndicator.
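A minimal sketch of such a progress ring with the Wear Compose Material component; the fraction is assumed to be computed elsewhere from the user’s route progress, and the angle values are illustrative:

    import androidx.compose.foundation.layout.fillMaxSize
    import androidx.compose.foundation.layout.padding
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp
    import androidx.wear.compose.material.CircularProgressIndicator

    @Composable
    fun RouteProgressRing(fractionOfRouteComplete: Float) {
        // Full-screen ring with a gap at the bottom, e.g. for the time text
        CircularProgressIndicator(
            progress = fractionOfRouteComplete, // 0f..1f
            modifier = Modifier.fillMaxSize().padding(1.dp),
            startAngle = 295.5f,
            endAngle = 245.5f
        )
    }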

AllTrails’ mobile app warns users when they start to wander off-trail with wrong-turn alerts. AllTrails developers incorporated these alerts into the new Wear OS app, so users can get notified straight from their wrists and keep their phones in their pockets.

The new AllTrails Wear OS application has been hugely popular among users, and the team has received substantial positive feedback on the wearable experience, which has seen over 1 million downloads since launch.

“We’re seeing a lot of growth from Android users, and we want to provide them an ecosystem of connected experiences. Wearables are a core part of that experience.” — Sydney Cho, Director of Product Management at AllTrails

Streamlined development with Compose for Wear OS

To build the new wearable experience, AllTrails developers used Jetpack Compose for Wear OS. The modern declarative toolkit simplifies UI development by letting developers create reusable code blocks for basic functions, allowing for fast and efficient wearable app development.

“Compose for Wear OS definitely sped up development,” said Sydney. “It also gave our dev team exposure to the toolkit, which we’re obviously huge fans of and use for the majority of our new development.”

This was the first app AllTrails developers created entirely using Jetpack Compose, though they already use it for parts of the mobile app. Even with that brief experience with the toolkit, they knew it would greatly improve development, so it was an obvious choice for the Wear OS app.

“Jetpack Compose allowed us to iterate much more quickly,” said Sydney. “It’s incredibly simple to create composables, and the simplicity of previewing the app in various states is extremely helpful.”
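For instance, previewing a composable across round and square watch states is a single annotation with the Wear Compose tooling library; the snippet below previews the progress ring sketched earlier:

    import androidx.compose.runtime.Composable
    import androidx.wear.compose.ui.tooling.preview.WearPreviewDevices

    @WearPreviewDevices
    @Composable
    fun RouteProgressRingPreview() {
        RouteProgressRing(fractionOfRouteComplete = 0.6f)
    }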



Connecting health and fitness via Health Connect

AllTrails developers saw another opportunity to improve the user experience while building the new Wear OS application by integrating Health Connect. Health Connect is one of Android’s latest API offerings that gives users a simpler way to consolidate and share their health and fitness data across applications.

When users opt-in for Health Connect, they can share their various health and fitness data between applications, giving them a more comprehensive understanding of their activity regardless of the apps tracking it.

“Health Connect allows our users to sync their AllTrails activity recordings, like hiking, biking, running, and so on, directly on their phone,” said Sydney. “This activity can then be viewed within Health Connect or from other apps, giving users more freedom to see all their physical activity data, regardless of which app it was recorded on.”

Health Connect streamlines health data management using simple APIs and a straightforward data model. It acts as a centralized repository, consolidating health and fitness data from various apps, simply by having each app write its data to Health Connect. This means that even partial adoption of the API can yield benefits.
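Under that model, each app only has to insert its own records. A hedged sketch of writing a hike session with the Jetpack Health Connect client (permission handling is omitted, and newer versions of the library also require a metadata argument):

    import androidx.health.connect.client.HealthConnectClient
    import androidx.health.connect.client.records.ExerciseSessionRecord
    import java.time.Instant
    import java.time.ZoneOffset

    suspend fun writeHikeSession(client: HealthConnectClient, start: Instant, end: Instant) {
        val session = ExerciseSessionRecord(
            startTime = start,
            startZoneOffset = ZoneOffset.UTC,
            endTime = end,
            endZoneOffset = ZoneOffset.UTC,
            exerciseType = ExerciseSessionRecord.EXERCISE_TYPE_HIKING,
            title = "Afternoon hike" // illustrative
        )
        // Health Connect stores the record; any app the user grants
        // read permission can then query it
        client.insertRecords(listOf(session))
    }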

AllTrails developers enjoyed how easy it was to integrate Health Connect, thanks to its straightforward and well-documented APIs that were “very simple but extremely powerful.”

AllTrails: over 1 million downloads since implementing the Wear OS app

What’s ahead with Wear OS

Implementing a new Wear OS application did more than give AllTrails’ users a new way to interact with the app. It lets them put their phones back in their pockets so they can enjoy more of what’s on the trail. By prioritizing core functionalities like nearby trail access, recording control, and real-time alerts, AllTrails delivered a seamless and intuitive wearable experience, enriching UX with impressive user adoption and retention rates.

Get started

Learn more about building wearable apps with design and developer guidance for Wear OS.

SAP integrated NavigationSuiteScaffold in just 5 minutes to create adaptive navigation UI

Posted by Alex Vanyo – Developer Relations Engineer

SAP Mobile Start is an app that centralizes access to SAP's mobile business suite, a hub for users to keep track of their companies’ processes and data so they can efficiently manage their daily to-dos while on the move.

Recently, SAP Mobile Start developers prioritized building an adaptive app that looks great across devices, including tablets and foldables, to create a more seamless user experience. Using Jetpack Compose and Material 3 design, the team efficiently implemented intuitive, user-friendly features to increase accessibility across its users’ preferred devices.


Adaptive design across devices

With over 300 million daily active users on foldables, tablets, and Chromebooks today, building apps that adapt to varied screen sizes is important for providing an optimal user experience. But simply stretching the UI to fit different screen sizes can drastically alter it from its original form, obscuring the interface and impairing the user experience.

“We focused on situations where we could make better use of available space on large screens,” said Laura Bergmann, UX designer for SAP. “We wanted to get rid of screens that are stretched from edge to edge, full-screen drill-downs or dialogs, and use space more efficiently.”

Now, after optimizing for different devices, SAP Mobile Start dynamically adjusts its layouts by swapping components and showing or hiding content based on the available window size instead of stretching UI elements to match a device's screen.

The SAP team also implemented canonical layouts, common UI designs that split a screen into panes according to its size. By separating content into panes, SAP’s users can manage their business workflows more productively. Depending on the window size class, the supporting pane adjusts the UI without additional custom logic. For example, compact windows typically utilize one pane, while larger windows can utilize multiple.

“Adopting the new canonical layouts from Google helped us focus more on designing unique app capabilities for SAP’s business scenarios,” said Laura. “With the available navigational elements and patterns, we can now channel our efforts into creating a more engaging user experience without reinventing the wheel.”

SAP developers started by implementing supporting panes to create multi-pane layouts that efficiently utilize on-screen space. The first place developers added supporting panes was on the app’s “To-Do” details page. To-dos used to be managed in a single pane, making it difficult to review the comments and tickets simultaneously. Now, tickets and comments are reviewed in primary and secondary panes on the same screen using SupportingPaneScaffold.
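SAP’s composables aren’t public, but the pattern maps directly onto the Material 3 adaptive APIs; in this sketch, TicketPane and CommentsPane are hypothetical stand-ins for SAP’s panes:

    import androidx.compose.material3.adaptive.ExperimentalMaterial3AdaptiveApi
    import androidx.compose.material3.adaptive.layout.AnimatedPane
    import androidx.compose.material3.adaptive.layout.SupportingPaneScaffold
    import androidx.compose.material3.adaptive.navigation.rememberSupportingPaneScaffoldNavigator
    import androidx.compose.runtime.Composable

    @Composable fun TicketPane() { /* ticket details */ }
    @Composable fun CommentsPane() { /* comment thread */ }

    @OptIn(ExperimentalMaterial3AdaptiveApi::class)
    @Composable
    fun ToDoDetailScreen() {
        val navigator = rememberSupportingPaneScaffoldNavigator()
        SupportingPaneScaffold(
            directive = navigator.scaffoldDirective,
            value = navigator.scaffoldValue,
            // On a large window both panes show side by side;
            // on a compact one the scaffold shows the main pane alone
            mainPane = { AnimatedPane { TicketPane() } },
            supportingPane = { AnimatedPane { CommentsPane() } }
        )
    }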

“We focused on making better use of the available space on large screens. We wanted to move away from UIs that are stretched to adaptive layouts that enhance productivity.” — Laura Bergmann, UX Designer at SAP

Fast implementation using Compose Material 3 Adaptive library

SAP Mobile Start is built entirely with Jetpack Compose, Android’s modern declarative toolkit for building native UI. Compose helped SAP developers build new UI faster and easier than ever before thanks to composables, reusable code blocks for building common UI components. The team also used Compose Navigation to integrate seamless navigation between composables, streamlining movement between screens.

It took developers only five minutes to integrate the NavigationSuiteScaffold from the new Compose Material 3 adaptive library, rapidly adapting the app’s navigation UI to different window sizes, switching between a bottom navigation bar and a vertical navigation rail. It also eliminated the need for custom logic, which previously determined the navigation component based on various window size classes. The NavigationSuiteScaffold also reduced the custom navigation UI logic code by 59%, from 379 lines to 156.
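A minimal sketch of that integration, with hypothetical destinations standing in for SAP Mobile Start’s tabs; NavigationSuiteScaffold picks a bottom bar or a navigation rail automatically based on the window size:

    import androidx.compose.material.icons.Icons
    import androidx.compose.material.icons.filled.CheckCircle
    import androidx.compose.material.icons.filled.Home
    import androidx.compose.material.icons.filled.Settings
    import androidx.compose.material3.Icon
    import androidx.compose.material3.Text
    import androidx.compose.material3.adaptive.navigationsuite.NavigationSuiteScaffold
    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.getValue
    import androidx.compose.runtime.mutableStateOf
    import androidx.compose.runtime.remember
    import androidx.compose.runtime.setValue
    import androidx.compose.ui.graphics.vector.ImageVector

    // Hypothetical destinations standing in for the app's tabs
    enum class AppDestination(val label: String, val icon: ImageVector) {
        Home("Home", Icons.Default.Home),
        ToDos("To-Dos", Icons.Default.CheckCircle),
        Settings("Settings", Icons.Default.Settings)
    }

    @Composable
    fun MainScaffold() {
        var current by remember { mutableStateOf(AppDestination.Home) }
        // Renders a bottom bar or a navigation rail based on the window size
        NavigationSuiteScaffold(
            navigationSuiteItems = {
                AppDestination.entries.forEach { destination ->
                    item(
                        selected = destination == current,
                        onClick = { current = destination },
                        icon = { Icon(destination.icon, contentDescription = destination.label) },
                        label = { Text(destination.label) }
                    )
                }
            }
        ) {
            // Content for the selected destination
        }
    }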

“Jetpack Compose simplified UI development,” said Aditya Arora, lead Android developer. “Its declarative nature, coupled with built-in support for Material Design and dark theme, significantly increased our development efficiency. By simply describing the desired UI, we've reduced code complexity and improved maintainability.”

SAP developers used live edit and layout inspector in Android Studio to test and optimize the app for large screens. These features were “total game changers” for the SAP team because they helped iterate and inspect layout issues faster when optimizing for new screens.

With its @PreviewScreenSizes annotation and device streaming powered by Firebase, Jetpack Compose also made testing the app's UI across various screen sizes easier. SAP developers look forward to Compose Screenshot Testing being completed, which will further streamline UI testing and ensure greater visual consistency within the app.

Using Jetpack Compose, SAP developers also quickly and easily implemented new Material 3 design concepts from the Compose M3 Adaptive library. Material 3 design emphasizes personalizing the app experience, improving interactions with modern visual aesthetics.

Compose's flexibility made replacing the standard Material Theme with their own custom Fiori Horizon Theme simple, ensuring a consistent visual appearance across SAP apps. “As early adopters of the Compose M3 Adaptive library, we collaborated with Google to refine the API,” said Aditya. “Since our app is completely Compose-based, leveraging the new Compose Material 3 Adaptive library was a piece of cake.”

A list layout adapting to and from a list detail layout depending on the window size

As large-screen devices like tablets, foldables, and Chromebooks become more popular, building layouts that adapt to varied screen sizes becomes increasingly crucial. For SAP Mobile Start developers, reimagining their app across devices using Jetpack Compose and Material 3 design guidelines was simple. Using Android’s collection of tools and resources, creating adaptive UIs for all the new form factors hitting the market today is faster and easier than ever.

“Optimizing for large screens is crucial. The market for tablets, foldables, and Chromebooks is booming. Don't miss out on this opportunity to improve your user experience and expand your app's reach,” said Aditya.

Get started

Learn how to improve your UX by optimizing for large screens and foldables using Jetpack Compose and Material 3 design.

TalkBack uses Gemini Nano to increase image accessibility for users with low vision

Posted by Terence Zhang – Developer Relations Engineer and Lisie Lillianfeld - Product Manager

TalkBack is the screen reader in the Android Accessibility Suite that describes text and images for Android users who are blind or have low vision. The TalkBack team is always working to make Android more accessible. Today, thanks to Gemini Nano with multimodality, TalkBack automatically provides users who are blind or have low vision with more vivid and detailed image descriptions to better understand the images on their screen.

Increasing accessibility using Gemini Nano with multimodality

Advancing accessibility is a core part of Google’s mission to build for everyone. That’s why TalkBack has a feature to describe images when developers haven’t included descriptive alt text. The feature was previously powered by a small ML model called Garcon, which produced short, generic responses and couldn’t specify relevant details like landmarks or products.

The development of Gemini Nano with multimodality was the perfect opportunity to use the latest AI technology to increase accessibility with TalkBack. Now, when TalkBack users opt in on eligible devices, the screen reader uses Gemini Nano’s new multimodal capabilities to automatically provide users with clear, detailed image descriptions in apps including Google Photos and Chrome, even if the device is offline or has an unstable network connection.

“Gemini Nano helps fill in missing information,” said Lisie Lillianfeld, product manager at Google. “Whether it’s more details about what’s in a photo a friend sent or the style and cut of clothing when shopping online.”

Going beyond basic image descriptions

Here’s an example that illustrates how Gemini Nano improves image descriptions: When Garcon is presented with a panorama of the Sydney, Australia shoreline at night, it might read: “Full moon over the ocean.” Gemini Nano with multimodality can paint a richer picture, with a description like: “A panoramic view of Sydney Opera House and the Sydney Harbour Bridge from the north shore of Sydney, New South Wales, Australia.”

“It's amazing how Nano can recognize something specific. For instance, the model will recognize not just a tower, but the Eiffel Tower,” said Lisie. “This kind of context takes advantage of the unique strengths of LLMs to deliver a helpful experience for our users.”

Using an on-device model like Gemini Nano was the only feasible way for TalkBack to provide automatically generated, detailed image descriptions, even while the device is offline.

“The average TalkBack user comes across 90 unlabeled images per day, and those images weren't as accessible before this new feature,” said Lisie. The feature has gained positive user feedback, with early testers writing that the new image descriptions are a “game changer” and that it’s “wonderful” to have detailed image descriptions built into TalkBack.


“Gemini Nano with multimodality was critical to improving the experience for users with low vision. Providing detailed on-device image descriptions wouldn’t have been possible without it.” — Lisie Lillianfeld, Product Manager at Google

Balancing inference verbosity and speed

One important tradeoff the Android accessibility team weighed when implementing Gemini Nano with multimodality was between inference verbosity and speed, which is partially determined by image resolution. Gemini Nano with multimodality currently accepts images at either 512-pixel or 768-pixel resolution.

“The 512-pixel resolution emitted its first token almost two seconds faster than 768 pixels, but the output wasn't as detailed,” said Tyler Freeman, a senior software engineer at Google. “For our users, we decided a longer, richer description was worth the increased latency. We were able to hide the perceived latency a bit by streaming the tokens directly to the text-to-speech system, so users don’t have to wait for the full text to be generated before hearing a response.”
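TalkBack’s on-device pipeline is internal, but the pattern Tyler describes, speaking tokens as they arrive rather than after generation finishes, can be sketched with the cloud Gemini SDK for Android, whose streaming call returns a Flow:

    import android.graphics.Bitmap
    import android.speech.tts.TextToSpeech
    import com.google.ai.client.generativeai.GenerativeModel
    import com.google.ai.client.generativeai.type.content

    suspend fun speakImageDescription(model: GenerativeModel, image: Bitmap, tts: TextToSpeech) {
        val prompt = content {
            image(image)
            text("Describe this image for a screen reader user.")
        }
        // Queue each chunk as it arrives instead of waiting for the full description
        model.generateContentStream(prompt).collect { chunk ->
            chunk.text?.let { tts.speak(it, TextToSpeech.QUEUE_ADD, null, "image-description") }
        }
    }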

A hybrid solution using Gemini Nano and Gemini 1.5 Flash

TalkBack developers also implemented a hybrid AI solution using Gemini 1.5 Flash. With this server-based AI model, TalkBack can provide the best of on-device and server-based generative AI features to make the screen reader even more powerful.

When users want more details after hearing an automatically generated image description from Gemini Nano, TalkBack gives the user an option to listen to more by running the image through Gemini Flash. When users focus on an image, they can use a three-finger tap to open the TalkBack menu and select the “Describe Image” option to send the image to Gemini 1.5 Flash on the server and get even more details.

By combining the unique advantages of both Gemini Nano's on-device processing with the full power of cloud-based Gemini 1.5 Flash, TalkBack provides blind and low-vision Android users a helpful and informative experience with images. The “describe image” feature powered by Gemini 1.5 Flash launched to TalkBack users on more Android devices, so even more users can get detailed image descriptions.


Animated UI example of TalkBack in action, describing a photo of a sunny view of Sydney Harbor, Australia, with the Sydney Opera House and Sydney Harbour Bridge in the frame.

Compact model, big impact

The Android accessibility team recommends that developers looking to use Gemini Nano with multimodality prototype and test on a powerful server-side model first. There, developers can understand the UX faster, iterate on prompt engineering, and get a better idea of the highest quality possible using the most capable model available.

While Gemini Nano with multimodality can fill in missing context to improve image descriptions, it’s still best practice for developers to provide detailed alt text for all images in their apps or on their websites. If alt text is not provided, TalkBack can help fill in the gaps.

The Android accessibility team’s goal is to create inclusive and accessible features, and leveraging Gemini Nano with multimodality to provide vivid and detailed image descriptions automatically is a big step towards that. Furthermore, their hybrid approach towards AI, combining the strengths of both Gemini Nano on device and Gemini 1.5 Flash in the server, showcases the transformative potential of AI in promoting inclusivity and accessibility and highlights Google's ongoing commitment to building for everyone.

Get started

Learn more about Gemini Nano for app development.



Instagram’s early adoption of Ultra HDR transforms user experience in only 3 months

Posted by Mayuri Khinvasara Khabya – Developer Relations Engineer, Google; in partnership with Bismark Ito - Android Developer, Rex Jin - Android Developer and Bei Yi - Partner Engineering

Meta’s Instagram is one of the world's most popular social networking apps that helps people connect, find communities, and grow their businesses in new and innovative ways. Since its release in 2010, photographers and creators alike have embraced the platform, making it a go-to hub of artistic expression and creativity.

Instagram developers saw an opportunity to build a richer media experience by becoming an early adopter of the Ultra HDR image format, a new feature introduced with Android 14. With its adoption of Ultra HDR, Instagram completely transformed and improved its user experience in just three months.

Enhancing Instagram photo quality with Ultra HDR

The development team wanted to be an early adopter of Ultra HDR because photos and videos are Instagram's most important form of interaction and expression, and improving image quality aligns with Meta’s goal of connecting people, communities, and businesses. “Android rapidly adopts the latest media technology so that we can bring the benefits to users,” said Rex Jin, an Android developer on the Instagram Media Platform team.

Instagram developers started implementing Ultra HDR in late September 2023. Ultra HDR images store more information about light intensity for more detailed highlights, shadows, and crisper colors. The format also enables capturing, editing, sharing, and viewing HDR photos, a significant improvement over standard dynamic range (SDR) photos, while remaining backward compatible. Users can seamlessly post, view, edit, and apply filters to Ultra HDR photos without compromising image quality.
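On Android 14, the backward compatibility comes from a gain map embedded in an otherwise ordinary JPEG: SDR decoding simply ignores it, while an HDR-aware window applies it. A minimal sketch of detecting and displaying an Ultra HDR photo (the view wiring is illustrative):

    import android.app.Activity
    import android.content.pm.ActivityInfo
    import android.graphics.BitmapFactory
    import android.widget.ImageView

    fun showUltraHdrPhoto(activity: Activity, imageView: ImageView, jpegBytes: ByteArray) {
        val bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.size)
        if (bitmap.hasGainmap()) { // API 34+: the bitmap carries an Ultra HDR gain map
            // Opt the window into HDR so highlights render above SDR white
            // on capable displays
            activity.window.colorMode = ActivityInfo.COLOR_MODE_HDR
        }
        imageView.setImageBitmap(bitmap) // the SDR rendition still looks correct
    }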

Since the update, Instagram has seen a large surge in Ultra HDR photo uploads. Users have also embraced their new ability to edit up to 10 Ultra HDR images simultaneously and share photos that retain the full color and dynamic camera capture range. Instagram’s pioneering integration of Ultra HDR earned industry-wide recognition and praise when it was announced at Samsung Unpacked and in a Pixel Feature Drop.

“Image sharing is how Instagram started, and we want to ensure we always provide the best and greatest image quality to users and creators.” — Bei Yi, Partner Engineering at Meta

Pioneering Ultra HDR integrations

Being early adopters of Android 14 meant working with beta versions of the operating system and addressing the challenges associated with implementing a brand-new feature that’s never been tested publicly. For example, Instagram developers needed to find innovative solutions to handle the expanded color space and larger file sizes of Ultra HDR images while maintaining compatibility with Instagram's diverse editing features and filters.

The team found solutions during the development process by using code examples for HDR photo capture and rendering. Instagram also partnered with Google’s Android Camera & Media team to address the challenges of displaying Ultra HDR images, share its developer experience, and provide feedback during integration. The partnership sped up the integration and ensured the feedback shared was implemented quickly.

“With Android being an open source project, we can build more optimized media solutions with better performance on Instagram,” said Bismark Ito, an Android developer at Instagram. “I feel accomplished when I find a creative solution that works on a range of devices with different hardware capabilities.”

UI image of an uploaded Instagram post that was taken using Ultra HDR

Building for the future with Android 15

Ultra HDR has significantly enhanced Instagram’s photo-sharing experience, and Meta is already planning to expand support to more devices and add future image and video quality improvements. With the upcoming Android 15 release, the company plans to explore new APIs and features that amplify its mission of connecting people, communities, and businesses.

As the Ultra HDR development process showed, being the first to adopt a new feature involves navigating new challenges to give users the best possible experience. However, collaborating with Google teams and Android’s open source community can help make the process smoother.

Get started

Learn how to revolutionize your app’s user experience with Ultra HDR images.

The Recorder app on Pixel sees a 24% boost in engagement with Gemini Nano-powered feature

Posted by Terence Zhang – Developer Relations Engineer and Kristi Bradford - Product Manager

Google Pixel’s Recorder app allows people to record, transcribe, save, and share audio. To make it easier for users to manage and revisit their recordings, Recorder’s developers turned to Gemini Nano, a powerful on-device large language model (LLM). This integration introduces an AI-powered audio summarization feature to help users more easily find the right recordings and quickly grasp key points.

Earlier this month, Gemini Nano got a power boost with the introduction of the new Gemini Nano with multimodality model. The Recorder app is already leveraging this upgrade to summarize longer voice recordings, with improved processing for grammar and nuance.

Meeting user needs with on-device AI

Recorder developers initially experimented with a cloud-based solution, achieving impressive levels of performance and quality. However, to prioritize accessibility and privacy for their users, they sought an on-device solution. The development of Gemini Nano presented a perfect opportunity to build the concise audio summaries users were looking for, all while keeping data processing on the device.

Gemini Nano is Google’s most efficient model for on-device tasks. “Having the LLM on-device is beneficial to users because it provides them with more privacy, less latency, and it works wherever they need since there’s no internet required,” said Kristi Bradford, the product manager for Pixel’s essential apps.

To achieve better results, Recorder also fine-tuned the model using data that matches its use case. This was done using low-rank adaptation (LoRA), which enables Gemini Nano to consistently output three-bullet-point descriptions of the transcript that include any speaker names, key takeaways, and themes.
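For context, LoRA freezes the pretrained weights and trains only a small low-rank update per adapted layer, which is what makes specializing an on-device model practical. In the standard formulation, a frozen weight matrix W is adapted as

    W' = W + BA,  with B ∈ ℝ^(d×r), A ∈ ℝ^(r×k), and rank r ≪ min(d, k)

so only the r(d + k) parameters of B and A are trained while W stays fixed.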

AICore, an Android system service that centralizes runtime, delivery, and critical safety components for LLMs, significantly streamlined Recorder’s adoption of Gemini Nano. The availability of a developer SDK for running GenAI workloads allowed the team to build the transcription summary feature in just four months with only four developers, since there was no need to build and maintain in-house models.
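Recorder’s integration is internal, but the experimental Google AI Edge SDK exposes the same AICore-backed Gemini Nano through a small Kotlin API. Treat the following as an assumption-laden sketch: availability, model behavior, and exact configuration options vary by device and SDK version, and the prompt is illustrative.

    import android.content.Context
    import com.google.ai.edge.aicore.GenerativeModel
    import com.google.ai.edge.aicore.generationConfig

    suspend fun summarizeTranscript(appContext: Context, transcript: String): String? {
        // Build the on-device model client; AICore manages the model itself
        val model = GenerativeModel(
            generationConfig = generationConfig {
                context = appContext // the config requires an application context
                temperature = 0.2f   // illustrative sampling settings
                topK = 16
                maxOutputTokens = 256
            }
        )
        val response = model.generateContent(
            "Summarize this recording in three bullet points, naming the speakers: $transcript"
        )
        return response.text
    }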

Since its release, Recorder users have been using the new AI-powered summarization feature an average of two to five times a day, and the number of overall saved recordings has increased by 24%. The feature has contributed to a significant increase in app engagement and user retention overall. The Recorder team also noted that feedback about the new feature has been positive, with many users citing the time the AI-powered summarization saves them.

“We were surprised by how truly capable the model was… before and after LoRA tuning.” — Kristi Bradford, product manager for Pixel’s essential apps

The next big evolution: Gemini Nano with multimodality

Recorder developers also implemented the latest Gemini Nano model, known as Gemini Nano with multimodality, to further improve its summarization feature on Pixel 9 devices. The new model is significantly larger than the previous one on Pixel 8 devices, and it’s more capable, accurate, and scalable. The new model also has expanded token support that lets Recorder summarize much longer transcripts than before. Gemini Nano with multimodality is currently only available on Pixel 9 devices.

Integrating Gemini Nano with multimodality required another round of fine-tuning. However, Recorder developers were able to use the original Gemini Nano model's fine-tuning dataset as a foundation, streamlining the development process.

To fully leverage the new model's capabilities, Recorder developers expanded their dataset with support for longer voice recordings, implemented refined evaluation methods, and established launch criteria metrics focused on grammar and nuance. The inclusion of grammar as a new metric for assessing inference quality was made possible solely by the enhanced capabilities of Gemini Nano with Multimodality.


Doing more with on-device AI

“Given the novelty of GenAI, the whole team had fun learning how to use it,” said Kristi. “Now, we’re empowered to push the boundaries of what we can accomplish while meeting emerging user needs and opportunities. It’s truly brought a new level of creativity to problem-solving and experimentation. We’ve already demoed at least two more GenAI features that help people get time back internally for early feedback, and we’re excited about the possibilities ahead.”

Get started

Learn more about how to bring the benefits of on-device AI with Gemini Nano to your apps.

Max implemented UI changes 30% faster using Jetpack Compose

Posted by Tomáš Mlynarič, Developer Relations Engineer

Max®, which launched in the US on May 23, 2023, is an enhanced streaming platform from Warner Bros. Discovery, delivering unparalleled quality content for everyone in the household. Max developers want to provide the best UX possible, and they’re always searching for new ways to do that. That’s why Max developers built the app using Jetpack Compose, Android’s modern declarative toolkit for creating native UI. Building Max’s UI with Compose set the app up for long-term success, enabling developers to build new experiences in a faster and easier way.

Compose streamlines development

Max is the latest app from Warner Bros. Discovery and builds on the company’s prior learnings from HBO Max and discovery+. When Max development began in late 2022, developers had already used Compose to build the content discovery feature on discovery+—one of its core UI features.

“It was natural to continue our adoption of Compose to the Max platform,” said Boris D’Amato, Sr. Staff Software Engineer at Max.

Given the team’s previous experience using Compose on discovery+, they knew it would streamline development and improve the app’s maintainability. In the end, building Max with Compose reduced the app’s boilerplate code, increased the re-usability of its UI elements, and boosted developer productivity overall.

“Compose significantly reduced the time required to implement UI changes, solving the pain point of maintaining a large, complex UI codebase and making it easier to iterate on the app's design and user experience,” said Boris.

Today, Max’s UI is built almost entirely with Compose, and developers estimate that adopting Compose allowed them to implement UI changes 30% faster than with Views. Thanks to the toolkit’s modular nature, developers could build highly reusable components and adapt or combine them to form new UI elements, creating a more cohesive app design.

“Compose significantly reduced the time required to implement UI changes, solving the pain point of maintaining a large, complex UI codebase and making it easier to iterate on the app's design and user experience.” — Boris D’Amato, Sr. Staff Software Engineer at Max

More improvements with Compose

Today, Compose is so integral to Max’s design that the app’s entire UI architecture is built specifically to support it. For example, developers created a system to dynamically render server-driven, editorially curated content and user-personalized recommendations without having to ship a new version of the app. To support this system, developers followed the best practices for architecting Compose apps, leveraging Compose’s smart recomposition and skippability for the smoothest experience possible.
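Max’s implementation is proprietary, but the shape of such a system is a server-delivered component list mapped onto composables. A schematic sketch with hypothetical types; new server-side layouts only need a new branch, not a new app release:

    import androidx.compose.foundation.lazy.LazyColumn
    import androidx.compose.foundation.lazy.items
    import androidx.compose.runtime.Composable

    // Hypothetical server-driven component model
    sealed interface UiComponent
    data class HeroBanner(val title: String, val imageUrl: String) : UiComponent
    data class Rail(val title: String, val items: List<String>) : UiComponent

    @Composable fun HeroBannerRow(banner: HeroBanner) { /* ... */ }
    @Composable fun RailRow(rail: Rail) { /* ... */ }

    @Composable
    fun ServerDrivenScreen(components: List<UiComponent>) {
        LazyColumn {
            items(components) { component ->
                // Map each server-described component onto its composable
                when (component) {
                    is HeroBanner -> HeroBannerRow(component)
                    is Rail -> RailRow(component)
                }
            }
        }
    }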

Much like the discovery+ platform, Compose is also used for Max’s content discovery feature. This feature helps Max serve tailored content to each user based on how they use the app. Thanks to Compose, it was easy for developers to ensure this feature worked as intended because it allowed them to test each part in manageable segments.

“One of the features most impacted by using Compose was our content discovery system. Compose enabled us to create a highly dynamic and interactive interface that adapts in real-time to user context and preferences,” said Boris.

Adapting to users’ unique needs is another reason Compose has impressed Max developers. Compose makes it easy to support the many different screens and form factors available on the market today. With the Window size classes API, Max can scale its UI in real time to accommodate screen size and shape variations for tablets and foldables.
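That branching typically comes down to a few lines with the Material 3 window size class API; in this minimal sketch, the two layout composables are hypothetical:

    import android.app.Activity
    import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
    import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
    import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
    import androidx.compose.runtime.Composable

    @Composable fun SinglePaneLayout() { /* phone UI */ }
    @Composable fun TwoPaneLayout() { /* tablet, foldable, or desktop-window UI */ }

    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    @Composable
    fun RootLayout(activity: Activity) {
        val windowSizeClass = calculateWindowSizeClass(activity)
        when (windowSizeClass.widthSizeClass) {
            WindowWidthSizeClass.Compact -> SinglePaneLayout()
            else -> TwoPaneLayout() // Medium and Expanded widths get two panes
        }
    }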

Examples of UX on large and small screens

The future with Compose

Since adopting Compose, the Max team has noticed increased interest from prospective job candidates excited about working with the latest Android technologies.

“Whenever we mention that Max is built using Compose, the excitement in the candidates is palpable. It indicates that we’re investing in keeping our tech stack updated and our focus on the developer experience,” said Boris.

Looking ahead, the Max team plans to lean further into its Compose codebase and make even more use of the toolkit’s features, like animation APIs, predictive gestures, and widgets.

“I absolutely recommend Jetpack Compose. Compose's declarative approach to UI development allows for a more intuitive and efficient design process, making implementing complex UIs and animations easy. Once you try Compose, there’s no going back,” said Boris.

Get started

Optimize your UI development with Jetpack Compose.

Developers for adidas CONFIRMED build features 30% faster using Jetpack Compose

Posted by Nick Butcher – Product Manager for Jetpack Compose, and Florina Muntenescu – Developer Relations Engineer

adidas CONFIRMED is an app for the brand’s most loyal fans who want its latest, curated collections that aren’t found anywhere else. The digital storefront gives streetwear, fashion, and style enthusiasts access to adidas' most exclusive drops and crossovers so they can shop them as soon as they go live. The adidas CONFIRMED team wants to provide users a premium experience, and it’s always exploring new ways to elevate the app’s UX. Today, its developers are more equipped than ever to improve the in-app experience using Jetpack Compose, Android’s modern declarative toolkit for building UI.

Improving the UX with Jetpack Compose

adidas CONFIRMED designers conduct quarterly consumer surveys for feedback from users regarding new app flows and UI enhancements. Their surveys revealed that 80% of the app’s users prefer animated visuals because animations encourage them to explore and interact with the app more. adidas CONFIRMED developers wanted to implement new design elements and animations across the app’s interface to strengthen engagement, but the app’s previous View-based system limited their ability to create engaging UX in a scalable way.

“We decided to build dynamic elements and animations across many of our screens and user journeys,” said Rodrigo Represa, an Android engineer at adidas. “We had an ambitious list of UI updates we wanted to make and started looking for solutions to help us achieve them.”

Switching to Compose allowed adidas CONFIRMED developers to create features faster than ever. The improvement in engineering efficiency has been noticeable, with the team estimating that Compose enables them to create new features roughly 30% faster than with Views. Today, more than 80% of the app’s UI has been migrated to Compose.

“I can build the same feature with Compose about 30% faster than with Views.” — Rodrigo Represa, Android engineer at adidas

Innovating the in-app experience

As part of the app’s new interface update, adidas CONFIRMED developers created an exciting, animated experience called Shoes Tournament. This competition positions different brand-collaborator sneakers head to head in a digital tournament where users vote for their favorite shoe. It took two developers only three months to build this feature from the ground up using Compose. And users loved it — it increased the app’s weekly active users by 8%.

Shoe Tournament UI: it took adidas' Android developers only three months to build this feature from the ground up using Compose.

Before transitioning to Compose, it was hard for the team to customize the adidas CONFIRMED app to incorporate branding from its collaborators. With Compose, it’s easy. For instance, the app’s developers can now create a dynamic design system using CompositionLocals. This functionality helps developers update the app's appearance during collab launches, providing a more appealing user experience while maintaining a consistent and clean design.
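A minimal sketch of that idea, with a hypothetical collab theme model; anything under the provider reads LocalCollabTheme.current, so a collab launch can restyle an entire subtree without threading parameters through every composable:

    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.CompositionLocalProvider
    import androidx.compose.runtime.staticCompositionLocalOf
    import androidx.compose.ui.graphics.Color

    // Hypothetical collab theming model
    data class CollabTheme(val accentColor: Color, val collabName: String)

    val LocalCollabTheme = staticCompositionLocalOf { CollabTheme(Color.Black, "adidas") }

    @Composable
    fun CollabLaunchScreen(theme: CollabTheme, content: @Composable () -> Unit) {
        // Every composable below can read LocalCollabTheme.current
        CompositionLocalProvider(LocalCollabTheme provides theme) {
            content()
        }
    }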

One of the most exciting animations adidas CONFIRMED developers added utilized device sensors. Users can view and interact with the products they’re looking at on product display pages by simply moving their devices, just as if they were holding the product in real life. Developers used Compose to create realistic lighting effects for the animation to make the viewing experience more engaging.

An easier way to build UI

Using composables allowed adidas CONFIRMED developers to reuse existing components. As both the flagship adidas app and the adidas CONFIRMED app are part of the same monorepo, engineers could reuse composables across both apps, like forms and lists, enabling them to implement new features quickly and easily.

“The accelerated development with Compose provided our team of seven with more time, enabling us to strike a healthy balance between delivering new functionalities and ensuring the long-term health and sustainability of our app,” said Rodrigo.

Compose also helped improve app stability and performance for the team. Since migrating to Compose, they have noticed a significant reduction in app-related crashes and have seen virtually no UI-related crashes. The team is proud to provide a 99.9% crash-free user experience.

“Compose’s efficiency not only accelerated development, but also helped us achieve our business goals.” — Rodrigo Represa, Android engineer at adidas

A better app built with the future in mind

Compose opened doors to implementing new features faster than ever. With Compose’s clean and concise usage of Kotlin, it was easy for developers to create the ambitious and engaging interface adidas CONFIRMED users wanted. And the team doesn’t plan to stop there.

The adidas CONFIRMED team wants to lean further into its new codebase and fully adopt Compose moving forward. They also want to bring the app to new screens using more of the Compose suite and are currently developing an app widget using Jetpack Glance. This new experience will provide users with a streamlined feed of new product information for an even more efficient user experience.

“I recommend Compose because it simplifies development and is a more intuitive and powerful approach to building UI,” said Rodrigo.

Get started

Optimize your UI development with Jetpack Compose.