Category Archives: Android Developers Blog


#WeArePlay | 4 stories of founders building apps for the LGBTQIA+ community

Posted by Robbie McLachlan, Developer Marketing

#WeArePlay celebrates the inspiring journeys of people behind apps and games on Google Play. In honor of Pride Month, we are highlighting founders who have built tools to empower the LGBTQIA+ community. From dating apps to mental health tools to storytelling platforms, these founders are paving the way for more inclusive technology.


npckc is a game creator from Kanto, Japan whose stories portray the trans experience

npckc – Game Creator, Kanto, Japan

Born in Hong Kong and raised in Canada, npckc is a trilingual translator based in Japan. A self-taught programmer, they create games that feature stories and characters which are often from marginalized communities. One such game is "one night, hot springs" where players follow Haru, a trans woman, as she embarks on a visit to the hot springs. Players have praised the game's realistic portrayal of trans experiences and the relaxing music composed by npckc's partner, sdhizumi. As a finalist in Google Play's Indie Games Festival in Japan, they hope to attend more gaming conventions to connect with fellow developers in person.


Anshul and Rohan from Mumbai, India built a mental health support app geared to the LGBTQIA+ community’s needs

Anshul and Rohan – App Creators, Mumbai, India

After Anshul returned to India from London, he met Rohan and the pair bonded over their mental health struggles. Together they shared a dream: to create something in the wellness space. This became Evolve, an app with guided meditations, breathing exercises, and daily affirmations. When the pandemic hit, the pair saw first-hand how underserved the LGBTQIA+ community was in mental health support. For Rohan, who identifies as a gay man, this realization hit close to home. Together, Anshul and Rohan redeveloped Evolve around the LGBTQIA+ community’s specific needs, building a safe space where users can share their experiences, seek mentorship, and build a supportive community.


BáiYù from Indiana, U.S. created a platform to publish authentic, queer visual novels and indie games

BáiYù – Game Creator, Indiana, USA

Queer developer BáiYù loves writing stories, and started making games at age 16. Part of a game-development community, BáiYù wanted an affordable way to help get their creations out. So they set up Project Ensō, publishing queer visual novels and narrative indie games. With 10 titles on Google Play, BáiYù supports other developers from under-represented groups to share their own authentic stories on Project Ensō, even polishing their games before release. The most popular title on Project Ensō is “Yearning: A Gay Story”, in which gamers play a newly-out gay man navigating his freshman year of college. BáiYù's efforts have had a profound impact on players, with many sharing how these games have positively transformed their lives.


Alex and Jake from Nevada, U.S. built an inclusive dating app and social community for everyone

Alex and Jake – App Creators, Nevada, USA

Alex and Jake grew up in an environment that didn’t accept the LGBTQIA+ community. They started building apps together after a mutual friend introduced them. When they realized that queer people were looking for a platform that offered support and meaningful connections, they created Taimi. Taimi is not just a dating app for LGBTQIA+ people; it's also a social network where they can bond, build community, and feel safe. Alex and Jake are also proud to partner with NGOs that provide mental health support for the community.


Discover more stories of app and game creators in #WeArePlay.




#WeArePlay | Meet the people creating apps and games in Australia

Posted by Robbie McLachlan – Developer Marketing

Last year #WeArePlay went on a virtual tour of India, Europe and Japan to spotlight the stories of app and game founders. Today, we’re continuing our tour across the world with our next stop: Australia.

From an app helping people during natural disasters to a game promoting wellbeing through houseplants, meet the 50 app and game companies building growing businesses on Google Play.

Let’s take a quick road trip across the territories.

Tristen's app gives accurate information to people during natural disasters

Tristen, founder of Disaster Science

Meet Tristen from Canberra, founder of Disaster Science. When Tristen was stranded by a bushfire with friends during a holiday, he realized the need to have accurate information in a crisis situation. Moved to help others, he leveraged his software development skills to create his app, Bushfire.io. It collects data from multiple sources to give people an overview of fires, floods, road closures, and vital weather updates.

He has recently added real-time satellite imagery and has plans to expand further internationally, with coverage of region-specific events like cyclones, earthquakes, evacuations and heat warnings.


Christina and Lauren's game promotes wellbeing through houseplants

Christina and Lauren, co-founders of Kinder World

Friends Christina and Lauren from Melbourne co-founded gaming company Kinder World. As a child, Lauren used video games to soothe the pain of her chronic ear infections. That was how she discovered they could be a healing experience for people—a sentiment she dedicated her career to. She partnered with engineer Christina to make Kinder World: Cozy Plants.

In the game, players enter the comforting, botanical world of houseplants, home decoration, steaming hot coffee, and freshly baked cookies. Since going viral on several social media platforms, the app has seen huge growth.


Kathryn's app helps reduce stress and anxiety in children

Kathryn, founder of Courageous Kids

Kathryn from Melbourne is the founder of Courageous Kids. When Kathryn's son became anxious and fearful whenever she dropped him off at school, her instincts as a doctor for early intervention kicked in. She sought advice from pediatric colleagues and created stories that explained his day, with him as the main character. Friends in similar situations began asking to use the stories with their own children, so she created Courageous Kids.

A library of real-world stories for parents to personalize, Courageous Kids helps children to visualize their day and manage their expectations. Her app has become popular among families of sensitive and autistic children, and Kathryn is now working with preschools to give even more kids the tools to feel confident.


Discover more #WeArePlay stories from Australia, and stories from across the globe.




3 fun experiments to try for your next Android app, using Google AI Studio

Posted by Paris Hsu – Product Manager, Android Studio

We shared an exciting live demo from the Developer Keynote at Google I/O 2024 where Gemini transformed a wireframe sketch of an app's UI into Jetpack Compose code, directly within Android Studio. While we're still refining this feature to make sure you get a great experience inside of Android Studio, it's built on top of foundational Gemini capabilities which you can experiment with today in Google AI Studio.

Specifically, we'll delve into:

    • Turning designs into UI code: Convert a simple image of your app's UI into working code.
    • Smart UI fixes with Gemini: Receive suggestions on how to improve or fix your UI.
    • Integrating Gemini prompts in your app: Simplify complex tasks and streamline user experiences with tailored prompts.

Note: Google AI Studio offers various general-purpose Gemini models, whereas Android Studio uses a custom version of Gemini which has been specifically optimized for developer tasks. While this means that these general-purpose models may not offer the same depth of Android knowledge as Gemini in Android Studio, they provide a fun and engaging playground to experiment and gain insight into the potential of AI in Android development.

Experiment 1: Turning designs into UI code

First, to turn designs into Compose UI code: Open the chat prompt section of Google AI Studio, upload an image of your app's UI screen (see example below) and enter the following prompt:

"Act as an Android app developer. For the image provided, use Jetpack Compose to build the screen so that the Compose Preview is as close to this image as possible. Also make sure to include imports and use Material3."

Then, click "run" to execute your query and see the generated code. You can copy the generated output directly into a new file in Android Studio.

Image uploaded: Designer mockup of an application's detail screen

Moving image showing a custom chat prompt being created from the image provided in Google AI Studio
Google AI Studio custom chat prompt: Image → Compose

Moving image showing running the generated code in Android Studio
Running the generated code (with minor fixes) in Android Studio

With this experiment, Gemini was able to infer details from the image and generate corresponding code elements. For example, the original image of the plant detail screen featured a "Care Instructions" section with an expandable icon — Gemini's generated code included an expandable card specifically for plant care instructions, showcasing its contextual understanding and code generation capabilities.
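
For reference, the code generated for a screen like this typically centers on an expandable section. Here is a hand-written sketch of such a component (illustrative only, not Gemini's verbatim output), using Material3 in Compose:

    import androidx.compose.animation.animateContentSize
    import androidx.compose.foundation.clickable
    import androidx.compose.foundation.layout.Arrangement
    import androidx.compose.foundation.layout.Column
    import androidx.compose.foundation.layout.Row
    import androidx.compose.foundation.layout.fillMaxWidth
    import androidx.compose.foundation.layout.padding
    import androidx.compose.material.icons.Icons
    import androidx.compose.material.icons.filled.KeyboardArrowDown
    import androidx.compose.material.icons.filled.KeyboardArrowUp
    import androidx.compose.material3.Card
    import androidx.compose.material3.Icon
    import androidx.compose.material3.MaterialTheme
    import androidx.compose.material3.Text
    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.getValue
    import androidx.compose.runtime.mutableStateOf
    import androidx.compose.runtime.remember
    import androidx.compose.runtime.setValue
    import androidx.compose.ui.Alignment
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp

    // Illustrative sketch of an expandable "Care Instructions" card -- not Gemini's verbatim output.
    @Composable
    fun CareInstructionsCard(instructions: String) {
        var expanded by remember { mutableStateOf(false) }

        Card(
            modifier = Modifier
                .fillMaxWidth()
                .padding(16.dp)
                .animateContentSize() // animates the expand/collapse size change
                .clickable { expanded = !expanded }
        ) {
            Column(Modifier.padding(16.dp)) {
                Row(
                    modifier = Modifier.fillMaxWidth(),
                    horizontalArrangement = Arrangement.SpaceBetween,
                    verticalAlignment = Alignment.CenterVertically
                ) {
                    Text("Care Instructions", style = MaterialTheme.typography.titleMedium)
                    Icon(
                        imageVector = if (expanded) Icons.Filled.KeyboardArrowUp else Icons.Filled.KeyboardArrowDown,
                        contentDescription = if (expanded) "Collapse" else "Expand"
                    )
                }
                if (expanded) {
                    Text(
                        text = instructions,
                        style = MaterialTheme.typography.bodyMedium,
                        modifier = Modifier.padding(top = 8.dp)
                    )
                }
            }
        }
    }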


Experiment 2: Smart UI fixes with Gemini in AI Studio

Inspired by "Circle to Search", another fun experiment you can try is to "circle" problem areas on a screenshot, along with relevant Compose code context, and ask Gemini to suggest appropriate code fixes.

You can explore this concept in Google AI Studio:

    1. Upload Compose code and screenshot: Upload the Compose code file for a UI screen and a screenshot of its Compose Preview, with a red outline highlighting the issue—in this case, items in the Bottom Navigation Bar that should be evenly spaced.

Example: Preview with problem area highlighted

    2. Prompt Gemini: Open the chat prompt section and enter

    "Given this code file describing a UI screen and the image of its Compose Preview, please fix the part within the red outline so that the items are evenly distributed."
Google AI Studio: Smart UI Fixes with Gemini

    3. Gemini's solution: Gemini returned code that successfully resolved the UI issue (see the sketch after the screenshots below for one common form such a fix takes).

Example: Generated code fixed by Gemini

Example: Preview with fixes applied
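
The exact change depends on your code, but for a hand-rolled bottom bar this kind of fix usually amounts to distributing the items evenly. Below is a minimal sketch (illustrative, not Gemini's actual output) showing two common forms: an evenly spaced Row, and Material3's NavigationBar, whose items share space equally by default.

    import androidx.compose.foundation.layout.Arrangement
    import androidx.compose.foundation.layout.Row
    import androidx.compose.foundation.layout.fillMaxWidth
    import androidx.compose.material.icons.Icons
    import androidx.compose.material.icons.filled.Home
    import androidx.compose.material.icons.filled.Person
    import androidx.compose.material.icons.filled.Search
    import androidx.compose.material3.Icon
    import androidx.compose.material3.NavigationBar
    import androidx.compose.material3.NavigationBarItem
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier

    // Option 1: in a hand-rolled Row, let the arrangement distribute the items evenly.
    @Composable
    fun EvenBottomBar(items: List<@Composable () -> Unit>) {
        Row(
            modifier = Modifier.fillMaxWidth(),
            horizontalArrangement = Arrangement.SpaceEvenly // equal gaps between and around items
        ) {
            items.forEach { item -> item() }
        }
    }

    // Option 2: Material3's NavigationBar weights its items equally by default.
    @Composable
    fun EvenNavigationBar(selected: Int, onSelect: (Int) -> Unit) {
        NavigationBar {
            NavigationBarItem(
                selected = selected == 0,
                onClick = { onSelect(0) },
                icon = { Icon(Icons.Filled.Home, contentDescription = "Home") }
            )
            NavigationBarItem(
                selected = selected == 1,
                onClick = { onSelect(1) },
                icon = { Icon(Icons.Filled.Search, contentDescription = "Search") }
            )
            NavigationBarItem(
                selected = selected == 2,
                onClick = { onSelect(2) },
                icon = { Icon(Icons.Filled.Person, contentDescription = "Profile") }
            )
        }
    }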

Experiment 3: Integrating Gemini prompts in your app

Gemini can streamline experimentation and development of custom app features. Imagine you want to build a feature that gives users recipe ideas based on an image of the ingredients they have on hand. In the past, this would have involved complex tasks like hosting an image recognition library, training your own ingredient-to-recipe model, and managing the infrastructure to support it all.

Now, with Gemini, you can achieve this with a simple, tailored prompt. Let's walk through how to add this "Cook Helper" feature into your Android app as an example:

    1. Explore the Gemini prompt gallery: Discover example prompts or craft your own. We'll use the "Cook Helper" prompt.

Gemini prompt gallery in Google AI for Developers
Google AI for Developers: Prompt Gallery

    2. Open and experiment in Google AI Studio: Test the prompt with different images, settings, and models to ensure the model responds as expected and the prompt aligns with your goals.

Moving image showing the Cook Helper prompt in Google AI for Developers
Google AI Studio: Cook Helper prompt

    3. Generate the integration code: Once you're satisfied with the prompt's performance, click "Get code" and select "Android (Kotlin)". Copy the generated code snippet.

Screengrab of using 'Get code' to obtain a Kotlin snippet in Google AI Studio
Google AI Studio: get code - Android (Kotlin)

    4. Integrate the Gemini API into Android Studio: Open your Android Studio project. You can either use the new Gemini API app template provided within Android Studio or follow this tutorial. Paste the copied generated prompt code into your project.

That's it - your app now has a functioning Cook Helper feature powered by Gemini. We encourage you to experiment with different example prompts or even create your own custom prompts to enhance your Android app with powerful Gemini features.
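
For reference, the snippet that "Get code" produces is based on the Google AI client SDK for Android, and the call in your app ends up looking roughly like this sketch (the model name and prompt are illustrative, and the API key is assumed to come from your own build configuration):

    import android.graphics.Bitmap
    import com.google.ai.client.generativeai.GenerativeModel
    import com.google.ai.client.generativeai.type.content

    // Sketch of a "Cook Helper" style call using the Google AI client SDK for Android.
    // The model name and prompt are illustrative; pass your own API key from build config.
    suspend fun suggestRecipes(ingredientsPhoto: Bitmap, apiKey: String): String? {
        val model = GenerativeModel(
            modelName = "gemini-1.5-flash",
            apiKey = apiKey
        )
        val response = model.generateContent(
            content {
                image(ingredientsPhoto)
                text("Suggest three recipes I can make with the ingredients in this photo.")
            }
        )
        return response.text
    }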

Our approach on bringing AI to Android Studio

While these experiments are promising, it's important to remember that large language model (LLM) technology is still evolving, and we're learning along the way. LLMs can be non-deterministic, meaning they can sometimes produce unexpected results. That's why we're taking a cautious and thoughtful approach to integrating AI features into Android Studio.

Our philosophy towards AI in Android Studio is to augment the developer and ensure they remain "in the loop." In particular, when the AI is making suggestions or writing code, we want developers to be able to carefully audit the code before checking it into production. That's why, for example, the new Code Suggestions feature in Canary automatically brings up a diff view for developers to preview how Gemini is proposing to modify your code, rather than blindly applying the changes directly.

We want to make sure these features, like Gemini in Android Studio itself, are thoroughly tested, reliable, and truly useful to developers before we bring them into the IDE.

What's next?

We invite you to try these experiments and share your favorite prompts and examples with us using the #AndroidGeminiEra tag on X and LinkedIn as we continue to explore this exciting frontier together. Also, make sure to follow Android Developer on LinkedIn, Medium, YouTube, or X for more updates! AI has the potential to revolutionize the way we build Android apps, and we can't wait to see what we can create together.

The Third Beta of Android 15

Posted by Matthew McCullough – VP of Product Management, Android Developer


Android 15 logo

Today's Android 15 Beta 3 release takes Android 15 to Platform Stability, which means that the developer APIs and all app-facing behaviors are now final for you to review and integrate into your apps, and apps targeting Android 15 can be made available in Google Play. Thank you for all of your continued feedback in getting us to this milestone.

Android 15 continues our work to build a platform that helps improve your productivity while giving you new capabilities to produce superior media and AI experiences, take advantage of device form factors, minimize battery impact, maximize smooth app performance, and protect user privacy and security, all on the most diverse lineup of devices.

Android delivers enhancements and new features year-round, and your feedback on the Android beta program plays a key role in helping Android continuously improve. The Android 15 developer site has lots more information about the beta, including how to get it on devices and the release timeline. We’re looking forward to hearing what you think, and thank you in advance for your continued help in making Android a platform that works for everyone.

New in Android 15 Beta 3

Android 15 Production Timeline

Given where we are in the release cycle, there are just a few new things in the Android 15 Beta 3 release for you to consider when developing your apps.

Improved user experience for passkeys and Credential Manager

Users will be able to sign in to apps that target Android 15 using passkeys in a single step with facial recognition, fingerprint, or screen lock. If they accidentally dismiss the prompt to use a passkey to sign in, they will be able to see the passkey or other Credential Manager suggestions in autofill conditional user interfaces, such as keyboard suggestions or dropdowns.

Single-step UI experience

Single step UI experience demonstrating before on the left which required two taps and after on the right which only requires one

Fallback UI experience

Fallback UI experience showing password, passkey, and sign in with Google options across Keyboard chips and on screen dropdown options

Credential Provider integration for the single-step UI

Registered credential providers will be able to use upcoming APIs in the Jetpack androidx.credentials library to hand off the user authentication mechanism to the system UI, enabling the single-step authentication experience on devices running Android 15.

App integration for autofill fallback UI

When you present the user with a selector at sign-in using Credential Manager APIs, you can associate a Credential Manager request with a given view, such as a username or a password field. When the user focuses on one of these views, Credential Manager gets an associated request, and provider-aggregated resulting credentials are displayed in autofill fallback UIs, such as inline or dropdown suggestions.
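
For context, here is a minimal sketch of the underlying Credential Manager request these UIs sit on top of; the view-association and single-step handoff APIs described above are upcoming additions to androidx.credentials, and the passkeyRequestJson value is assumed to come from your server:

    import android.content.Context
    import androidx.credentials.CredentialManager
    import androidx.credentials.GetCredentialRequest
    import androidx.credentials.GetPasswordOption
    import androidx.credentials.GetPublicKeyCredentialOption
    import androidx.credentials.exceptions.GetCredentialException

    // Minimal sketch: ask Credential Manager for a passkey or a saved password.
    // `passkeyRequestJson` is a WebAuthn request produced by your server (assumption).
    suspend fun signIn(context: Context, passkeyRequestJson: String) {
        val credentialManager = CredentialManager.create(context)
        val request = GetCredentialRequest(
            credentialOptions = listOf(
                GetPublicKeyCredentialOption(requestJson = passkeyRequestJson),
                GetPasswordOption()
            )
        )
        try {
            val result = credentialManager.getCredential(context, request)
            // Pass result.credential to your sign-in backend.
        } catch (e: GetCredentialException) {
            // The user dismissed the UI or no credential was available; fall back to manual sign-in.
        }
    }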

WebSQL deprecated in Android WebView

The setDatabaseEnabled and getDatabaseEnabled WebSettings are now deprecated. These settings are used for WebSQL support inside WebView. WebSQL has been removed from Chrome and is now deprecated in Android WebView. These methods will become a no-op on all Android versions in the next 12 months.

The World Wide Web Consortium (W3C) encourages apps needing web databases to adopt Web Storage API technologies like localStorage and sessionStorage, or IndexedDB. SQLite Wasm in the browser backed by the Origin Private File System outlines a replacement set of technologies based on the SQLite database, compiled to Web Assembly (Wasm), and backed by the origin private file system to enable more direct migration of WebSQL code.
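
As a minimal sketch of what this means for a WebView setup, an app can keep DOM storage enabled for the Web Storage API and stop relying on the deprecated WebSQL setting; any migration of existing WebSQL data has to happen in the page's JavaScript:

    import android.webkit.WebView

    // Sketch: keep Web Storage available and stop depending on the deprecated WebSQL switch.
    fun configureWebView(webView: WebView) {
        webView.settings.apply {
            javaScriptEnabled = true   // needed for pages that use localStorage or IndexedDB
            domStorageEnabled = true   // Web Storage API (localStorage, sessionStorage)
            // databaseEnabled = true  // deprecated WebSQL setting -- will become a no-op
        }
        // Migrating existing WebSQL data to IndexedDB or localStorage happens in the page's
        // JavaScript, which you can trigger with webView.evaluateJavascript(...) if needed.
    }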

Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates are needed to fully support Android 15.

Testing your app involves installing your production app using Google Play or other means onto a device or emulator running Android 15 Beta 3. Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply even if you don't yet target Android 15:

    • Support for 16KB page sizes - Beginning with Android 15, Android supports devices that are configured to use a page size of 16 KB. If your app or library uses the NDK, either directly or indirectly through an SDK, then you will likely need to rebuild your app for it to work on these devices.
    • Private space support - Private space is a new feature in Android 15 that lets users create a separate space on their device where they can keep sensitive apps away from prying eyes, under an additional layer of authentication.

Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.

Once you’ve published the Android 15-compatible version of your app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 15 and use the compatibility framework to help quickly detect issues.
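
If you make the change by hand rather than through the Upgrade Assistant, it is a small edit in your module's build file. Here is a sketch in the Gradle Kotlin DSL, assuming Android 15's final API level of 35 (preview identifiers are shown as an alternative if your tooling still requires them):

    // Module-level build.gradle.kts -- illustrative values only.
    android {
        compileSdk = 35                           // Android 15 is API level 35
        // compileSdkPreview = "VanillaIceCream"  // alternative while on preview tooling

        defaultConfig {
            targetSdk = 35                        // opt in to Android 15 behavior changes
            // targetSdkPreview = "VanillaIceCream"
        }
    }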

Get started with Android 15

Today's beta release has everything you need to try out Android 15 features, test your apps, and give us feedback. Now that we’re in the beta phase, you can check here to get information about enrolling your device; enrolling a supported Pixel device will deliver this and future Android Beta updates over the air. If you don’t have a supported device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you're already in the Android 14 QPR beta program on a supported device, you'll automatically get updated to Android 15 Beta 3.

For the best development experience with Android 15, we recommend that you use the latest version of Android Studio Koala. Once you’re set up, here are some of the things you should do:

    • Try the new features and APIs - your feedback is critical during the early part of the developer preview and beta program. Report issues in our tracker on the feedback page.
    • Test your current app for compatibility - learn whether your app is affected by changes in Android 15; install your app onto a device or emulator running Android 15 and extensively test it.
    • Update your app with the Android SDK Upgrade Assistant - The latest Android Studio Koala Feature Drop release now covers Android 15 API changes and walks you through the steps to upgrade your targetSdkVersion with the Android SDK Upgrade Assistant.
Android SDK Upgrade Assistant in Android Studio Koala Feature Drop

We’ll update the beta system images and SDK regularly throughout the remainder of the Android 15 release cycle. Read more here.

For complete information, visit the Android 15 developer site.


Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.

All trademarks, logos and brand names are the property of their respective owners.

3 must-know updates from Google Play at I/O ’24

Posted by Nick Sharma – Product Manager, Google Play

At Google Play, we’re passionate about helping people discover experiences they’ll love while empowering developers like you to bring your ideas to life and build successful businesses. At this year’s Google I/O, we shared our latest developments that will help you acquire and engage users, optimize your revenue, and reinforce trust with secure, high-quality experiences.

If you missed this year’s event, check out our recap video below, or read on for our top 3 announcements.

#1: Enhanced store listings: More ways to reach the right audience

Your store listing is often your first chance to make a good impression and acquire new users. You can already tailor your store listing in a number of ways to optimize your conversions for different audiences.

    • Now, you can also create listings based on what users search for. Tailoring your store listings by search keywords will not only make listing content more relevant, it can also help you target users actively seeking the benefits your app provides.
    • Not sure what keywords to choose? Play Console will now give you keyword suggestions for potentially impactful store listings.
Increase your store listing's relevance and conversions by displaying content tailored to users by search keywords

#2: Expanded payment options: More ways for customers to pay for your content

Our extensive payment method library, which includes traditional payment methods like credit cards and over 300 local forms of payment in more than 65 markets, continues to grow.

    • We enabled Pix in Brazil, allowing you to offer millions of customers their preferred payment method.
    • We also enhanced support for UPI in India to streamline subscription purchases.
    • With our new installment subscriptions feature, you can offer customers the option to pay over time for long-term subscriptions, helping increase your signups and lifetime value.
Installment subscriptions are now available for users in Brazil, France, Italy, and Spain

#3: SDK Console improvements: Build high-quality and safer app experiences

We're making it easier to build high-quality and safer app experiences with enhancements made possible by SDK Console.

    • You can now get better guidance on how to fix crashes or errors in Android Studio and receive notifications from SDK owners about non-compliant versions in Play Console.
    • Plus, you can share crash or ANR data with SDK owners directly through Play Console.
Developers can now share crash or ANR data with SDK owners in Play Console

That’s it for our top 3 announcements, but there’s so much more to discover from this year’s event. Check out this blog post for more Google Play announcements at this year’s Google I/O.

Max implemented UI changes 30% faster using Jetpack Compose

Posted by Tomáš Mlynarič, Developer Relations Engineer

Max®, which launched in the US on May 23, 2023, is an enhanced streaming platform from Warner Bros. Discovery, delivering unparalleled quality content for everyone in the household. Max developers want to provide the best UX possible, and they’re always searching for new ways to do that. That’s why Max developers built the app using Jetpack Compose, Android’s modern declarative toolkit for creating native UI. Building Max’s UI with Compose set the app up for long-term success, enabling developers to build new experiences in a faster and easier way.

Compose streamlines development

Max is the latest app from Warner Bros. Discovery and builds on the company’s prior learnings from HBO Max and discovery+. When Max development began in late 2022, developers had already used Compose to build the content discovery feature on discovery+—one of its core UI features.

“It was natural to continue our adoption of Compose to the Max platform,” said Boris D’Amato, Sr. Staff Software Engineer at Max.

Given the team’s previous experience using Compose on discovery+, they knew it would streamline development and improve the app’s maintainability. In the end, building Max with Compose reduced the app’s boilerplate code, increased the re-usability of its UI elements, and boosted developer productivity overall.

“Compose significantly reduced the time required to implement UI changes, solving the pain point of maintaining a large, complex UI codebase and making it easier to iterate on the app's design and user experience,” said Boris.

Today, Max’s UI is built almost entirely with Compose, and developers estimate that adopting Compose allowed them to implement UI changes 30% faster than with Views. Thanks to the toolkit’s modular nature, developers could build highly reusable components and adapt or combine them to form new UI elements, creating a more cohesive app design.

“Compose significantly reduced the time required to implement UI changes, solving the pain point of maintaining a large, complex UI codebase and making it easier to iterate on the app's design and user experience.” — Boris D’Amato, Sr. Staff Software Engineer at Max

More improvements with Compose

Today, Compose is so integral to Max's design that the app's entire UI architecture is designed specifically to support Compose. For example, developers built a system to dynamically render server-driven, editorially curated content and user-personalized recommendations without having to ship a new version of the app. To support this system, developers relied on best practices for architecting Compose apps, leveraging Compose's smart recomposition and skippability for the smoothest experience possible.

Much like the discovery+ platform, Compose is also used for Max’s content discovery feature. This feature helps Max serve tailored content to each user based on how they use the app. Thanks to Compose, it was easy for developers to ensure this feature worked as intended because it allowed them to test each part in manageable segments.

“One of the features most impacted by using Compose was our content discovery system. Compose enabled us to create a highly dynamic and interactive interface that adapts in real-time to user context and preferences,” said Boris.

Adapting to users’ unique needs is another reason Compose has impressed Max developers. Compose makes it easy to support the many different screens and form factors available on the market today. With the Window size classes API, Max can scale its UI in real time to accommodate screen size and shape variations for tablets and foldables.

Examples of UX on large and small screens
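
As a minimal sketch of that pattern, assuming the material3-window-size-class artifact (the pane composables are hypothetical stand-ins for an app's own screens):

    import android.app.Activity
    import androidx.compose.material3.Text
    import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
    import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
    import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
    import androidx.compose.runtime.Composable

    // Hypothetical stand-ins for an app's own single- and two-pane screens.
    @Composable fun SinglePaneLayout() { Text("Phone layout") }
    @Composable fun TwoPaneLayout() { Text("Tablet / foldable layout") }

    // Sketch: choose a layout from the window size class rather than the device type.
    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    @Composable
    fun AdaptiveHome(activity: Activity) {
        val windowSizeClass = calculateWindowSizeClass(activity)
        when (windowSizeClass.widthSizeClass) {
            WindowWidthSizeClass.Expanded -> TwoPaneLayout()
            else -> SinglePaneLayout()
        }
    }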

The future with Compose

Since adopting Compose, the Max team has noticed increased interest from prospective job candidates excited about working with the latest Android technologies.

“Whenever we mention that Max is built using Compose, the excitement in the candidates is palpable. It indicates that we’re investing in keeping our tech stack updated and our focus on the developer experience,” said Boris.

Looking ahead, the Max team plans to lean further into its Compose codebase and make even more use of the toolkit’s features, like animation APIs, predictive gestures, and widgets.

“I absolutely recommend Jetpack Compose. Compose's declarative approach to UI development allows for a more intuitive and efficient design process, making implementing complex UIs and animations easy. Once you try Compose, there’s no going back,” said Boris.

Get started

Optimize your UI development with Jetpack Compose.

Developers for adidas CONFIRMED build features 30% faster using Jetpack Compose

Posted by Nick Butcher – Product Manager for Jetpack Compose, and Florina Muntenescu – Developer Relations Engineer

adidas CONFIRMED is an app for the brand’s most loyal fans who want its latest, curated collections that aren’t found anywhere else. The digital storefront gives streetwear, fashion, and style enthusiasts access to adidas' most exclusive drops and crossovers so they can shop them as soon as they go live. The adidas CONFIRMED team wants to provide users a premium experience, and it’s always exploring new ways to elevate the app’s UX. Today, its developers are more equipped than ever to improve the in-app experience using Jetpack Compose, Android’s modern declarative toolkit for building UI.

Improving the UX with Jetpack Compose

adidas CONFIRMED designers conduct quarterly consumer surveys for feedback from users regarding new app flows and UI enhancements. Their surveys revealed that 80% of the app’s users prefer animated visuals because animations encourage them to explore and interact with the app more. adidas CONFIRMED developers wanted to implement new design elements and animations across the app’s interface to strengthen engagement, but the app’s previous View-based system limited their ability to create engaging UX in a scalable way.

“We decided to build dynamic elements and animations across many of our screens and user journeys,” said Rodrigo Represa, an Android engineer at adidas. “We had an ambitious list of UI updates we wanted to make and started looking for solutions to help us achieve them.”

Switching to Compose allowed adidas CONFIRMED developers to create features faster than ever. The improvement in engineering efficiency has been noticeable, with the team estimating that Compose enables them to create new features roughly 30% faster than with Views. Today, more than 80% of the app’s UI has been migrated to Compose.

“I can build the same feature with Compose about 30% faster than with Views.” — Rodrigo Represa, Android engineer at adidas

Innovating the in-app experience

As part of the app’s new interface update, adidas CONFIRMED developers created an exciting, animated experience called Shoes Tournament. This competition positions different brand-collaborator sneakers head to head in a digital tournament where users vote for their favorite shoe. It took two developers only three months to build this feature from the ground up using Compose. And users loved it — it increased the app’s weekly active users by 8%.

UX screen of shoe tournament. Top shoe is clicked. Text reads: It took adidas' Android devs only 3 months to build this feature from the ground up using Compose.

Before transitioning to Compose, it was hard for the team to customize the adidas CONFIRMED app to incorporate branding from its collaborators. With Compose, it’s easy. For instance, the app’s developers can now create a dynamic design system using CompositionLocals. This functionality helps developers update the app's appearance during collab launches, providing a more appealing user experience while maintaining a consistent and clean design.
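
A minimal sketch of what such a CompositionLocal-based override can look like (the names and values are illustrative, not adidas CONFIRMED's actual design system):

    import androidx.compose.material3.MaterialTheme
    import androidx.compose.material3.Text
    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.CompositionLocalProvider
    import androidx.compose.runtime.staticCompositionLocalOf
    import androidx.compose.ui.graphics.Color

    // Hypothetical collab branding carried through the tree with a CompositionLocal.
    data class CollabBranding(val accent: Color, val collabName: String)

    val LocalCollabBranding = staticCompositionLocalOf {
        CollabBranding(accent = Color.Black, collabName = "Default")
    }

    @Composable
    fun CollabTheme(branding: CollabBranding, content: @Composable () -> Unit) {
        CompositionLocalProvider(LocalCollabBranding provides branding) {
            content()
        }
    }

    @Composable
    fun DropHeader() {
        val branding = LocalCollabBranding.current
        Text(
            text = "Latest drop: ${branding.collabName}",
            color = branding.accent,
            style = MaterialTheme.typography.titleLarge
        )
    }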

One of the most exciting animations adidas CONFIRMED developers added utilized device sensors. Users can view and interact with the products they’re looking at on product display pages by simply moving their devices, just as if they were holding the product in real life. Developers used Compose to create realistic lighting effects for the animation to make the viewing experience more engaging.

An easier way to build UI

Using composables allowed adidas CONFIRMED developers to reuse existing components. As both the flagship adidas app and the adidas CONFIRMED app are part of the same monorepo, engineers could reuse composables across both apps, like forms and lists, enabling them to implement new features quickly and easily.

“The accelerated development with Compose provided our team of seven with more time, enabling us to strike a healthy balance between delivering new functionalities and ensuring the long-term health and sustainability of our app,” said Rodrigo.

Compose also helped to improve app stability and performance for the team. They noticed a significant reduction in app-related crashes, and have seen virtually no UI-related crashes, since migrating the app to Compose. The team is proud to provide a 99.9% crash-free user experience.

“Compose’s efficiency not only accelerated development, but also helped us achieve our business goals.” — Rodrigo Represa, Android engineer at adidas

A better app built with the future in mind

Compose opened doors to implementing new features faster than ever. With Compose’s clean and concise usage of Kotlin, it was easy for developers to create the ambitious and engaging interface adidas CONFIRMED users wanted. And the team doesn’t plan to stop there.

The adidas CONFIRMED team wants to lean further into its new codebase and fully adopt Compose moving forward. They also want to bring the app to new screens using more of the Compose suite and are currently developing an app widget using Jetpack Glance. This new experience will provide users with a streamlined feed of new product information for an even more efficient user experience.

“I recommend Compose because it simplifies development and is a more intuitive and powerful approach to building UI,” said Rodrigo.

Get started

Optimize your UI development with Jetpack Compose.

Top 3 Updates with Compose across Form Factors at Google I/O ’24

Posted by Chris Arriola – Developer Relations Engineer

Google I/O 2024 was filled with lots of updates and announcements around helping you be more productive as a developer. Here are the top 3 announcements around Jetpack Compose and Form Factors from Google I/O 2024:

#1 New updates in Jetpack Compose

The June 2024 release of Jetpack Compose is packed with new features and improvements such as shared element transitions, lazy list item animations, and performance improvements across the board.

With shared element transitions, you can create delightful continuity between screens in your app. This feature works together with Navigation Compose and predictive back so that transitions can happen as users navigate your app. Another highly requested feature, lazy list item animations, is now supported as well, giving lazy lists the ability to animate inserts, deletions, and reordering of items.
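
As a minimal sketch, assuming a Compose 1.7 setup where LazyItemScope exposes Modifier.animateItem(), item animations only need stable keys and a modifier (the Message model is illustrative):

    import androidx.compose.foundation.lazy.LazyColumn
    import androidx.compose.foundation.lazy.items
    import androidx.compose.material3.Text
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier

    // Sketch of lazy list item animations; `Message` is an illustrative model.
    data class Message(val id: Long, val text: String)

    @Composable
    fun MessageList(messages: List<Message>) {
        LazyColumn {
            items(messages, key = { it.id }) { message ->
                // With stable keys, inserts, removals, and reorders are animated.
                Text(
                    text = message.text,
                    modifier = Modifier.animateItem()
                )
            }
        }
    }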

Jetpack Compose also continues to improve runtime performance with every release. Our benchmarks show a 17% faster time to first pixel in our Jetsnack Compose sample. Additionally, strong skipping mode graduated from experimental to production-ready status, further improving the performance of Compose apps. Simply update your app to take advantage of these benefits.

Read What’s new in Jetpack Compose at I/O ‘24 for more information.


#2 Scaling across screens with new Compose APIs and Tools

During Google I/O, we announced new tools and APIs to make it easier to build across screens with Compose. The new Material 3 adaptive library introduces new APIs that allow you to implement common adaptive scenarios such as list-detail and supporting pane. These APIs allow your app to display one or two panes depending on the available size for your app.

Watch Building UI with the Material 3 adaptive library and Building adaptive Android apps to learn more. If you prefer to read, you can check out About adaptive layouts in our documentation.

We also announced that Compose for TV 1.0.0 is now available in beta. The latest updates to Compose for TV include better performance, input support, and a whole range of improved components that look great out of the box. New in this release, we’ve added lists, navigation, chips, and settings screens. We’ve also added a new TV Material Catalog app and updated the developer tools in Android Studio to include a new project wizard to get a running start with Compose for TV.

Finally, Compose for Wear OS has added features such as SwipeToReveal, an expandableItem, and a range of supporting WearPreview annotations. During Google I/O 2024, Compose for Wear OS visual improvements and fixes graduated from beta to stable. Learn more about all the updates to Wear OS by checking out the technical session.

Check out case studies from SoundCloud and adidas to see how teams are leveraging Compose to build their apps, and learn more about all the updates for Compose across screens here!


#3 Glance 1.1

Jetpack Glance is Android’s modern recommended framework for building widgets. The latest version, Glance 1.1, is now stable. Glance is built on top of Jetpack Compose allowing you to use the same declarative syntax that you’re used to when building widgets.

This release brings a new unit test library, Error UIs, and new components. Additionally, we’ve released new Canonical Widget Layouts on GitHub to allow you to get started faster with a set of layouts that align with best practices, and we’ve published new design guidance on the UI design hub — check it out!
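
As a minimal sketch of how a Glance widget is declared (the names and content are illustrative):

    import android.content.Context
    import androidx.glance.GlanceId
    import androidx.glance.appwidget.GlanceAppWidget
    import androidx.glance.appwidget.GlanceAppWidgetReceiver
    import androidx.glance.appwidget.provideContent
    import androidx.glance.text.Text

    // Minimal Glance widget sketch -- the same declarative style as Compose.
    class HelloWidget : GlanceAppWidget() {
        override suspend fun provideGlance(context: Context, id: GlanceId) {
            provideContent {
                Text("Hello from Glance 1.1")
            }
        }
    }

    // The receiver you register in the manifest to host the widget.
    class HelloWidgetReceiver : GlanceAppWidgetReceiver() {
        override val glanceAppWidget: GlanceAppWidget = HelloWidget()
    }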

To learn more about using Glance, check out Build beautiful Android widgets with Jetpack Glance. Or if you want something more hands-on, check out the codelab Create a widget with Glance.


You can learn more about the latest updates to Compose and Form Factors by checking out the Compose Across Screens and the What’s new in Jetpack Compose at I/O ‘24 blog posts or watching the spotlight playlist!

Enabling safe AI experiences on Google Play

Posted by Prabhat Sharma – Director, Trust and Safety, Play, Android, and Chrome

The rapid advancements in generative AI unlock opportunities for developers to create new immersive and engaging app experiences for users everywhere. In this time of fast-paced change, we are excited to continue enabling developers to create innovative, high-quality apps while maintaining the safe and trusted experience people expect from Google Play. Our goal is to make AI helpful for everyone, enriching the app ecosystem and enhancing user experiences.

Ensuring safety for apps with generative AI features

Over the past year, we’ve expanded our review capabilities to address new complexities that come with apps with generative AI features. We’re using new technology like large language models (LLMs) to quickly analyze app submissions, including vast amounts of text to identify potential issues like sexual content or hate speech, and flag them for people on our global team to take a closer look. This combination of human expertise and increased AI efficiency helps us improve the app review experience for developers and create a safer app environment for everyone.

Additionally, we have strengthened Play’s existing policies to address emerging concerns and feedback from users and developers, and keep pace with evolving technologies like generative AI. For example, last October, we shared that all generative AI apps must give users a way to report or flag offensive content without having to leave the app.

Building apps with generative AI features in a responsible way

Google Play's policies, which have long supported the foundation of our user safety efforts, are deeply rooted in a continuous collaboration between Play and developers. They provide a framework for responsible app development, and help ensure that Play remains a trusted platform around the world. As generative AI is still in its early stages, we have received feedback from developers seeking clarity on the requirements for apps on Play that feature AI-created content. Today we are responding to that feedback and providing guidance to help developers enhance the quality and safety of AI-powered apps, avoid potential issues or delays in app submissions, foster trust among users, and contribute to a thriving and responsible app ecosystem on Google Play:

    • Review Google Play policies: Google Play’s policies help us provide a safe and high-quality experience, so we don’t allow apps whose generative AI features can produce content that is inappropriate or harmful to users. Make sure you review our AI-Generated Content Policy and ensure that all of your apps meet these requirements so they aren't rejected or removed from Google Play.

      In particular, apps that generate content using AI must:

        • Give users a way to report or flag offensive content. Monitoring and prioritizing user feedback is especially important for apps with generative AI features, where user interactions directly shape the content and experience.
      Moving image of AI Art Generator app UI experience on an Android mobile device
      Note: Images are examples and subject to change

    • Promote your app responsibly: Advertising your app is an important tool in growing your business, and it's critical to do it in a way that's safe and respectful of users. Ultimately you’re responsible for how your app is marketed and advertised, so review your marketing materials to ensure that your ads accurately represent your app's capabilities, and that all ads and promotional content associated with your app, across all platforms, meet our App Promotion requirements. For example, advertising your app for an inappropriate use case may result in it being removed from Google Play.

    • Rigorously test AI tools and models: You are accountable for the experience in your apps, so it’s critical for you to understand the underlying AI tools and models used to create media, and to ensure that these tools are reliable and that the outputs are aligned with Google Play's policies and respect user safety and privacy. Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature into creating harmful or offensive content. For example, you can use our closed testing feature to share early versions of your app and ask for specific feedback on whether your users get the generated results they expect.

      This thorough understanding and testing especially applies to generative AI, so we recommend that you start documenting this testing because we may ask to review it in the future to help us better understand how you keep your users protected.

As the AI landscape evolves, we will continue to update our policies and developer tools to address emerging needs and complexities. This includes introducing new app onboarding capabilities in the future to make the process of submitting a generative AI app to Play even more transparent and streamlined. We’ll also share best practices and resources, like our People + AI Guidebook, to support developers in building innovative and responsible apps that enrich the lives of users worldwide.

As always, we're your partners in keeping users safe and are open to your feedback so we can build policies that help you lean into AI to scale your business on Play in ways that delight and protect our shared users.

Top 3 Updates for Building with AI on Android at Google I/O ‘24

Posted by Terence Zhang – Developer Relations Engineer

At Google I/O, we unveiled a vision of Android reimagined with AI at its core. As Android developers, you're at the forefront of this exciting shift. By embracing generative AI (Gen AI), you'll craft a new breed of Android apps that offer your users unparalleled experiences and delightful features.

Gemini models are powering new generative AI apps both over the cloud and directly on-device. You can now build with Gen AI using our most capable models over the Cloud with the Google AI client SDK or Vertex AI for Firebase in your Android apps. For on-device, Gemini Nano is our recommended model. We have also integrated Gen AI into developer tools - Gemini in Android Studio supercharges your developer productivity.

Let’s walk through the major announcements for AI on Android from this year's I/O sessions in more detail!

#1: Build AI apps leveraging cloud-based Gemini models

To kickstart your Gen AI journey, design the prompts for your use case with Google AI Studio. Once you are satisfied with your prompts, leverage the Gemini API directly into your app to access Google’s latest models such as Gemini 1.5 Pro and 1.5 Flash, both with one million token context windows (with two million available via waitlist for Gemini 1.5 Pro).

If you want to learn more about and experiment with the Gemini API, the Google AI SDK for Android is a great starting point. For integrating Gemini into your production app, consider using Vertex AI for Firebase (currently in Preview, with a full release planned for Fall 2024). This platform offers a streamlined way to build and deploy generative AI features.
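
As a minimal sketch of what a Gemini API call looks like with the Google AI SDK for Android (the model name is illustrative and the API key is assumed to come from your own build configuration):

    import com.google.ai.client.generativeai.GenerativeModel

    // Text-only sketch with the Google AI client SDK for Android.
    suspend fun draftSummary(notes: String, apiKey: String): String? {
        val model = GenerativeModel(
            modelName = "gemini-1.5-flash",
            apiKey = apiKey
        )
        return model.generateContent("Summarize these notes in two sentences:\n$notes").text
    }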

We are also launching the first Gemini API Developer competition (terms and conditions apply). Now is the best time to build an app integrating the Gemini API and win incredible prizes! A custom DeLorean, anyone?


#2: Use Gemini Nano for on-device Gen AI

While cloud-based models are highly capable, on-device inference enables offline use, delivers low-latency responses, and ensures that data won’t leave the device.

At I/O, we announced that Gemini Nano will be getting multimodal capabilities, enabling devices to understand context beyond text – like sights, sounds, and spoken language. This will help power experiences like Talkback, helping people who are blind or have low vision interact with their devices via touch and spoken feedback. Gemini Nano with Multimodality will be available later this year, starting with Google Pixel devices.

We also shared more about AICore, a system service managing on-device foundation models, enabling Gemini Nano to run on-device inference. AICore provides developers with a streamlined API for running Gen AI workloads with almost no impact on the binary size while centralizing runtime, delivery, and critical safety components for Gemini Nano. This frees developers from having to maintain their own models, and allows many applications to share access to Gemini Nano on the same device.

Gemini Nano is already transforming key Google apps, including Messages and Recorder to enable Smart Compose and recording summarization capabilities respectively. Outside of Google apps, we're actively collaborating with developers who have compelling on-device Gen AI use cases and signed up for our Early Access Program (EAP), including Patreon, Grammarly, and Adobe.

Moving image of Gemini Nano operating in Adobe

Adobe is one of these trailblazers, and they are exploring Gemini Nano to enable on-device processing for part of its AI assistant in Acrobat, providing one-click summaries and allowing users to converse with documents. By strategically combining on-device and cloud-based Gen AI models, Adobe optimizes for performance, cost, and accessibility. Simpler tasks like summarization and suggesting initial questions are handled on-device, enabling offline access and cost savings. More complex tasks such as answering user queries are processed in the cloud, ensuring an efficient and seamless user experience.

This is just the beginning - later this year, we'll invest heavily in enabling, and aim to launch with, even more developers.

To learn more about building with Gen AI, check out the I/O talks Android on-device GenAI under the hood and Add Generative AI to your Android app with the Gemini API, along with our new documentation.


#3: Use Gemini in Android Studio to help you be more productive

Besides powering features directly in your app, we’ve also integrated Gemini into developer tools. Gemini in Android Studio is your Android coding companion, bringing the power of Gemini to your developer workflow. Thanks to your feedback since its preview as Studio Bot at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and now include this experience in stable builds of Android Studio.

At Google I/O, we previewed a number of features available to try in the Android Studio Koala preview release, like natural-language code suggestions and AI-assisted analysis for App Quality Insights. We also shared an early preview of multimodal input using Gemini 1.5 Pro, allowing you to upload images as part of your AI queries — enabling Gemini to help you build fully functional Compose UIs from a wireframe sketch.


You can read more about the updates here, and make sure to check out What’s new in Android development tools.