
Enabling safe AI experiences on Google Play

Posted by Prabhat Sharma – Director, Trust and Safety, Play, Android, and Chrome

The rapid advancements in generative AI unlock opportunities for developers to create new immersive and engaging app experiences for users everywhere. In this time of fast-paced change, we are excited to continue enabling developers to create innovative, high-quality apps while maintaining the safe and trusted experience people expect from Google Play. Our goal is to make AI helpful for everyone, enriching the app ecosystem and enhancing user experiences.

Ensuring safety for apps with generative AI features

Over the past year, we’ve expanded our review capabilities to address the new complexities that come with apps with generative AI features. We’re using new technology like large language models (LLMs) to quickly analyze app submissions, including vast amounts of text, to identify potential issues like sexual content or hate speech and flag them for people on our global team to take a closer look. This combination of human expertise and increased AI efficiency helps us improve the app review experience for developers and create a safer app environment for everyone.

Additionally, we have strengthened Play’s existing policies to address emerging concerns and feedback from users and developers, and keep pace with evolving technologies like generative AI. For example, last October, we shared that all generative AI apps must give users a way to report or flag offensive content without having to leave the app.

Building apps with generative AI features in a responsible way

Google Play's policies, which have long supported the foundation of our user safety efforts, are deeply rooted in a continuous collaboration between Play and developers. They provide a framework for responsible app development, and help ensure that Play remains a trusted platform around the world. As generative AI is still in its early stages, we have received feedback from developers seeking clarity on the requirements for apps on Play that feature AI-created content. Today we are responding to that feedback and providing guidance to help developers enhance the quality and safety of AI-powered apps, avoid potential issues or delays in app submissions, foster trust among users, and contribute to a thriving and responsible app ecosystem on Google Play:

    • Review Google Play policies: Google Play’s policies help us provide a safe and high-quality experience, so we don’t allow apps whose generative AI features can produce content that is inappropriate or harmful to users. Make sure you review our AI-Generated Content Policy and ensure that your apps meet these requirements to avoid being rejected or removed from Google Play.

      In particular, apps that generate content using AI must:

        • Give users a way to report or flag offensive content. Monitoring and prioritizing user feedback is especially important for apps with generative AI features, where user interactions directly shape the content and experience.
      Moving image of AI Art Generator app UI experience on an Android mobile device
      Note: Images are examples and subject to change

    • Promote your app responsibly: Advertising your app is an important tool in growing your business, and it's critical to do it in a way that's safe and respectful of users. Ultimately you’re responsible for how your app is marketed and advertised, so review your marketing materials to ensure that your ads accurately represent your app's capabilities, and that all ads and promotional content associated with your app, across all platforms, meet our App Promotion requirements. For example, advertising your app for an inappropriate use case may result in it being removed from Google Play.

    • Rigorously test AI tools and models: You are accountable for the experience in your apps, so it’s critical for you to understand the underlying AI tools and models used to create media, to ensure that these tools are reliable, and that their outputs align with Google Play's policies and respect user safety and privacy. Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature into creating harmful or offensive content. For example, you can use our closed testing feature to share early versions of your app and ask for specific feedback on whether your users get the generated results they expect.

      Thorough understanding and testing are especially important for generative AI, so we recommend that you start documenting this testing now; we may ask to review it in the future to help us better understand how you keep your users protected.

As the AI landscape evolves, we will continue to update our policies and developer tools to address emerging needs and complexities. This includes introducing new app onboarding capabilities in the future to make the process of submitting a generative AI app to Play even more transparent and streamlined. We’ll also share best practices and resources, like our People + AI Guidebook, to support developers in building innovative and responsible apps that enrich the lives of users worldwide.

As always, we're your partners in keeping users safe and are open to your feedback so we can build policies that help you lean into AI to scale your business on Play in ways that delight and protect our shared users.

Top 3 Updates for Building with AI on Android at Google I/O ‘24

Posted by Terence Zhang – Developer Relations Engineer

At Google I/O, we unveiled a vision of Android reimagined with AI at its core. As Android developers, you're at the forefront of this exciting shift. By embracing generative AI (Gen AI), you'll craft a new breed of Android apps that offer your users unparalleled experiences and delightful features.

Gemini models are powering new generative AI apps both in the cloud and directly on-device. You can now build with Gen AI in your Android apps using our most capable models in the cloud with the Google AI client SDK or Vertex AI for Firebase. For on-device, Gemini Nano is our recommended model. We have also integrated Gen AI into developer tools: Gemini in Android Studio supercharges your developer productivity.

Let’s walk through the major announcements for AI on Android from this year's I/O sessions in more detail!

#1: Build AI apps leveraging cloud-based Gemini models

To kickstart your Gen AI journey, design the prompts for your use case with Google AI Studio. Once you are satisfied with your prompts, integrate the Gemini API directly into your app to access Google’s latest models such as Gemini 1.5 Pro and 1.5 Flash, both with one million token context windows (with two million available via waitlist for Gemini 1.5 Pro).

If you want to learn more about and experiment with the Gemini API, the Google AI SDK for Android is a great starting point. For integrating Gemini into your production app, consider using Vertex AI for Firebase (currently in Preview, with a full release planned for Fall 2024). This platform offers a streamlined way to build and deploy generative AI features.
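
To make this concrete, here’s a minimal Kotlin sketch of calling a cloud-hosted Gemini model with the Google AI SDK for Android. The model name, the prompt, and how you supply your API key are illustrative assumptions, so check the SDK documentation for current details.

    import com.google.ai.client.generativeai.GenerativeModel

    // Minimal sketch: generate text from a prompt with a cloud-hosted
    // Gemini model. Pass your Gemini API key from wherever you store it
    // (for example, a BuildConfig field); don't hard-code it in source.
    suspend fun summarizeNote(apiKey: String, noteText: String): String? {
        val model = GenerativeModel(
            modelName = "gemini-1.5-flash",
            apiKey = apiKey
        )
        val response = model.generateContent("Summarize this note: $noteText")
        return response.text
    }

Because generateContent is a suspend function, call it from a coroutine scope (such as viewModelScope) rather than blocking the main thread.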

We are also launching the first Gemini API Developer Competition (terms and conditions apply). Now is the best time to build an app integrating the Gemini API and win incredible prizes! A custom DeLorean, anyone?


#2: Use Gemini Nano for on-device Gen AI

While cloud-based models are highly capable, running inference on-device enables offline use, delivers low-latency responses, and ensures that data won’t leave the device.

At I/O, we announced that Gemini Nano will be getting multimodal capabilities, enabling devices to understand context beyond text – like sights, sounds, and spoken language. This will help power experiences like TalkBack, helping people who are blind or have low vision interact with their devices via touch and spoken feedback. Gemini Nano with Multimodality will be available later this year, starting with Google Pixel devices.

We also shared more about AICore, a system service managing on-device foundation models, enabling Gemini Nano to run on-device inference. AICore provides developers with a streamlined API for running Gen AI workloads with almost no impact on the binary size while centralizing runtime, delivery, and critical safety components for Gemini Nano. This frees developers from having to maintain their own models, and allows many applications to share access to Gemini Nano on the same device.

Gemini Nano is already transforming key Google apps, including Messages and Recorder to enable Smart Compose and recording summarization capabilities respectively. Outside of Google apps, we're actively collaborating with developers who have compelling on-device Gen AI use cases and signed up for our Early Access Program (EAP), including Patreon, Grammarly, and Adobe.

Moving image of Gemini Nano operating in Adobe

Adobe is one of these trailblazers, exploring Gemini Nano to enable on-device processing for part of its AI assistant in Acrobat, providing one-click summaries and allowing users to converse with documents. By strategically combining on-device and cloud-based Gen AI models, Adobe optimizes for performance, cost, and accessibility. Simpler tasks like summarization and suggesting initial questions are handled on-device, enabling offline access and cost savings. More complex tasks, such as answering user queries, are processed in the cloud, ensuring an efficient and seamless user experience.

This is just the beginning: later this year, we'll be investing heavily to enable, and aim to launch with, even more developers.

To learn more about building with Gen AI, check out the I/O talks Android on-device GenAI under the hood and Add Generative AI to your Android app with the Gemini API, along with our new documentation.


#3: Use Gemini in Android Studio to help you be more productive

Besides powering features directly in your app, we’ve also integrated Gemini into developer tools. Gemini in Android Studio is your Android coding companion, bringing the power of Gemini to your developer workflow. Thanks to your feedback since its preview as Studio Bot at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and now include this experience in stable builds of Android Studio.

At Google I/O, we previewed a number of features available to try in the Android Studio Koala preview release, like natural-language code suggestions and AI-assisted analysis for App Quality Insights. We also shared an early preview of multimodal input using Gemini 1.5 Pro, allowing you to upload images as part of your AI queries — enabling Gemini to help you build fully functional Compose UIs from a wireframe sketch.


You can read more about the updates here, and make sure to check out What’s new in Android development tools.

#WeArePlay | How Zülal is using AI to help people with low vision

Posted by Leticia Lago – Developer Marketing

Born in Istanbul, Türkiye with limited sight, Zülal has been a power-user of visual assistive technologies since the age of 4. When she lost her sight completely at 10 years old, she found herself reliant on technology to help her see and experience the world around her.

Today, Zülal is the founder of FYE, her solution to the issues she found with other visual assistive technologies. The app empowers people with low vision to be inspired by the world around them. Employing a team of 4, she heads up technological development and user experience for the app.

Zülal shared her story in our latest film for #WeArePlay, which celebrates people around the world building apps and games. She shared her journey from uploading pictures of her parents to a computer to get descriptions of them as a child, to developing her own visual assistive app. Find out what’s next for Zülal and how she is using AI to help people like herself.

Tell us more about the inspiration behind FYE.

Today, there are around 330 million people with severe to moderate visual impairment. Visual assistive technology is life-changing for these people, giving them back a sense of independence and a connection to the world around them. I’m a poet and composer, and in order to create I needed this tech so that I could see and describe the world around me. Before developing FYE, the visual assistive technology I was relying on was falling short. I wanted to take back control. I didn’t want to sit back, wait and see what technology could do for me - I wanted to harness its power. So I did.

Why was it important for you to build FYE?

I never wanted to be limited by having low vision. I’ve always thought, how can I make this better? How can I make my life better? I want to do everything, because I can. I really believe that there’s nothing I can’t do. There’s nothing WE can’t do. Having a founder like me lead the way in visual assistive technology illustrates just that. We’re taking back control of how we experience the world around us.

What’s different about FYE?

With our app, I believe our audience can really see the world again. It uses a combination of AI and human input to describe the world around them to our users. It incorporates an AI model trained on a dataset of over 15 million data points, so it really encompasses all the varying factors that make up the world of everyday visual experiences. The aim was to have descriptions as vivid as if I was describing my surroundings myself. It’s the small details that make a big difference.

What’s next for your app?

We already have personalized AI outputs so the user can create different AI assistants to suit different situations. You can use it to work across the internet as you’re browsing or shopping. I use it a lot for cooking - where the AI can adapt and learn to suit any situation. We are also collaborating with places where people with low vision might struggle, like the metro and the airport. We’ve built in AI outputs in collaboration with these spaces so that anyone using our app will be able to navigate those spaces with confidence. I’m currently working on evolving From Your Eyes as an organization, reimagining the app as one element of the organization under the new name FYE. Next, we’re exploring integrations with smart glasses and watches to bring our app to wearables.

Discover more #WeArePlay stories and share your favorites.




Android Device Streaming, powered by Firebase, is now in Beta

Posted by Adarsh Fernando, Senior Product Manager, Android Developer Tools

Validating your app on a range of Android screens is an important step to developing a high-quality Android app. However, getting access to the device you need, when you need it, can be challenging and time-consuming. From trying to reproduce a device-specific behavior on a Samsung device to testing your adaptive app layouts on the Google Pixel Fold, having the right device at the right time is critical.

To address this app developer use case, we created Android Device Streaming, powered by Firebase. With just a few clicks, you and your team can access real physical devices, such as the latest Pixel and Samsung devices, and use them in the IDE in many of the ways you would use a physical device sitting on your desk.

Animation of using Device Streaming in Android Studio
Android Device Streaming, powered by Firebase, available in Android Studio Jellyfish

Today, Android Device Streaming is in beta and is available to all Android developers using Android Studio Jellyfish or later. We’ve also added new devices to the catalog and introduced flexible pricing that provides low-cost access to the latest Android devices.

Read below to learn what changes are in this release, as well as common questions around uses, security, and pricing. However, if you want to get started right away and try Android Device Streaming at no cost, see our getting started guide.

What can you do with Android Device Streaming?

If you’ve ever used Device Mirroring, you know that Android Studio lets you see the screen of your local physical device within the IDE window. Without having to physically reach out to your device, you’re able to change the device orientation, change the posture of foldables, simulate pressing physical buttons, interact with your app, and more. Android Device Streaming leverages these same capabilities, allowing you to connect and interact with remote physical devices provided by Firebase.

Screen capture of using the debugger with Android Device Streaming
Using the Debugger with Android Device Streaming

When you use Android Studio to request a device from Android Device Streaming, the IDE establishes a secure ADB over SSL connection to the device. The connection also lets you use familiar tools in Android Studio that communicate with the device, such as the Debugger, Profiler, Device Explorer, Logcat, Compose Live Edit, and more. These tools let you more accurately validate, test, and debug the behavior of your app on real OEM hardware.

What devices would my team have access to?

Android Device Streaming gives you and your team access to a number of devices running Android versions 8.1 through 14. You have access to the latest flagship devices from top device manufacturers, such as Google Pixel and Samsung. You can expand testing your app across more form factors with access to the latest foldables and tablets, such as the Samsung Tab S8 Ultra.

Screen capture of browsing the list of devices and selecting the one you want to use in Android Studio
Browse and select devices you want to use from Android Studio

And we’re frequently adding new devices to our existing catalog of 20+ device models, such as the following recent additions:

    • Samsung Galaxy Z Fold5
    • Samsung Galaxy S23 Ultra
    • Google Pixel 8a

Without having to purchase expensive devices, each team member can access Firebase’s catalog of devices in just a few clicks, for as long as they need—giving your team confidence that your app looks great across a variety of popular devices.


Google OEM partner logos - Samsung, Google Pixel, Oppo, and Xiaomi

As we mentioned at Google I/O ‘24, we’re partnering with top Original Equipment Manufacturers (OEMs), such as Samsung, Google Pixel, Oppo, and Xiaomi, to expand device selection and availability even further in the months to come. This helps the catalog of devices grow and stay ahead of ecosystem trends, so that you can validate that your apps work great on the latest devices before they reach the majority of your users.

Is Android Device Streaming secure?

Android Device Streaming, powered by Firebase, takes the security and privacy of your device sessions very seriously. Firebase devices are hosted in secure global data centers and Android Studio uses an SSL connection to connect to the device.

A device that you’ve used to install and test your app on is never shared with another user or Google service before being completely erased and factory reset. When you’re done using a device, you can do this yourself by clicking “Return and Erase Device” to fully erase and factory reset it. The same applies if the session expires and the device is returned automatically.

Screen capture of Return and Erase Device function in Android Device Streaming
When your session ends, the device is fully erased and factory reset.

How much does Android Device Streaming cost?

Depending on your Firebase project’s pricing plan, you can use Android Device Streaming with the following pricing:

    • Starting June 1, 2024, for a promotional period:
        • (no cost) Spark plan: 120 no-cost minutes per project, per month
        • Blaze plan: 120 no-cost minutes per project, per month; 15 cents for each additional minute
    • On or around February 2025, the promotional period will end and billing will be based on the following quota limits:
        • (no cost) Spark plan: 30 no-cost minutes per project, per month
        • Blaze plan: 30 no-cost minutes per project, per month; 15 cents for each additional minute
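
For example, on the Blaze plan during the promotional period, a team that streams devices for 200 minutes in a month pays only for the 80 minutes beyond the included 120, or 80 × $0.15 = $12.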

With no monthly or yearly contracts, Android Device Streaming’s per-minute billing provides unparalleled flexibility for you and your team. And importantly, you don’t pay for any period of time required to set up the device before you connect, or erase the device after you end your session. This allows you and your team to save time and costs compared to purchasing and managing your own device lab.

To learn more, see Usage levels, quotas, and pricing.

What’s next

We’re really excited for you and your team to try Android Device Streaming, powered by Firebase. We think it’s an easy and cost-effective way for you to access the devices you need, when you need them, and right from your IDE, so that you can ensure the best quality and functionality of your app for your users.

The best part is, you can try out this new service in just a few clicks and at no cost. And our economical per-minute pricing provides increased flexibility for your team to go beyond the monthly quota, so that you only pay for the time you’re actively connected to a device—no subscriptions or long-term commitments required.

You can expect that the service will be adding more devices from top OEM partners to the catalog, to ensure that device selection remains up-to-date and becomes increasingly diverse. Try Android Device Streaming today and share your experience with the Android developer community on LinkedIn, Medium, YouTube, or X.

Top 3 Updates for Building Excellent Apps at Google I/O ‘24

Posted by Tram Bui, Developer Programs Engineer, Developer Relations

Google I/O 2024 was filled with the latest Android updates, equipping you with the knowledge and tools you need to build exceptional apps that delight users and stand out from the crowd.

Here are our top three announcements for building excellent apps from Google I/O 2024:

#1: Enhancing User Experience with Android 15

Android 15 introduces a suite of enhancements aimed at elevating the user experience:

    • Edge-to-Edge Display: Take advantage of the default edge-to-edge experience offered by Android 15. Design interfaces that seamlessly extend to the edges of the screen, optimizing screen real estate and creating an immersive visual experience for users (see the sketch after this list).
    • Predictive Back: Predictive back can enhance navigation fluidity and intuitiveness. The system animations are no longer behind a Developer Option, which means users will be able to see helpful preview animations. Predictive back support is available for both Compose and Views.
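
To make the edge-to-edge item concrete, one way to opt in on earlier Android versions is the AndroidX enableEdgeToEdge() helper; on Android 15, apps targeting SDK 35 get this behavior by default. A minimal Kotlin sketch:

    import android.os.Bundle
    import androidx.activity.ComponentActivity
    import androidx.activity.enableEdgeToEdge

    class MainActivity : ComponentActivity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Draw behind the system bars: the default on Android 15,
            // opt-in on earlier versions.
            enableEdgeToEdge()
            // Set your content and handle window insets so controls
            // aren't obscured by the status and navigation bars.
        }
    }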

#2: Stylus Support on Large Screens

Android's enhanced stylus support brings exciting capabilities:

    • Stylus Handwriting: Android now supports handwriting input in text fields for both Views and Compose. Users can seamlessly input text using their stylus without having to switch input methods, which can offer a more natural and intuitive writing experience (see the sketch after this list).
    • Reduced Stylus Latency: To enhance the responsiveness of stylus interactions, Android introduces two new APIs designed to lower stylus latency. Android developers have seen great success with our low latency libraries, with Infinite Painter achieving a 5x reduction in latency, from 60-90 ms down to 8-16 ms.
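
To illustrate the handwriting item: handwriting into text fields is handled by the platform and the IME on Android 13 and later, and individual views can opt out or explicitly opt back in. In this sketch, noteField is a hypothetical EditText from your layout.

    import android.os.Build
    import android.widget.EditText

    // Stylus handwriting into text fields is on by default on Android 13+
    // with a supporting IME; a view can opt out (or back in) explicitly.
    fun configureHandwriting(noteField: EditText) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
            noteField.isAutoHandwritingEnabled = true
        }
    }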

#3: Wear OS 5: Watch Face Format, Conservation, and Performance

In the realm of Wear OS, we are focused on power conservation and performance enhancements:

    • Enhanced Watch Face Format: We've introduced improvements to the Watch Face Format, making it easier for developers to customize and optimize watch faces. These enhancements can enable the creation of more responsive, visually appealing watch faces that delight users.
    • Power Conservation: Wear OS 5 prioritizes power efficiency and battery conservation. It is now available in developer preview along with a new emulator, so you can leverage these improvements to create Wear OS apps that deliver exceptional battery life without compromising functionality.

There you have it: the top updates from Google I/O 2024 to help you build excellent apps. Excited to explore more? Check out the full playlist for deeper insights into these announcements and other exciting updates unveiled at Google I/O.

A Developer’s Roadmap to Predictive Back (Views)

Posted by Ash Nohe and Tram Bui – Developer Relations Engineers

Before you read on, this topic is scoped to Views. Predictive Back with Compose is easier to implement and not included in this blog post. To learn how to implement Compose with Predictive Back, see the Add predictive back animations codelab and the I/O workshop Improve the user experience of your Android app.

This blog post aims to shed light on the various dependencies and requirements needed to support predictive back animations in your Views-based app.

First, view the Predictive Back Requirements table to understand whether a particular animation requires a manifest flag, a specific compileSdk version, additional libraries, or hidden developer options to function.

Then, start your quest. Here are your milestones:

  1. Upgrade Kotlin milestone
  2. Back-to-home animation milestone
  3. Migrate all activities milestone
  4. Fragment milestone
  5. Material Components (Views) milestone
  6. [Optional] AndroidX transitions milestone

Milestones

Upgrade Kotlin milestone

The first milestone is to upgrade to Kotlin 1.8.0 or higher, which is required for other Predictive Back dependencies.
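
A minimal sketch, assuming you declare plugin versions in your root build.gradle.kts rather than in a version catalog:

    // Root build.gradle.kts: require the Kotlin Android plugin at 1.8.0 or higher.
    plugins {
        id("org.jetbrains.kotlin.android") version "1.8.0" apply false
    }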


Back-to-home animation milestone

The back-to-home animation is the keystone predictive back animation.

To get this animation, add android:enableOnBackInvokedCallback="true" in your AndroidManifest.xml for your root activity if you are a multi-activity app (see per-activity opt-in), or at the application level if you are a single-activity app. After this, you’ll see both the back-to-home animation and a cross-task animation where applicable, which are visible to users in Android 15+ and behind a developer option in Android 13 and 14.

If you are intercepting back events in your root activity (e.g. MainActivity), you can continue to do so but you’ll need to use supported APIs and you won’t get the back-to-home animation. For this reason, we generally recommend you only intercept back events for UI logic; for example, to show a dialog asking the user to save before they quit.
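
To sketch what that looks like with the supported APIs (the save dialog and the unsaved-changes state here are hypothetical pieces of your app, not Android APIs):

    import android.os.Bundle
    import androidx.activity.ComponentActivity
    import androidx.activity.addCallback

    class MainActivity : ComponentActivity() {

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Only intercept back while there is unsaved work, so the
            // back-to-home animation still plays in the common case.
            val saveGuard = onBackPressedDispatcher.addCallback(this, enabled = false) {
                showSaveDialog()
            }
            // Flip saveGuard.isEnabled to true when unsaved changes appear,
            // and back to false once they are saved.
        }

        private fun showSaveDialog() {
            // Hypothetical: ask the user to save before quitting.
        }
    }

The key point is to keep the callback disabled whenever you have nothing to intercept; an always-enabled callback suppresses the back-to-home animation.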

See the Add support for the predictive back gesture guide for more details.


Migrate all activities milestone

If you are a multi-activity app, you’ll need to opt in and handle back events within those activities too to get a system-controlled cross-activity animation. Learn more about per-activity opt-in, available for devices running Android 14+. The cross-activity animation is visible to users in Android 15+ and behind a developer option in Android 13 and 14.

Custom cross-activity animations are also available with overrideActivityTransition.
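
As a rough sketch (Android 14+ only; R.anim.slide_in and R.anim.slide_out are hypothetical animation resources in your app):

    import android.app.Activity
    import android.os.Build

    // Register custom open/close activity transitions on Android 14+.
    // R.anim.slide_in / R.anim.slide_out are hypothetical anim resources.
    fun registerCustomTransitions(activity: Activity) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
            activity.overrideActivityTransition(
                Activity.OVERRIDE_TRANSITION_OPEN, R.anim.slide_in, R.anim.slide_out
            )
            activity.overrideActivityTransition(
                Activity.OVERRIDE_TRANSITION_CLOSE, R.anim.slide_in, R.anim.slide_out
            )
        }
    }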


Fragment milestone

Next, you’ll want to focus on your fragment animations and transitions. This requires updating to AndroidX Fragment 1.7.0 and Transition 1.5.0 or later and using Animator or AndroidX Transitions. Assuming these requirements are met, your existing fragment animations and transitions will animate in step with the back gesture. You can also use Material motion with fragments. Most Material motions support predictive back as of 1.12.0-alpha02 or higher, including MaterialFadeThrough, MaterialSharedAxis, and MaterialFade.
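
For example, a minimal sketch of a fragment whose Material motion tracks the back gesture once those versions are in place (DetailFragment is a hypothetical destination in your app):

    import android.os.Bundle
    import androidx.fragment.app.Fragment
    import com.google.android.material.transition.MaterialSharedAxis

    class DetailFragment : Fragment() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Material motion transitions animate in step with the
            // predictive back gesture when the AndroidX requirements are met.
            enterTransition = MaterialSharedAxis(MaterialSharedAxis.X, /* forward= */ true)
            returnTransition = MaterialSharedAxis(MaterialSharedAxis.X, /* forward= */ false)
        }
    }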

Don’t strive to make your fragment transitions look like the system’s cross-activity transition. We recommend this full screen surface transition instead.

Learn more about Fragments and Predictive Back.


Material Components milestone

Finally, you’ll want to take advantage of the Material Component View animations available for Predictive Back. Learn more about available components.


After this, you’ve completed your quest to support Predictive Back animations in your Views-based app.

[Optional] AndroidX Transitions milestone

If you’re up for more, you might also ensure your AndroidX transitions are supported with Predictive Back. Read more about AndroidX Transitions and the Predictive Back Progress APIs.
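
As a rough skeleton of the idea, the Progress APIs hand you the gesture’s progress so you can seek a transition in step with it; where exactly you start and seek the transition depends on your UI, so those steps are left as comments below.

    import androidx.activity.BackEventCompat
    import androidx.activity.OnBackPressedCallback

    // Skeleton of a callback that follows the back gesture; wire the
    // progress into a seekable AndroidX transition (see the Progress APIs
    // documentation for the transition side).
    val followBack = object : OnBackPressedCallback(true) {
        override fun handleOnBackStarted(backEvent: BackEventCompat) {
            // Prepare (but don't run) the transition the gesture should control.
        }

        override fun handleOnBackProgressed(backEvent: BackEventCompat) {
            // Seek the transition to backEvent.progress (0f..1f).
        }

        override fun handleOnBackPressed() {
            // Gesture committed: run the transition to its end state.
        }

        override fun handleOnBackCancelled() {
            // Gesture cancelled: return the transition to its start state.
        }
    }

Register the callback with your activity’s onBackPressedDispatcher and keep it enabled only while the controlled UI is showing.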


Other Resources