
Game on: Elevate your gameplay across mobile and PC

We’re stepping up our multiplatform gaming offering with exciting news dropping at this year’s Game Developers Conference (GDC). We’re bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the user-facing updates on The Keyword. At GDC, we’ll be diving into all of the latest games coming to Play, plus new developer tools that’ll help improve gameplay across the Android ecosystem.
Today, we’re sharing a closer look at what’s new from Play. We’re expanding our support for native PC games with a new earnback program and making Google Play Games on PC generally available this year with major upgrades. Check out the video or keep reading below.
Google Play connects developers with over 2 billion monthly active players worldwide. Our tools and features help you reach these players across a wide range of devices to drive engagement and revenue. But we know the gaming landscape is constantly evolving. More and more players enjoy immersive experiences on PC and want the flexibility to play their favorite games on any screen.
That’s why we’re making even bigger investments in our PC gaming platform. Google Play Games on PC was launched to help mobile games reach more players on PC. Today, we’re expanding this support to native PC games, enabling more developers to connect with our massive player base.
For games that are designed with a PC-first audience in mind, we’ve added even more helpful tools to our native PC program. Games like Wuthering Waves, Remember of Majesty, Genshin Impact, and Journey of Monarch have seen great success on the platform. Based on feedback from early access partners, we’re taking the program even further, with comprehensive support across game development, distribution, and growth on the platform.
We’re opening up the program for all native PC games - including PC-only games - this year. Learn more about the eligibility requirements and how to join the program.
Bringing your game to PC unlocks a whole new audience of engaged players. To help maximize your discoverability, we’re making all mobile games available on PC by default with the option to opt out anytime.
Games will display a playability badge indicating their compatibility with PC. "Optimized" means that a game meets all of our quality standards for a great gaming experience while "playable" means that the game meets the minimum requirements to play well on a PC. With the support of our new custom control mappings, many games can be playable right out of the box. Learn more about the playability criteria and how to optimize your games for PC today.
To enhance our PC experience, we’ve made major upgrades to the platform. Now, gamers can enjoy the full Google Play Games on PC catalog on even more devices, including AMD laptops and desktops. We’re partnering with PC OEMs to make Google Play Games accessible right from the start menu on new devices starting this year.
We’re also bringing new features for players to customize their gaming experiences. Custom controls are now available to help players tailor their setup for optimal comfort and performance. Rolling out this month, we’re adding a handy game sidebar for quick adjustments and enabling multi-account and multi-instance support by popular demand.
To help you boost engagement, we’re also rolling out a more seamless Play Points experience on PC. Play Points balance is now easier to track and more rewarding, with up to 10x points boosters on Google Play Games. This means more opportunities for players to earn and redeem points for in-game items and discounts, enhancing the overall PC experience.
More developers than ever are launching games on PC, presenting an opportunity to reach a rapidly growing audience. We want to make it easier for developers to reach these players with Google Ads. We’re working on a solution to help developers run user acquisition campaigns for both mobile emulated and native PC titles within Google Play Games on PC. We’re still in the early stages of partner testing, but we look forward to sharing more details later this year.
We're celebrating all that’s to come to Google Play Games on PC with players and developers. Take a look at the behind-the-scenes content on our social channels and editorial features on Google Play. At GDC, you can dive into the complete gaming experience that is available on the best Android gaming devices. If you’ll be there, please stop by and say hello - we’re at the Moscone Center West Hall!
Today, we’re sharing a closer look at what’s new from Android. We’re making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we’re enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplay sessions. Check out the video or keep reading below.
These days, games require more processing power for realistic graphics and cutting-edge visuals. Vulkan is a low-level graphics API that helps developers maximize the performance of modern GPUs, and today we’re making it the official graphics API for Android. This unlocks advanced features like ray tracing and multithreading for realistic and immersive gaming visuals. For example, Diablo Immortal used Vulkan to implement ray tracing, bringing the world of Sanctuary to life with spectacular special effects, from fiery explosions to icy blasts.
For casual games like Pokémon TCG Pocket, which draws players into the vibrant world of each Pokémon, Vulkan helps optimize graphics across a broad range of devices to ensure a smooth and engaging experience for every player.
We’re excited to announce that Android is transitioning to a modern, unified rendering stack with Vulkan at its core. Starting with our next Android release, more devices will use Vulkan to process all graphics commands. If your game is running on OpenGL, it will use ANGLE as a system driver that translates OpenGL to Vulkan. We recommend testing your game on ANGLE today to ensure it’s ready for the Vulkan transition.
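One way to do that today (a sketch, assuming a device or emulator that ships the ANGLE system driver; com.example.mygame is a placeholder package name) is to force ANGLE for your package with the documented global settings, then relaunch your game and verify rendering and performance:

$ adb shell settings put global angle_gl_driver_selection_pkgs com.example.mygame
$ adb shell settings put global angle_gl_driver_selection_values angle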
We’re also partnering with major game engines to make Vulkan integration easier. With Unity 6, you can configure Vulkan per device, while older versions can access this setting through plugins. Over 45% of sessions from new games on Unity use Vulkan, and we expect this number to grow rapidly.
To simplify workflows further, we’re teaming up with the Samsung Austin Research Center to create an integrated GPU profiler toolchain for Vulkan and AI/ML optimization. Coming later this year, this tool will enable developers to make graphics, memory and compute workloads more efficient.
Android Dynamic Performance Framework (ADPF) enables developers to adjust game performance in real time based on the thermal state of the device, and it’s getting a big update today to provide longer and smoother gameplay sessions. ADPF is designed to work across a wide range of devices, including models like the Pixel 9 family and the Samsung S25 Series. We’re excited to see MMORPGs like Lineage W integrating ADPF to optimize performance on their core target devices.
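To make the integration concrete, here’s a minimal Kotlin sketch of the platform ADPF building blocks (thermal headroom requires API 30+, performance hint sessions API 31+); gameThreadIds and targetFrameTimeNanos are illustrative placeholders for your game’s threads and frame budget, not names from ADPF itself:

import android.content.Context
import android.os.PerformanceHintManager
import android.os.PowerManager

class AdpfHelper(context: Context) {
    private val powerManager =
        context.getSystemService(Context.POWER_SERVICE) as PowerManager
    private val hintManager =
        context.getSystemService(Context.PERFORMANCE_HINT_SERVICE) as PerformanceHintManager

    // Tell the scheduler which threads do frame work and their target duration.
    fun startSession(gameThreadIds: IntArray, targetFrameTimeNanos: Long): PerformanceHintManager.Session? =
        hintManager.createHintSession(gameThreadIds, targetFrameTimeNanos)

    // 0.0 means no throttling expected; values near 1.0 forecast severe
    // throttling within ~10 seconds, so scale the graphics workload down.
    fun thermalHeadroom(): Float = powerManager.getThermalHeadroom(10)
}

Each frame, you would report the measured work duration to the session with reportActualWorkDuration() so the scheduler can adapt.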
Here’s how we're enhancing ADPF with better performance and simplified integration:
Once you’ve launched your game, Play Console offers the tools to monitor and improve your game's performance. We’re now including Low Memory Killers (LMK) in Android vitals, giving you insight into memory constraints that can cause your game to crash. Android vitals is your one-stop destination for monitoring metrics that impact your visibility on the Play Store, like slow sessions. You can find this information next to Reach and devices, which provides updates on your game's user distribution and notifies you of device-specific issues.
We're launching a pilot program to simplify the process of bringing PC games to mobile. It provides support starting from Android game development all the way through publishing your game on Play. Starting this month, games like DREDGE and TABS Mobile are growing their mobile audience using this program. Many more are following in their footsteps this year, including Disco Elysium. You can express your interest to join the PC to mobile program.
You can learn more about Android game development from our developer site. We can’t wait to see your title join the ranks of these amazing games built for Android. And if you’ll be at GDC next week, we’d love to say hello - stop by at the Moscone Center West Hall!
Jetpack WindowManager keeps getting better. WindowManager gives you tools to build adaptive apps that work seamlessly across all kinds of large screen devices. Version 1.4, now stable, introduces new features that make multi-window experiences even more powerful and flexible. While Jetpack Compose is still the best way to create app layouts for different screen sizes, 1.4 makes some big improvements to activity embedding, including activity stack pinning, pane expansion, and dialog full-screen dim. Multi-activity apps can easily take advantage of all these great features.
WindowManager 1.4 introduces a range of enhancements. Here are some of the highlights.
We’ve updated the WindowSizeClass API to support custom values. We changed the API shape to make it easy and extensible to support custom values and to add new values in the future. The high-level changes are as follows:
Here’s a migration example:
// old
val sizeClass = WindowSizeClass.compute(widthDp, heightDp)
when (sizeClass.widthSizeClass) {
    COMPACT -> doCompact()
    MEDIUM -> doMedium()
    EXPANDED -> doExpanded()
    else -> doDefault()
}

// new
val sizeClass = WindowSizeClass.BREAKPOINTS_V1
    .computeWindowSizeClass(widthDp, heightDp)
when {
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_EXPANDED_LOWER_BOUND) -> {
        doExpanded()
    }
    sizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_MEDIUM_LOWER_BOUND) -> {
        doMedium()
    }
    else -> {
        doCompact()
    }
}
Some things to note in the new API:
Activity stack pinning provides a way to keep an activity stack always on screen, no matter what else is happening in your app. This new feature lets you pin an activity stack to a specific window, so the top activity stays visible even when the user navigates to other parts of the app in a different window. This is perfect for things like live chats or video players that you want to keep on screen while users explore other content.
private fun pinActivityStackExample(taskId: Int) {
    val splitAttributes: SplitAttributes = SplitAttributes.Builder()
        .setSplitType(SplitAttributes.SplitType.ratio(0.66f))
        .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)
        .build()

    val pinSplitRule = SplitPinRule.Builder()
        .setDefaultSplitAttributes(splitAttributes)
        .build()

    SplitController.getInstance(applicationContext)
        .pinTopActivityStack(taskId, pinSplitRule)
}
The new pane expansion feature, also known as interactive divider, lets you create a visual separation between two activities in split-screen mode. You can make the pane divider draggable so users can resize the panes – and the activities in the panes – on the fly. This gives users control over how they want to view the app’s content.
val splitAttributesBuilder: SplitAttributes.Builder = SplitAttributes.Builder()
    .setSplitType(SplitAttributes.SplitType.ratio(0.33f))
    .setLayoutDirection(SplitAttributes.LayoutDirection.LEFT_TO_RIGHT)

if (WindowSdkExtensions.getInstance().extensionVersion >= 6) {
    splitAttributesBuilder.setDividerAttributes(
        DividerAttributes.DraggableDividerAttributes.Builder()
            .setColor(getColor(context, R.color.divider_color))
            .setWidthDp(4)
            .setDragRange(DividerAttributes.DragRange.DRAG_RANGE_SYSTEM_DEFAULT)
            .build()
    )
}
val splitAttributes: SplitAttributes = splitAttributesBuilder.build()
WindowManager 1.4 gives you more control over how dialogs dim the background. With dialog full-screen dim, you can choose to dim just the container where the dialog appears or the entire task window for a unified UI experience. The entire app window dims by default when a dialog opens (see EmbeddingConfiguration.DimAreaBehavior.ON_TASK). To dim only the container of the activity that opened the dialog, use EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK. This gives you more flexibility in designing dialogs and makes for a smoother, more coherent user experience. Temu is among the first developers to integrate this feature; the full-screen dialog dim has reduced invalid touches by about 5%.
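For illustration, a minimal sketch of selecting the dim area, assuming the androidx.window.embedding surface in WindowManager 1.4 (verify the exact setter against the release docs):

ActivityEmbeddingController.getInstance(context)
    .setEmbeddingConfiguration(
        EmbeddingConfiguration.Builder()
            .setDimAreaBehavior(EmbeddingConfiguration.DimAreaBehavior.ON_ACTIVITY_STACK)
            .build()
    )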
WindowManager 1.4 makes building apps that work flawlessly on foldables straightforward by providing more information about the physical capabilities of the device. The new WindowInfoTracker#supportedPostures API lets you know if a device supports tabletop mode, so you can optimize your app's layout and features accordingly.
val currentSdkVersion = WindowSdkExtensions.getInstance().extensionVersion
val message = if (currentSdkVersion >= 6) {
    val supportedPostures =
        WindowInfoTracker.getOrCreate(LocalContext.current).supportedPostures
    buildString {
        append(supportedPostures.isNotEmpty())
        if (supportedPostures.isNotEmpty()) {
            append(" ")
            append(
                supportedPostures.joinToString(
                    separator = ",", prefix = "(", postfix = ")"))
        }
    }
} else {
    "N/A (WindowSDK version 6 is needed, current version is $currentSdkVersion)"
}
WindowManager 1.4 includes several API changes and additions to support the new features. Notable changes include:
To start using Jetpack WindowManager 1.4 in your Android projects, update your app dependencies in build.gradle.kts to the latest stable version:
dependencies {
    implementation("androidx.window:window:1.4.0")
    // or, if you're using the WindowManager testing library:
    testImplementation("androidx.window:window-testing:1.4.0")
}
Happy coding!
At Google, we are committed to empowering developers as they build exceptional health and fitness experiences. Core to that commitment is Health Connect, an Android platform that allows health and fitness apps to store and share the same on-device data. Devices running Android 14 or later, or with the pre-installed APK, include Health Connect by default in Settings. For pre-Android 14 devices, Health Connect is available for download from the Play Store.
We're excited to announce significant Health Connect updates like the Jetpack SDK Beta, new data types, and new permissions that will enable richer, more insightful app functionality.
We are excited to announce the beta release of our Jetpack SDK! Since its initial release, we've dedicated significant effort to improving data completeness, with a particular focus on enriching the metadata associated with each data point.
In the latest SDK, we’re introducing two key changes designed to ensure richer metadata and unlock new possibilities for you and your users:
To deliver more accurate and insightful data, the Beta introduces a requirement to specify one of four recording methods (manually entered, actively recorded, automatically recorded, or unknown) when writing data to Health Connect. This ensures increased data clarity, enhanced data analysis, and an improved user experience:
If your app currently does not set metadata when creating a record:
Before
StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
) // error: metadata is not provided
After
StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
    metadata = Metadata.manualEntry()
)
If your app currently calls Metadata constructor when creating a record:
Before
StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
    metadata = Metadata(
        clientRecordId = "client id",
        recordingMethod = RECORDING_METHOD_MANUAL_ENTRY,
    ), // error: Metadata constructor not found
)
After
StepsRecord(
    count = 888,
    startTime = START_TIME,
    endTime = END_TIME,
    metadata = Metadata.manualEntry(clientRecordId = "client id"),
)
You will also be required to specify the device type when creating a Device object. A Device object is required for automatically (RECORDING_METHOD_AUTOMATICALLY_RECORDED) or actively (RECORDING_METHOD_ACTIVELY_RECORDED) recorded data.
Before
Device() // error: type not provided
After
Device(type = Device.Companion.TYPE_PHONE)
We believe these updates will significantly improve the quality of data within your applications and empower you to create more insightful user experiences. We encourage you to explore the Jetpack SDK Beta, review the updated Metadata page, and familiarize yourself with these changes.
To enable richer, background-driven health and fitness experiences while maintaining user trust, Health Connect now features a dedicated background reads permission.
This permission allows your app to access Health Connect data while running in the background, provided the user grants explicit consent. Users retain full control, with the ability to manage or revoke this permission at any time via Health Connect settings.
Let your app read health data even in the background with the new Background Reads permission. Declare the following permission in your manifest file:
<manifest>
    <uses-permission android:name="android.permission.health.READ_HEALTH_DATA_IN_BACKGROUND" />
    ...
</manifest>
Use the Feature Availability API to check whether the background read feature is available, which depends on the version of Health Connect on the user's device.
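As a sketch of what that check can look like with the Jetpack androidx.health.connect client (API names reflect the SDK Beta surface and should be verified against the current release; context is any Context your app holds):

val healthConnectClient = HealthConnectClient.getOrCreate(context)
val status = healthConnectClient.features.getFeatureStatus(
    HealthConnectFeatures.FEATURE_READ_HEALTH_DATA_IN_BACKGROUND
)
if (status == HealthConnectFeatures.FEATURE_STATUS_AVAILABLE) {
    // Background reads are supported by this device's Health Connect version.
}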
By default, when granted read permission, your app can access historical data from other apps for the preceding 30 days from the initial permission grant. To enable access to data beyond this 30-day window, Health Connect introduces the PERMISSION_READ_HEALTH_DATA_HISTORY permission. This allows your app to provide new users with a comprehensive overview of their health and wellness history.
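As with background reads, your app declares the corresponding permission in its manifest. A sketch, assuming the permission string mirrors the PERMISSION_READ_HEALTH_DATA_HISTORY constant:

<manifest>
    <uses-permission android:name="android.permission.health.READ_HEALTH_DATA_HISTORY" />
    ...
</manifest>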
Users are in control of their data with both background reads and history reads. Both capabilities require developers to declare the respective permissions, and users must grant the permission before developers can access their data. Even after granting permission, users have the option of revoking access at any time from Health Connect settings.
Health Connect now offers expanded data types, enabling developers to build richer user experiences and provide deeper insights. Check out the following new data types:
These new data types empower developers to create more connected and insightful health and fitness applications, providing users with a holistic view of their well-being.
To learn more about all new APIs and bug fixes, check out the full release notes.
Whether you are just getting started with Health Connect or are looking to implement the latest features, there are many ways to learn more and have your voice heard.
We can’t wait to see what you create!
Widgets are now available on your Pixel Tablet lock screens! Lock screen widgets empower users to create a personalized, always-on experience. Whether you want to easily manage smart home devices like lights and thermostats, or build dashboards for quick access and control of vital information, this blog post will answer your key questions about lock screen widgets on Android. Read on to discover when, where, how, and why they'll be on a lock screen near you.
A: Lock screen widgets will be available in AOSP for tablets and mobile starting with the release after Android 16 (QPR1). This update is scheduled to be pushed to AOSP in late Summer 2025. Lock screen widgets are already available on Pixel Tablets.
A: No, widgets allowed on the lock screen have the same requirements as any other widgets. Widgets on the lock screen should follow the same guidelines as home screen widgets, including quality, sizing, and configuration. If a widget launches an activity from the lock screen, users must authenticate to launch the activity, or the activity should declare android:showWhenLocked="true" in its manifest entry.
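For the second case, a minimal manifest sketch (MyWidgetDetailActivity is a placeholder name for an activity your widget launches):

<activity
    android:name=".MyWidgetDetailActivity"
    android:showWhenLocked="true" />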
A: Currently, lock screen widgets can be tested on Pixel Tablet devices. You can enable lock screen widgets and add your widget.
A: All widgets are compatible with the lock screen widget experience. To prioritize user choice and customization, we've made all widgets available. For the best experience, please make sure your widget supports dynamic color and dynamic resizing. Lock screen widgets are sized to approximately 4 cells wide by 3 cells tall on the launcher, but exact dimensions vary by device.
A: Important: Apps can choose to restrict the use of their widgets on the lock screen using an opt-out API. To opt out, use the widget category "not_keyguard" in your appwidget info xml file. Place this file in an xml-36 resource folder to ensure backwards compatibility.
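For example, a minimal opt-out sketch (my_appwidget_info.xml is a placeholder file name; the provider's other attributes are elided):

<!-- res/xml-36/my_appwidget_info.xml -->
<appwidget-provider
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:widgetCategory="not_keyguard"
    ... />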
A: No, there are no specific CDD requirements solely for lock screen widgets. However, it's crucial to ensure that any widgets and screensavers that integrate with the framework adhere to the standard CDD requirements for those features.
A: Yes, lock screen widgets were launched on the Pixel Tablet in 2024. Other device manufacturers may update their devices as well once the feature is available in AOSP.
A: The mechanism that triggers the lock screen widget experience is customizable by the OEM. For example, OEMs can choose to use charging or docking status as triggers. Third-party OEMs will need to implement their own posture detection if desired.
A: Yes! Hardware providers can pre-set and automatically display default widgets.
A: Customization of the lock screen widget user interface by OEMs is not supported in the initial release. All lock screen widgets will have the same developer experience on all devices.
Lock screen widgets are poised to give your users new ways to interact with your app on their devices. Today you can leverage your existing widget designs and experiences on the lock screen with Pixel Tablets. To learn more about building widgets, please check out our resources on developer.android.com.
This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.
Android users have demonstrated an increasing desire to create, personalize, and share video content online, whether to preserve their memories or to make people laugh. As such, media editing is a cornerstone of many engaging Android apps, and historically developers have often relied on external libraries to handle operations such as Trimming and Resizing. While these solutions are powerful, integrating and managing external library dependencies can introduce complexity and create challenges around performance and quality.
The Jetpack Media3 Transformer APIs offer a native Android solution that streamlines media editing with fast performance, extensive customizability, and broad device compatibility. In this blog post, we’ll walk through some of the most common editing operations with Transformer and discuss its performance.
To get started with Transformer, check out our Getting Started documentation for details on how to add the dependency to your project and a basic understanding of the workflow when using Transformer. In a nutshell, you’ll:
Aside: You can also use a CompositionPlayer to preview your edits before exporting, but this is out of scope for this blog post, as this API is still a work in progress. Please stay tuned for a future post!
Here’s what this looks like in code:
val mediaItem = MediaItem.Builder().setUri(mediaItemUri).build()
val editedMediaItem = EditedMediaItem.Builder(mediaItem).build()
val transformer = Transformer.Builder(context)
    .addListener(/* Add a Transformer.Listener instance here for completion events */)
    .build()
transformer.start(editedMediaItem, outputFilePath)
Let’s now take a look at four of the most common single-asset media editing operations, starting with Transcoding.
Transcoding is the process of re-encoding an input file into a specified output format. For this example, we’ll request the output to have video in HEVC (H265) and audio in AAC. Starting with the code above, here are the lines that change:
val transformer =
    Transformer.Builder(context)
        .addListener(...)
        .setVideoMimeType(MimeTypes.VIDEO_H265)
        .setAudioMimeType(MimeTypes.AUDIO_AAC)
        .build()
Many of you may already be familiar with FFmpeg, a popular open-source library for processing media files, so we’ll also include FFmpeg commands for each example to serve as a helpful reference. Here’s how you can perform the same transcoding with FFmpeg:
$ ffmpeg -i $inputVideoPath -c:v libx265 -c:a aac $outputFilePath
The next operation we’ll try is Trimming.
Specifically, we’ll set Transformer up to trim the input video from the 3 second mark to the 8 second mark, resulting in a 5 second output video. Starting again from the code in the “Getting set up” section above, here are the lines that change:
// Configure the trim operation by adding a ClippingConfiguration to
// the media item
val clippingConfiguration = MediaItem.ClippingConfiguration.Builder()
    .setStartPositionMs(3000)
    .setEndPositionMs(8000)
    .build()
val mediaItem = MediaItem.Builder()
    .setUri(mediaItemUri)
    .setClippingConfiguration(clippingConfiguration)
    .build()

// Transformer also has a trim optimization feature we can enable.
// This will prioritize Transmuxing over Transcoding where possible.
// See more about Transmuxing further down in this post.
val transformer = Transformer.Builder(context)
    .addListener(...)
    .experimentalSetTrimOptimizationEnabled(true)
    .build()
With FFmpeg:
$ ffmpeg -ss 00:00:03 -i $inputVideoPath -t 00:00:05 $outputFilePath
Next, we can mute the audio in the exported video file.
val editedMediaItem = EditedMediaItem.Builder(mediaItem)
    .setRemoveAudio(true)
    .build()
The corresponding FFmpeg command:
$ ffmpeg -i $inputVideoPath -c copy -an $outputFilePath
And for our final example, we’ll try resizing the input video by scaling it down to half its original height and width.
val scaleEffect = ScaleAndRotateTransformation.Builder()
    .setScale(0.5f, 0.5f)
    .build()
val editedMediaItem = EditedMediaItem.Builder(mediaItem)
    .setEffects(
        Effects(/* audio */ emptyList(), /* video */ listOf(scaleEffect))
    )
    .build()
An FFmpeg command could look like this:
$ ffmpeg -i $inputVideoPath -filter:v "scale=w=trunc(iw/4)*2:h=trunc(ih/4)*2" $outputFilePath
Of course, you can also combine these operations to apply multiple edits on the same video, but hopefully these examples serve to demonstrate that the Transformer APIs make configuring these edits simple.
Here are some benchmarking measurements for each of the 4 operations taken with the Stopwatch API, running on a Pixel 9 Pro XL device:
(Note that performance for operations like these can depend on a variety of factors, such as the current load the device is under, so the numbers below should be taken as rough estimates.)
Input video format: 10s 720p H264 video with AAC audio
- Transcoding to H265 video and AAC audio: ~1300ms
- Trimming video to 00:03-00:08: ~2300ms
- Muting audio: ~200ms
- Resizing video to half height and width: ~1200ms
Input video format: 25s 360p VP8 video with Vorbis audio
- Transcoding to H265 video and AAC audio: ~3400ms
- Trimming video to 00:03-00:08: ~1700ms
- Muting audio: ~1600ms
- Resizing video to half height and width: ~4800ms
Input video format: 4s 8k H265 video with AAC audio
- Transcoding to H265 video and AAC audio: ~2300ms
- Trimming video to 00:03-00:08: ~1800ms
- Muting audio: ~2000ms
- Resizing video to half height and width: ~3700ms
One technique Transformer uses to speed up editing operations is prioritizing transmuxing for basic video edits where possible. Transmuxing refers to the process of repackaging video streams without re-encoding, which ensures high-quality output and significantly faster processing times.
When not possible, Transformer falls back to transcoding, a process that involves first decoding video samples into raw data, then re-encoding them for storage in a new container. Here are some of these differences:
We are continuously implementing further optimizations, such as the recently introduced experimentalSetTrimOptimizationEnabled setting that we used in the Trimming example above.
A trim is usually performed by re-encoding all the samples in the file, but since encoded media samples are stored chronologically in their container, we can improve efficiency by only re-encoding the group of pictures (GOP) between the start point of the trim and the first keyframe at or after the start point, then stream-copying the rest.
Since we only decode and encode a fixed portion of any file, the encoding latency is roughly constant, regardless of the input video duration. For long videos, this improved latency is dramatic. The optimization relies on being able to stitch part of the input file with newly encoded output, which means that the encoder's output format and the input format must be compatible.
If the optimization fails, Transformer automatically falls back to normal export.
As part of Media3, Transformer is a native solution with low integration complexity, is tested on a wide variety of devices to ensure compatibility, and is customizable to fit your specific needs.
To dive deeper, you can explore the Media3 Transformer documentation, run our sample apps, or learn how to complement your media editing pipeline with Jetpack Media3. We’ve already seen app developers benefit greatly from adopting Transformer, so we encourage you to try it out yourself to streamline your media editing workflows and enhance your app’s performance!
Imagen 3, our most advanced image generation model, is now available through Vertex AI in Firebase, making it even easier to integrate into your Android apps.
Designed to generate well-composed images with exceptional details, reduced artifacts, and rich lighting, Imagen 3 represents a significant leap forward in image generation capabilities.
Imagen 3 unlocks exciting new possibilities for Android developers. Generated visuals can adapt to the content of your app, creating a more engaging user experience. For instance, your users can generate custom artwork to enhance their in-app profile. Imagen can also improve your app's storytelling by bringing its narratives to life with delightful personalized illustrations.
You can experiment with image prompts in Vertex AI Studio, and learn how to improve your prompts by reviewing the prompt and image attribute guide.
The integration of Imagen 3 is similar to adding Gemini access via Vertex AI in Firebase. Start by adding the Gradle dependencies to your Android project:
dependencies {
    implementation(platform("com.google.firebase:firebase-bom:33.10.0"))
    implementation("com.google.firebase:firebase-vertexai")
}
Then, in your Kotlin code, create an ImageModel instance by passing the model name and optionally, a model configuration and safety settings:
val imageModel = Firebase.vertexAI.imagenModel(
    modelName = "imagen-3.0-generate-001",
    generationConfig = ImagenGenerationConfig(
        imageFormat = ImagenImageFormat.jpeg(compressionQuality = 75),
        addWatermark = true,
        numberOfImages = 1,
        aspectRatio = ImagenAspectRatio.SQUARE_1x1
    ),
    safetySettings = ImagenSafetySettings(
        safetyFilterLevel = ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
        personFilterLevel = ImagenPersonFilterLevel.ALLOW_ADULT
    )
)
Finally, generate the image by calling generateImages:
val imageResponse = imageModel.generateImages( prompt = "An astronaut riding a horse" )
Retrieve the generated image from the imageResponse and display it as a bitmap as follows:
val image = imageResponse.images.first()
val uiImage = image.asBitmap()
Explore the comprehensive Firebase documentation for detailed API information.
Access to Imagen 3 using Vertex AI in Firebase is currently in Public Preview, giving you an early opportunity to experiment and innovate. For pricing details, please refer to the Vertex AI in Firebase pricing page.
Start experimenting with Imagen 3 today! We're looking forward to seeing how you’ll leverage Imagen 3's capabilities to create truly unique, immersive and personalized Android experiences.
In just a few days, on Thursday, March 13 at 10AM PT, we’ll be dropping our winter episode of #TheAndroidShow, on YouTube and on developer.android.com!
Mobile World Congress, the annual event in Barcelona where Android device makers show off their latest devices, kicked off yesterday. In our winter episode we’ll take a look at these foldables, tablets, and wearables and tell you what you need to get building.
Plus we’ve got some news to share, like a new update for Gemini in Android Studio and some new goodies for game developers ahead of the Game Developers Conference (GDC) in San Francisco later this month. And of course, with the launch of Android XR in December, we’ll also be taking a look at how to get building there. It’s a packed show, and you don’t want to miss it!
Mobile World Congress is a big moment for Android, with partners from around the world showing off their latest devices. And if you’re already building adaptive apps, we wanted to share some of the cool new foldables and tablets that our partners released in Barcelona:
These new devices are a great reason to build adaptive apps that scale across screen sizes and device types. Plus, Android 16 removes the ability for apps to restrict orientation and resizability at the platform level, so you’ll want to prepare. To help you get started, the Compose Material 3 adaptive library enables you to quickly and easily create layouts across all screen sizes while reducing the overall development cost.
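For instance, here’s a minimal sketch using the library’s window size class API (the same breakpoint API shown in the WindowManager section above); TwoPaneLayout() and SinglePaneLayout() are hypothetical composables standing in for your own layouts:

import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowSizeClass

@Composable
fun AdaptiveScreen() {
    val sizeClass = currentWindowAdaptiveInfo().windowSizeClass
    if (sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_EXPANDED_LOWER_BOUND)) {
        TwoPaneLayout() // hypothetical two-pane composable
    } else {
        SinglePaneLayout() // hypothetical single-pane composable
    }
}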
These new devices are just some of the many things we’ll cover in our winter episode - you don’t want to miss it! If you watch live on YouTube, we’ll have folks standing by to answer your questions in the comments. See you on March 13 on YouTube or at developer.android.com/events/show!
Widgets can bring more productive, delightful, and customized experiences to users' home screens, but they can be tricky to design for a high-quality, focused experience. In this blog post, we’ll cover how Widget Canonical Layouts can simplify this process.
But what is a Canonical Layout? It is a common layout pattern that works for various screen sizes. You can use them as starting points: ready-to-use compositions that help layouts adapt for common use cases and screen sizes. Widgets also have Canonical Layouts to get you started crafting higher-quality widgets.
The Widget Canonical Layouts Figma makes it easy to preview your widget content across multiple breakpoints and layout types. Join me in our Figma design resource to explore how they can simplify designing a widget for one of our sample apps, Jetnews.
Jetnews is a sample news reading app, built with Jetpack Compose. With that experience in mind, the primary user journey is reading articles.
With our content and user journey established, we’ll take a glance at which canonical layouts would make sense.
We want to show at least a few new articles with a headline, truncated description, and possible thumbnail, which brings us to the Image + Text Grid layout and maybe the list layout.
Within our new Figma Widget Canonical Layout preview, we can add in some mock content to check out how these layouts will look in various sizes.
Now that we’ve previewed our content in both the grid and list layouts, we don’t have to choose between just one!
The grid layout better displays our content at larger sizes, where we have more room to take advantage of multiple columns and a larger thumbnail image, while the list works nicely at smaller sizes, giving a one-column layout with a smaller thumbnail.
But we can adapt even further to allow the user more resizing flexibility and anticipate different OEM grid sizing. For Jetnews, we decided on an additional extra-small layout to accommodate a smaller grid size and vertical height while still using the List layout. For this size I decided to remove the thumbnail altogether to give the title and action more space.
Consider making these in-between design tweaks as needed (between any of the breakpoints); they can be applied as general rules in your widget designs.
Here are a few guidelines to borrow:
Last, I’ll swap the app icon, round up all the breakpoint sizes, and provide an option with brand colors.
These are ready to send over to dev! Tune in for the code-along to check out how to implement the final widget.
You can find the Widget Canonical Layouts at our new Figma Community Page: figma.com/@androiddesign. Stay tuned for more Android Figma resources.
Check out the official Android documentation for detailed information and best practices on Widgets on Android and Widget Quality Tiers, and join us for the rest of Widget Spotlight week!
This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.