Widgets on lock screen: FAQ

Posted by Tyler Beneke – Product Manager, and Lucas Silva – Software Engineer

Widgets are now available on your Pixel Tablet lock screens! Lock screen widgets empower users to create a personalized, always-on experience. Whether you want to easily manage smart home devices like lights and thermostats, or build dashboards for quick access and control of vital information, this blog post will answer your key questions about lock screen widgets on Android. Read on to discover when, where, how, and why they'll be on a lock screen near you.

Lock screen widgets
Lock screen widgets in clockwise order: Clock, Weather, Stocks, Timers, and the Google Home app. In the top right is a customization call-to-action.

Q: When will lock screen widgets be available?

A: Lock screen widgets will be available in AOSP for tablets and mobile starting with the release after Android 16 (QPR1). This update is scheduled to be pushed to AOSP in late Summer 2025. Lock screen widgets are already available on Pixel Tablets.

Q: Are there any specific requirements for widgets to be allowed on the lock screen?

A: No, widgets on the lock screen must meet the same requirements as any other widget. They should follow the same quality guidelines as home screen widgets, including sizing and configuration. If a widget launches an activity from the lock screen, users must authenticate before the activity launches, or the activity must declare android:showWhenLocked="true" in its manifest entry.
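As a sketch of that manifest entry, a hypothetical detail activity that should be viewable without unlocking could declare the attribute like this (the activity name is illustrative):

```xml
<!-- AndroidManifest.xml: WidgetDetailActivity is a hypothetical activity
     that may be shown over the lock screen without authentication -->
<activity
    android:name=".WidgetDetailActivity"
    android:exported="false"
    android:showWhenLocked="true" />
```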

Q: How can I test my widget on the lock screen?

A: Currently, lock screen widgets can be tested on Pixel Tablet devices. You can enable lock screen widgets and add your widget.

Q: Which widgets can be displayed in this experience?

A: All widgets are compatible with the lock screen widget experience. To prioritize user choice and customization, we've made all widgets available. For the best experience, please make sure your widget supports dynamic color and dynamic resizing. Lock screen widgets are sized to approximately 4 cells wide by 3 cells tall on the launcher, but exact dimensions vary by device.
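As a sketch, a widget that supports dynamic resizing might declare flexible sizing in its appwidget info XML; the attribute values and file names here are illustrative, so check the widget quality guidelines for recommended sizes:

```xml
<!-- res/xml/my_widget_info.xml (hypothetical): flexible sizing so the
     launcher and lock screen can resize the widget as needed -->
<appwidget-provider
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:minWidth="250dp"
    android:minHeight="110dp"
    android:targetCellWidth="4"
    android:targetCellHeight="3"
    android:resizeMode="horizontal|vertical"
    android:initialLayout="@layout/my_widget" />
```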

Q: Can my widget opt-out of the experience?

A: Yes. Apps can restrict the use of their widgets on the lock screen using an opt-out API. To opt out, add the widget category "not_keyguard" to your appwidget info XML file, and place this file in an xml-36 resource folder to ensure backwards compatibility.
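A minimal sketch of the opt-out, assuming a widget that should remain available on the home screen; the file and layout names are illustrative:

```xml
<!-- res/xml-36/my_widget_info.xml (hypothetical name): keep the widget
     off the lock screen while leaving it available on the home screen -->
<appwidget-provider
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:widgetCategory="home_screen|not_keyguard"
    android:initialLayout="@layout/my_widget" />
```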

Q: Are there any CDD requirements specifically for lock screen widgets?

A: No, there are no specific CDD requirements solely for lock screen widgets. However, it's crucial to ensure that any widgets and screensavers that integrate with the framework adhere to the standard CDD requirements for those features.

Q: Will lock screen widgets be enabled on existing devices?

A: Yes, lock screen widgets were launched on the Pixel Tablet in 2024. Other device manufacturers may update their devices as well once the feature is available in AOSP.

Q: Does the device need to be docked to use lock screen widgets?

A: The mechanism that triggers the lock screen widget experience is customizable by the OEM. For example, OEMs can choose to use charging or docking status as triggers. Third-party OEMs will need to implement their own posture detection if desired.

Q: Can OEMs set their own default widgets?

A: Yes! Hardware providers can pre-set and automatically display default widgets.

Q: Can OEMs customize the user interface for lock screen widgets?

A: Customization of the lock screen widget user interface by OEMs is not supported in the initial release. All lock screen widgets will have the same developer experience on all devices.

Lock screen widgets are poised to give your users new ways to interact with your app on their devices. Today you can leverage your existing widget designs and experiences on the lock screen with Pixel Tablets. To learn more about building widgets, please check out our resources on developer.android.com.


This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.

Empowering San Antonio’s underrepresented communities

Google Fiber’s Community Connections program provides gigabit internet service to local nonprofits to help them serve their constituencies and meet their organizational goals. Today, Empower House, the first GFiber Community Connection in San Antonio, shares how access to high-speed internet is helping the nonprofit foster an environment of empowerment and equity for people of all backgrounds.  

Thumbnail

With rising socioeconomic challenges in East San Antonio, a diverse group of women founded the Martinez Street Women’s Center to improve reproductive health services and create a space for marginalized women and girls to access the resources and education they need to succeed. 

Twenty-six years later, with a second location on the southeast side of the city and a third on the west side, we are now called Empower House and have expanded to meet the needs of our diverse community — all through the lens of restorative justice. Through community health, youth programming and advocacy opportunities, we have advanced our collective journey toward justice together with those most impacted by systemic inequity. 


Empower House believes in the strength that shared information and education offer; coupled with access to resources, we provide opportunities for the empowerment of our community. We shared that vision with the Esperanza Peace and Justice Center, and in 2017 added a music and talk radio station, KXEP 101.5 FM The Empower House. Hosting six weekly shows as part of the Pacifica Network, KXEP has been a thriving community connector, sharing and amplifying inspiring stories from women of color, LGBTQ+ individuals and other traditionally marginalized communities. 


As our radio station continues to be a centerpiece of our community content creation, it was imperative to find innovative solutions to grow production. We recently built out a radio station and student studio to expand and modernize our space, providing our radio hosts with the resources they need to strengthen their programming. 


The renovation was enabled in part through GFiber’s Community Connections program, which paid for the technology of the new student studio and generously installed 1 Gig network service for free. We are grateful and humbled to be the first recipient of GFiber’s Community Connections program in San Antonio. 


Through GFiber’s high-speed connection, we’re able to work faster and smarter. Our radio studio has produced 14 local radio shows. We also added an on-demand feature on our website and a radio app that makes our programming more accessible on the go. Our online audience continues to grow, with more than 17,000 listeners who have downloaded more than 900 episodes from our website to date. 

With GFiber’s support, we outfitted our Student Studio, enabling us to teach students of all ages the skills necessary to harness technology for storytelling, recording and editing. The voices of the most marginalized among us have traditionally been underrepresented in radio, and our rich history and culture deserve to be uplifted and highlighted by those of us who live it.


Additionally, GFiber provides 1 Gig service to our Digital Equity Center, which serves as a free WiFi hub for the community, with laptops and printers available for public use. 

Today, our extensive programming has served thousands of San Antonians, creating transformative change for future generations of families. And with the support of GFiber, we will continue to carry out our mission for decades to come. 

To learn more about our radio programming, please visit https://empowerhousesa.org/radio/.  

Posted by Becca Najera, Assistant Director, Empower House SA





CalCam: Transforming Food Tracking with the Gemini API

CalCam, a calorie-tracking app, uses the Gemini API to analyze meal photos, providing users with fast and accurate nutritional information. Polyverse, CalCam's creator, highlights that the Gemini API's speed, accuracy, and structured JSON output are crucial to CalCam's seamless user experience and efficient development, allowing for easy integration and detailed food analysis.

Use Gemini in the side panel of Google Slides in seven new languages

What’s changing

Beginning today, you can use Gemini in the side panel of Google Slides, which includes the ability to generate images, in the following seven new languages: 
  • French 
  • German 
  • Italian 
  • Japanese 
  • Korean 
  • Portuguese 
  • Spanish 
With Gemini in the side panel of your Workspace apps, you can get help summarizing, brainstorming, and generating content by utilizing insights gathered from your emails, documents, and more—all without switching applications or tabs. Check out our original announcements for Gemini in the side panel of Slides, Docs, Sheets, and Drive, and Gmail for even more information. 


Additional details 

  • Users may see the “Alpha” badge as we bring more features into Gemini in the side panel of Google Workspace. 
  • Image generation of people is not supported in these additional languages at this time. 

Getting started 

  • Admins: The default setting for Gemini features in Workspace services is on. See how you can manage access to AI features in Workspace services. 
  • End users: 
    • Gemini in the side panel will work according to the language you set in your Google account (myaccount.google.com/language). If you’re accessing other Gemini for Google Workspace features that are supported in English only, you will need to set your Google Account language to English. 
    • You can access the side panel by clicking on “Ask Gemini” (spark button) in the top right corner of Slides on the web. Visit the Help Center to learn more about collaborating with Gemini in the side panel of Slides.

Rollout pace 

Availability 

Available to Google Workspace: 
  • Business Standard and Plus 
  • Enterprise Standard and Plus 
  • Customers with the Gemini Education or Gemini Education Premium add-on 
  • Customers with the Gemini Business or Gemini Enterprise add-on* 
*As of January 15, 2025, we’re no longer offering the Gemini Business and Gemini Enterprise add-ons for sale. Please refer to this announcement for more details. 

Resources

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 135 (135.0.7049.4) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Chrome Release Team
Google Chrome

Common media processing operations with Jetpack Media3 Transformer

Posted by Nevin Mital – Developer Relations Engineer, and Kristina Simakova – Engineering Manager

Android users have demonstrated an increasing desire to create, personalize, and share video content online, whether to preserve their memories or to make people laugh. As such, media editing is a cornerstone of many engaging Android apps, and historically developers have often relied on external libraries to handle operations such as trimming and resizing. While these solutions are powerful, integrating and managing external library dependencies can introduce complexity and lead to challenges with managing performance and quality.

The Jetpack Media3 Transformer APIs offer a native Android solution that streamlines media editing with fast performance, extensive customizability, and broad device compatibility. In this blog post, we’ll walk through some of the most common editing operations with Transformer and discuss its performance.

Getting set up with Transformer

To get started with Transformer, check out our Getting Started documentation for details on how to add the dependency to your project and a basic understanding of the workflow when using Transformer. In a nutshell, you’ll:

    • Create one or more MediaItem instances from your video file(s), then
    • Apply item-specific edits to them by building an EditedMediaItem for each MediaItem,
    • Create a Transformer instance configured with settings that apply to the whole exported video, and
    • Start the export to save your applied edits to a file.
Aside: You can also use a CompositionPlayer to preview your edits before exporting, but this is out of scope for this blog post, as this API is still a work in progress. Please stay tuned for a future post!

Here’s what this looks like in code:

val mediaItem = MediaItem.Builder().setUri(mediaItemUri).build()
val editedMediaItem = EditedMediaItem.Builder(mediaItem).build()
val transformer = 
  Transformer.Builder(context)
    .addListener(/* Add a Transformer.Listener instance here for completion events */)
    .build()
transformer.start(editedMediaItem, outputFilePath)
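For completeness, here is a minimal sketch of the listener mentioned in the snippet above. The overridden callbacks are part of Transformer.Listener; the log messages and the reuse of outputFilePath are illustrative:

```kotlin
// A minimal Transformer.Listener sketch: log success or failure of the export.
val listener = object : Transformer.Listener {
    override fun onCompleted(composition: Composition, exportResult: ExportResult) {
        // The export finished successfully; the output file is ready to use.
        Log.d("TransformerDemo", "Export completed: $outputFilePath")
    }

    override fun onError(
        composition: Composition,
        exportResult: ExportResult,
        exportException: ExportException
    ) {
        // The export failed; inspect the exception for the cause.
        Log.e("TransformerDemo", "Export failed", exportException)
    }
}
```

Pass this instance to addListener() when building the Transformer.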

Transcoding, Trimming, Muting, and Resizing with the Transformer API

Let’s now take a look at four of the most common single-asset media editing operations, starting with Transcoding.

Transcoding is the process of re-encoding an input file into a specified output format. For this example, we’ll request the output to have video in HEVC (H265) and audio in AAC. Starting with the code above, here are the lines that change:

val transformer = 
  Transformer.Builder(context)
    .addListener(...)
    .setVideoMimeType(MimeTypes.VIDEO_H265)
    .setAudioMimeType(MimeTypes.AUDIO_AAC)
    .build()

Many of you may already be familiar with FFmpeg, a popular open-source library for processing media files, so we’ll also include FFmpeg commands for each example to serve as a helpful reference. Here’s how you can perform the same transcoding with FFmpeg:

$ ffmpeg -i $inputVideoPath -c:v libx265 -c:a aac $outputFilePath

The next operation we’ll try is Trimming.

Specifically, we’ll set Transformer up to trim the input video from the 3 second mark to the 8 second mark, resulting in a 5 second output video. Starting again from the code in the “Getting set up” section above, here are the lines that change:

// Configure the trim operation by adding a ClippingConfiguration to
// the media item
val clippingConfiguration =
   MediaItem.ClippingConfiguration.Builder()
     .setStartPositionMs(3000)
     .setEndPositionMs(8000)
     .build()
val mediaItem =
   MediaItem.Builder()
     .setUri(mediaItemUri)
     .setClippingConfiguration(clippingConfiguration)
     .build()

// Transformer also has a trim optimization feature we can enable.
// This will prioritize Transmuxing over Transcoding where possible.
// See more about Transmuxing further down in this post.
val transformer = 
  Transformer.Builder(context)
    .addListener(...)
    .experimentalSetTrimOptimizationEnabled(true)
    .build()

With FFmpeg:

$ ffmpeg -ss 00:00:03 -i $inputVideoPath -t 00:00:05 $outputFilePath

Next, we can mute the audio in the exported video file.

val editedMediaItem = 
  EditedMediaItem.Builder(mediaItem)
    .setRemoveAudio(true)
    .build()

The corresponding FFmpeg command:

$ ffmpeg -i $inputVideoPath -c copy -an $outputFilePath

And for our final example, we’ll try resizing the input video by scaling it down to half its original height and width.

val scaleEffect = 
  ScaleAndRotateTransformation.Builder()
    .setScale(0.5f, 0.5f)
    .build()
val editedMediaItem =
  EditedMediaItem.Builder(mediaItem)
    .setEffects(
      Effects(
        /* audioProcessors= */ emptyList(),
        /* videoEffects= */ listOf(scaleEffect)
      )
    )
    .build()

An FFmpeg command could look like this:

$ ffmpeg -i $inputVideoPath -filter:v "scale=w=trunc(iw/4)*2:h=trunc(ih/4)*2" $outputFilePath

Of course, you can also combine these operations to apply multiple edits on the same video, but hopefully these examples serve to demonstrate that the Transformer APIs make configuring these edits simple.

Transformer API Performance results

Here are some benchmarking measurements for each of the four operations, taken with the Stopwatch API running on a Pixel 9 Pro XL device:

(Note that performance for operations like these can depend on a variety of factors, such as the current load the device is under, so the numbers below should be taken as rough estimates.)

Input video format: 10s 720p H264 video with AAC audio

  • Transcoding to H265 video and AAC audio: ~1300ms
  • Trimming video to 00:03-00:08: ~2300ms
  • Muting audio: ~200ms
  • Resizing video to half height and width: ~1200ms

Input video format: 25s 360p VP8 video with Vorbis audio

  • Transcoding to H265 video and AAC audio: ~3400ms
  • Trimming video to 00:03-00:08: ~1700ms
  • Muting audio: ~1600ms
  • Resizing video to half height and width: ~4800ms

Input video format: 4s 8k H265 video with AAC audio

  • Transcoding to H265 video and AAC audio: ~2300ms
  • Trimming video to 00:03-00:08: ~1800ms
  • Muting audio: ~2000ms
  • Resizing video to half height and width: ~3700ms

One technique Transformer uses to speed up editing operations is prioritizing transmuxing for basic video edits where possible. Transmuxing refers to the process of repackaging video streams without re-encoding, which ensures high-quality output and significantly faster processing times.

When not possible, Transformer falls back to transcoding, a process that involves first decoding video samples into raw data, then re-encoding them for storage in a new container. Here are some of these differences:

Transmuxing

    • Transformer’s preferred approach when possible - a quick transformation that preserves elementary streams.
    • Only applicable to basic operations, such as rotating, trimming, or container conversion.
    • No quality loss or bitrate change.

Transcoding

    • Transformer's fallback approach when transmuxing isn't possible - involves decoding and re-encoding elementary streams.
    • More extensive modifications to the input video are possible.
    • Loss in quality due to re-encoding, but can achieve a desired bitrate target.

We are continuously implementing further optimizations, such as the recently introduced experimentalSetTrimOptimizationEnabled setting that we used in the Trimming example above.

A trim is usually performed by re-encoding all the samples in the file, but since encoded media samples are stored chronologically in their container, we can improve efficiency by only re-encoding the group of pictures (GOP) between the start point of the trim and the first keyframe at/after that point, then stream-copying the rest.

Since we only decode and re-encode a fixed portion of any file, the encoding latency is roughly constant regardless of the input video's duration. For long videos, this latency improvement is dramatic. The optimization relies on being able to stitch part of the input file together with newly encoded output, which means the encoder's output format and the input format must be compatible.

If the optimization fails, Transformer automatically falls back to normal export.

What’s next?

As part of Media3, Transformer is a native solution with low integration complexity, is tested on and ensures compatibility with a wide variety of devices, and is customizable to fit your specific needs.

To dive deeper, you can explore the Media3 Transformer documentation, run our sample apps, or learn how to complement your media editing pipeline with Jetpack Media3. We’ve already seen app developers benefit greatly from adopting Transformer, so we encourage you to try it out yourself to streamline your media editing workflows and enhance your app’s performance!

Chrome Beta for Desktop Update

The Chrome team is excited to announce the promotion of Chrome 135 to the Beta channel for Windows, Mac and Linux. Chrome 135.0.7049.3 contains our usual under-the-hood performance and stability tweaks, but there are also some cool new features to explore - please head to the Chromium blog to learn more!

A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Chrome Release Team
Google Chrome

Dev Channel Update for ChromeOS / ChromeOS Flex

The Dev channel is being updated to OS version 16209.5.0 (Browser version 135.0.7049.0) for most ChromeOS devices.
If you find new issues, please let us know by filing a bug.
Alon Bajayo,
Google ChromeOS Release Team