Tag Archives: #TheAndroidShow

#TheAndroidShow: Multimodal for Gemini in Android Studio, news for gaming devs, the latest devices at MWC, XR and more!

Posted by Anirudh Dewani – Director, Android Developer Relations

We just dropped our Winter episode of #TheAndroidShow, on YouTube and on developer.android.com, and this time we were in Barcelona to give you the latest from Mobile World Congress and across the Android developer world. We unveiled a big update to Gemini in Android Studio (multimodal support, so you can translate images to code) and we shared some news for game developers ahead of GDC later this month. Plus, we unpacked the latest Android hardware devices from our partners coming out of Mobile World Congress and recapped all of the latest in Android XR. Let’s dive in!


Multimodal image-to-code, now available for Gemini in Android Studio

At every stage of the development lifecycle, Gemini in Android Studio has become your AI-powered companion. Today, we took the wraps off a new feature: Gemini in Android Studio now supports multimodal image-to-code, which lets you attach images directly to your prompts! This unlocks a wealth of new possibilities that improve collaboration and design workflows. You can try out this new feature by downloading the latest canary, Android Studio Narwhal, and read more about multimodal image attachment, now available for Gemini in Android Studio.

Building excellent games with better graphics and performance

Ahead of next week’s Game Developers Conference (GDC), we announced new developer tools that will help improve gameplay across the Android ecosystem. We're making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we're enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplay sessions. Learn more about how we're building excellent games with better graphics and performance.


A deep dive into Android XR

Since we unveiled Android XR in December, it's been exciting to see developers preparing their apps for the next generation of Android XR devices. In the latest episode of #TheAndroidShow we dove into this new form factor and spoke with a developer who has already been building. Developing for this new platform leverages your existing Android development skills and familiar tools like Android Studio, Kotlin, and Jetpack libraries. The Android XR SDK Developer Preview is available now, complete with an emulator, so you can start experimenting and building XR experiences immediately! Visit developer.android.com/xr for more.


New Android foldables and tablets, at Mobile World Congress

Mobile World Congress is a big moment for Android, with partners from around the world showing off their latest devices. And if you’re already building adaptive apps, we wanted to share some of the cool new foldables and tablets that our partners released in Barcelona:

    • OPPO: OPPO launched the Find N5, their slim 8.93mm foldable with an 8.12” large screen, making it as compact or expansive as needed.
    • Xiaomi: Xiaomi debuted the Xiaomi Pad 7 series. Xiaomi Pad 7 provides a crystal-clear display and, with the productivity accessories, users get a desktop-like experience with the convenience of a tablet.
    • Lenovo: Lenovo showcased their Yoga Tab Plus, the latest powerful tablet from their lineup designed to empower creativity and productivity.

These new devices are a great reason to build adaptive apps that scale across screen sizes and device types. Plus, Android 16 removes the ability for apps to restrict orientation and resizability at the platform level, so you’ll want to prepare. To help you get started, the Compose Material 3 adaptive library enables you to quickly and easily create layouts across all screen sizes while reducing the overall development cost.
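To make this concrete, here is a minimal sketch of a window-size-aware layout using the Compose Material 3 adaptive library (`androidx.compose.material3.adaptive`); the composable names `ListPane` and `ListDetailPane` are illustrative placeholders, not library APIs:

```kotlin
import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowWidthSizeClass

@Composable
fun AdaptiveHome() {
    // Query the current window size class rather than the device type,
    // so the same code adapts to phones, foldables, and tablets.
    val windowSizeClass = currentWindowAdaptiveInfo().windowSizeClass
    when (windowSizeClass.windowWidthSizeClass) {
        // Phones and folded foldables: a single-pane list
        WindowWidthSizeClass.COMPACT -> ListPane()
        // Unfolded foldables and tablets: list and detail side by side
        else -> ListDetailPane()
    }
}

@Composable fun ListPane() { /* single-pane UI */ }
@Composable fun ListDetailPane() { /* two-pane UI */ }
```

Because the branch is driven by window size rather than a hardcoded device check, the layout also responds correctly to multi-window and fold/unfold transitions.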


Watch the Winter episode of #TheAndroidShow

That’s a wrap on this quarter’s episode of #TheAndroidShow. A special thanks to our co-hosts, Simona Milanović and Alejandra Stamato! You can watch the full show on YouTube and on developer.android.com/events/show.

Have an idea for our next episode of #TheAndroidShow? It’s your conversation with the broader community, and we’d love to hear your ideas for our next quarterly episode - you can let us know on X or LinkedIn.

New devices at MWC, gaming news, XR & Gemini in Android Studio: Tune in for our winter episode of #TheAndroidShow on March 13!

Posted by Anirudh Dewani, Director – Android Developer Relations

In just a few days, on Thursday, March 13 at 10AM PT, we’ll be dropping our winter episode of #TheAndroidShow, on YouTube and on developer.android.com!

Mobile World Congress, the annual event in Barcelona where Android device makers show off their latest devices, kicked off yesterday. In our winter episode we’ll take a look at these foldables, tablets and wearables and tell you what you need to get building.

Plus, we’ve got some news to share, like a new update for Gemini in Android Studio and some new goodies for game developers ahead of the Game Developers Conference (GDC) in San Francisco later this month. And of course, with the launch of Android XR in December, we’ll also be taking a look at how to get building there. It’s a packed show, and you don’t want to miss it!

Some new Android foldables and tablets, at Mobile World Congress

Mobile World Congress is a big moment for Android, with partners from around the world showing off their latest devices. And if you’re already building adaptive apps, we wanted to share some of the cool new foldables and tablets that our partners released in Barcelona:

    • OPPO: OPPO launched the Find N5, their slim 8.93mm foldable with an 8.12” large screen, making it as compact or expansive as needed.
    • Xiaomi: Xiaomi debuted the Xiaomi Pad 7 series. Xiaomi Pad 7 provides a crystal-clear display and, with the productivity accessories, users get a desktop-like experience with the convenience of a tablet.
    • Lenovo: Lenovo showcased their Yoga Tab Plus, the latest powerful tablet from their lineup designed to empower creativity and productivity.

These new devices are a great reason to build adaptive apps that scale across screen sizes and device types. Plus, Android 16 removes the ability for apps to restrict orientation and resizability at the platform level, so you’ll want to prepare. To help you get started, the Compose Material 3 adaptive library enables you to quickly and easily create layouts across all screen sizes while reducing the overall development cost.

Tune in to #TheAndroidShow: March 13 at 10AM PT

These new devices are just one of the many things we’ll cover in our winter episode, so you don’t want to miss it! If you watch live on YouTube, we’ll have folks standing by to answer your questions in the comments. See you on March 13 on YouTube or at developer.android.com/events/show!

#TheAndroidShow: live from Droidcon, including the biggest update to Gemini in Android Studio and more SDK releases for Android!

Posted by Matthew McCullough – Vice President, Product Management, Android Developer

We just dropped our Fall episode of #TheAndroidShow, on YouTube and on developer.android.com, and this time we are live from Droidcon in London, giving you the latest in Android developer news, including the biggest update to Gemini in Android Studio, as well as sharing that there will be more frequent SDK releases for Android, including two next year. Let’s dive in!



Gemini in Android Studio: now helping you at every stage of the development cycle

AI has the ability to accelerate your development experience, and help you be more productive. That's why we introduced Gemini in Android Studio, your AI-powered development companion, designed to make it easier and faster for you to build high-quality Android apps. Today, we're launching the biggest set of updates to Gemini in Android Studio since launch: now for the first time, Gemini brings the power of AI with features at every stage of the development lifecycle, directly into your Android Studio IDE experience.



More frequent Android SDK releases starting next year

Android has always worked to get innovation in the hands of users faster. In addition to our annual platform releases, we’ve invested in Project Treble, Mainline, Google Play services, monthly security updates, and the quarterly releases that help power Pixel's popular feature drop updates. Building on the success those quarterly Pixel releases have had in bringing innovation faster to Pixel users, Android will have more frequent SDK releases going forward, with two releases planned in 2025 with new developer APIs. These releases will help to drive faster innovation in apps and devices, with higher stability and polish for users and developers. Stay informed on upcoming releases for the 2025 calendar.



Make the investment in adaptive for large screens: 20% increased app spend

Your users, especially in the premium segment, don’t just buy a phone anymore, they buy into a whole ecosystem of devices. So the experiences you build should follow your users seamlessly across the many screens they own. Take large screens, for instance – foldables, tablets, ChromeOS devices: there are now over 300 million active Android large-screen devices. This summer, Samsung released their new foldables - the Galaxy Z Fold6 and Z Flip6, and at Google we released our own - the Pixel 9 Pro Fold. We’re also investing in a number of platform features to improve how users interact with these devices, like the developer preview of Desktop Windowing that we’ve been working on in collaboration with Samsung - optimizing these large screen devices for productivity. High quality apps optimized for large screens have several advantages on Play as well: like improved visibility in the Play Store and eligibility for featuring in curated collections and editorial articles. Apps now get separate ratings and reviews for different form factors, making positive feedback more visible.

And it’s paying off for those that make the investment: we’ve seen that using a tablet, flip, or fold increases app spend by ~20%. Flipaclip is proof of this: they’ve seen a 54% growth in tablet users in the past four months. It has never been easier to build for large screens - with Compose APIs and Android Studio support specifically for building adaptive UIs.



Kotlin Multiplatform for sharing business logic across Android and iOS

Many of you build apps for multiple platforms, requiring you to write platform-specific code or make compromises in order to reuse code across platforms. We’ve seen the most value in reducing duplicated code for business logic. So earlier this year, we announced official support for Kotlin Multiplatform (KMP) for shared business logic across Android and iOS. KMP, developed by JetBrains, reduces development time and duplicated code, while retaining the flexibility and benefits of native programming.

At Google, we’ve been migrating Workspace apps, starting with the Google Docs app, to use KMP for shared business logic across Android, iOS and Web. In the community there are a growing number of companies using KMP and getting significant benefits. And it’s not just apps - we’ve seen a 30% increase in the number of KMP libraries developed this year.

To make it easier for you to leverage KMP in your apps, we’ve been working on migrating many of our Jetpack libraries to take advantage of KMP. For example, Lifecycle, ViewModel, and Paging are KMP compatible libraries. Meanwhile, libraries like Room, DataStore, and Collections have KMP support, so they work out-of-the-box on Android and iOS. We’ve also added a new template to Android Studio so you can add a shared KMP module to your existing Android app and begin sharing business logic across platforms. Kickstart your Kotlin Multiplatform journey with this comprehensive guide.
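As a rough illustration of the shared-module pattern, here is a minimal expect/actual sketch; the file names and `platformName` function are hypothetical examples, not part of any template:

```kotlin
// commonMain/kotlin/Greeting.kt
// Shared business logic lives in commonMain; only the truly
// platform-specific pieces are declared with `expect`.
expect fun platformName(): String

class Greeting {
    fun greet(): String = "Hello from ${platformName()}!"
}

// androidMain/kotlin/Greeting.android.kt
// The Android source set supplies the `actual` implementation.
actual fun platformName(): String = "Android"

// iosMain/kotlin/Greeting.ios.kt
// And the iOS source set supplies its own.
actual fun platformName(): String = "iOS"
```

Everything except the two one-line `actual` functions is written once and compiled for both platforms, which is where the reduction in duplicated business logic comes from.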


Watch the Fall episode of #TheAndroidShow

That’s a wrap on this quarter’s episode of #TheAndroidShow. A special thanks to our co-hosts for the Fall episode, Simona Milanović and Alejandra Stamato! You can watch the full show on YouTube and on developer.android.com/events/show.

Have an idea for our next episode of #TheAndroidShow? It’s your conversation with the broader community, and we’d love to hear your ideas for our next quarterly episode - you can let us know on X or LinkedIn.

Set a reminder: tune in for our Fall episode of #TheAndroidShow on October 31, live from Droidcon!

Posted by Anirudh Dewani – Director, Android Developer Relations

In just a few days, on Thursday, October 31st at 10AM PT, we’ll be dropping our Fall episode of #TheAndroidShow, on YouTube and on developer.android.com!

In our quarterly show, this time we’ll be live from Droidcon in London, giving you the latest in Android Developer news with demos of Jetpack Compose and more. You can set a reminder to watch the livestream on YouTube, or click here to add to your calendar.


In our Fall episode, we’ll be taking the lid off the biggest update to Gemini in Android Studio, so you don’t want to miss out! We also had a number of recent wearable, foldable and large screen device launches and updates, and we’ll be unpacking what you need to know to get building for these form factors.

Get your #AskAndroid questions answered live!

And we’ve assembled a team of experts from across Android to answer your #AskAndroid questions on building excellent apps, across devices - share your questions now and tune in to see if they are answered live on the show!

#TheAndroidShow is your conversation with the Android developer community, this time hosted by Simona Milanović and Alejandra Stamato. You'll hear the latest from the developers and engineers who build Android. Don’t forget to tune in live on October 31 at 10AM PT, live on YouTube and on developer.android.com/events/show!

Instagram’s early adoption of Ultra HDR transforms user experience in only 3 months

Posted by Mayuri Khinvasara Khabya – Developer Relations Engineer, Google; in partnership with Bismark Ito - Android Developer, Rex Jin - Android Developer and Bei Yi - Partner Engineering

Meta’s Instagram is one of the world's most popular social networking apps that helps people connect, find communities, and grow their businesses in new and innovative ways. Since its release in 2010, photographers and creators alike have embraced the platform, making it a go-to hub of artistic expression and creativity.

Instagram developers saw an opportunity to build a richer media experience by becoming an early adopter of the Ultra HDR image format, a new feature introduced with Android 14. With its adoption of Ultra HDR, Instagram completely transformed and improved its user experience in just 3 months.

Enhancing Instagram photo quality with Ultra HDR

The development team wanted to be an early adopter of Ultra HDR because photos and videos are Instagram's most important form of interaction and expression, and improving image quality aligns with Meta’s goal of connecting people, communities, and businesses. “Android rapidly adopts the latest media technology so that we can bring the benefits to users,” said Rex Jin, an Android developer on the Instagram Media Platform team.

Instagram developers started implementing Ultra HDR in late September 2023. Ultra HDR images store more information about light intensity, for more detailed highlights, deeper shadows, and crisper colors. The format also enables capturing, editing, sharing, and viewing HDR photos, a significant improvement over standard dynamic range (SDR) photos, while still being backward compatible. Users can seamlessly post, view, edit, and apply filters to Ultra HDR photos without compromising image quality.
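The backward compatibility works because an Ultra HDR file is an ordinary JPEG with an embedded gain map. A minimal sketch of displaying one on Android 14+ might look like this (the function and parameter names are illustrative; `hasGainmap()` and `COLOR_MODE_HDR` are framework APIs):

```kotlin
import android.app.Activity
import android.content.pm.ActivityInfo
import android.graphics.BitmapFactory
import android.os.Build
import android.widget.ImageView

fun Activity.showUltraHdrPhoto(bytes: ByteArray, imageView: ImageView) {
    val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
    if (Build.VERSION.SDK_INT >= 34 && bitmap.hasGainmap()) {
        // Opt the window into HDR color mode; the gain map embedded in
        // the Ultra HDR JPEG is applied automatically when composited.
        window.colorMode = ActivityInfo.COLOR_MODE_HDR
    }
    // On older devices the same bytes decode as a plain SDR JPEG.
    imageView.setImageBitmap(bitmap)
}
```

Because the gain map rides along inside the JPEG, the same file serves both HDR-capable and SDR-only devices with no extra server-side work.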

Since the update, Instagram has seen a large surge in Ultra HDR photo uploads. Users have also embraced their new ability to edit up to 10 Ultra HDR images simultaneously and share photos that retain the full color and dynamic camera capture range. Instagram’s pioneering integration of Ultra HDR earned industry-wide recognition and praise when it was announced at Samsung Unpacked and in a Pixel Feature Drop.

Image sharing is how Instagram started and we want to ensure we always provide the best and greatest image quality to users and creators. — Bei Yi, Partner Engineering at Meta

Pioneering Ultra HDR integrations

Being early adopters of Android 14 meant working with beta versions of the operating system and addressing the challenges associated with implementing a brand-new feature that’s never been tested publicly. For example, Instagram developers needed to find innovative solutions to handle the expanded color space and larger file sizes of Ultra HDR images while maintaining compatibility with Instagram's diverse editing features and filters.

The team found solutions during the development process by using code examples for HDR photo capture and rendering. Instagram also partnered with Google's Android Camera & Media team to address the challenges of displaying Ultra HDR images, share its developer experience, and provide feedback during integration. The partnership helped speed up the integrations, and the feedback shared was implemented faster.

“With Android being an open source project, we can build more optimized media solutions with better performance on Instagram,” said Bismark Ito, an Android developer at Instagram. “I feel accomplished when I find a creative solution that works on a range of devices with different hardware capabilities.”

UI image of an uploaded Instagram post that was taken using Ultra HDR

Building for the future with Android 15

Ultra HDR has significantly enhanced Instagram’s photo-sharing experience, and Meta is already planning to expand support to more devices and add future image and video quality improvements. With the upcoming Android 15 release, the company plans to explore new APIs and features that amplify its mission of connecting people, communities, and businesses.

As the Ultra HDR development process showed, being the first to adopt a new feature involves navigating new challenges to give users the best possible experience. However, collaborating with Google teams and Android’s open source community can help make the process smoother.

Get started

Learn how to revolutionize your app’s user experience with Ultra HDR images.

The Recorder app on Pixel sees a 24% boost in engagement with Gemini Nano-powered feature

Posted by Terence Zhang – Developer Relations Engineer and Kristi Bradford - Product Manager

Google Pixel’s Recorder app allows people to record, transcribe, save, and share audio. To make it easier for users to manage and revisit their recordings, Recorder’s developers turned to Gemini Nano, a powerful on-device large language model (LLM). This integration introduces an AI-powered audio summarization feature to help users more easily find the right recordings and quickly grasp key points.

Earlier this month, Gemini Nano got a power boost with the introduction of the new Gemini Nano with Multimodality model. The Recorder app is already leveraging this upgrade to summarize longer voice recordings, with improved processing for grammar and nuance.

Meeting user needs with on-device AI

Recorder developers initially experimented with a cloud-based solution, achieving impressive levels of performance and quality. However, to prioritize accessibility and privacy for their users, they sought an on-device solution. The development of Gemini Nano presented a perfect opportunity to build the concise audio summaries users were looking for, all while keeping data processing on the device.

Gemini Nano is Google’s most efficient model for on-device tasks. “Having the LLM on-device is beneficial to users because it provides them with more privacy, less latency, and it works wherever they need since there’s no internet required,” said Kristi Bradford, the product manager for Pixel’s essential apps.

To achieve better results, Recorder also fine-tuned the model using data that matches its use case. This is done using low-rank adaptation (LoRA), which enables Gemini Nano to consistently output three-bullet-point descriptions of the transcript that include any speaker names, key takeaways, and themes.

AICore, an Android system service that centralizes runtime, delivery, and critical safety components for LLMs, significantly streamlined Recorder's adoption of Gemini Nano. The availability of a developer SDK for running GenAI workloads allowed the team to build the transcription summary feature in just four months, with only four developers. This efficiency was achieved by eliminating the need for maintaining in-house models.

Since its release, Recorder users have been using the new AI-powered summarization feature an average of 2 to 5 times daily, and the number of overall saved recordings has increased by 24%. This feature has contributed to a significant increase in app engagement and user retention overall. The Recorder team also noted that feedback about the new feature has been positive, with many users citing the time the new AI-powered summarization feature saves them.

“We were surprised by how truly capable the model was… before and after LoRA tuning.” — Kristi Bradford, product manager for Pixel’s essential apps

The next big evolution: Gemini Nano with multimodality

Recorder developers also implemented the latest Gemini Nano model, known as Gemini Nano with multimodality, to further improve its summarization feature on Pixel 9 devices. The new model is significantly larger than the previous one on Pixel 8 devices, and it’s more capable, accurate, and scalable. The new model also has expanded token support that lets Recorder summarize much longer transcripts than before. Gemini Nano with multimodality is currently only available on Pixel 9 devices.

Integrating Gemini Nano with multimodality required another round of fine-tuning. However, Recorder developers were able to use the original Gemini Nano model's fine-tuning dataset as a foundation, streamlining the development process.

To fully leverage the new model's capabilities, Recorder developers expanded their dataset with support for longer voice recordings, implemented refined evaluation methods, and established launch criteria metrics focused on grammar and nuance. The inclusion of grammar as a new metric for assessing inference quality was made possible solely by the enhanced capabilities of Gemini Nano with multimodality.

UI example

Doing more with on-device AI

“Given the novelty of GenAI, the whole team had fun learning how to use it,” said Kristi. “Now, we’re empowered to push the boundaries of what we can accomplish while meeting emerging user needs and opportunities. It’s truly brought a new level of creativity to problem-solving and experimentation. We’ve already demoed at least two more GenAI features that help people get time back internally for early feedback, and we’re excited about the possibilities ahead.”

Get started

Learn more about how to bring the benefits of on-device AI with Gemini Nano to your apps.

#TheAndroidShow: diving into the latest from Made by Google, including wearables, foldables, Gemini and more!

Posted by Anirudh Dewani, Director – Android Developer Relations

We just dropped our summer episode of #TheAndroidShow, on YouTube and on developer.android.com, where we unpacked all of the goodies coming out of this month’s Made by Google event and what you as Android developers need to know. With two new Wear OS 5 watches, we show you how to get building for the wrist. And with the latest foldable from Google, the Pixel 9 Pro Fold, we show how you can leverage out of the box APIs and multi-window experiences to make your apps adaptive for this new form factor.

Building for Pixel 9 Pro Fold with Adaptive UIs

With foldables like the Pixel 9 Pro Fold, users have options for how to engage and multitask based on the display they are using and the folded state of their device. Building apps that adapt based on screen size and device postures allows you to scale your UI for mobile, foldables, tablets and beyond. You can read more about how to get started building for devices like the Pixel 9 Pro Fold, or learn more about building for large screens.
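Device posture can be observed with Jetpack WindowManager. Here is a hedged sketch (the `observeFoldState` function and its callback are illustrative; `WindowInfoTracker` and `FoldingFeature` are the library's APIs):

```kotlin
import androidx.activity.ComponentActivity
import androidx.lifecycle.lifecycleScope
import androidx.window.layout.FoldingFeature
import androidx.window.layout.WindowInfoTracker
import kotlinx.coroutines.launch

fun ComponentActivity.observeFoldState(onTabletop: (Boolean) -> Unit) {
    lifecycleScope.launch {
        WindowInfoTracker.getOrCreate(this@observeFoldState)
            .windowLayoutInfo(this@observeFoldState)
            .collect { layoutInfo ->
                val fold = layoutInfo.displayFeatures
                    .filterIsInstance<FoldingFeature>()
                    .firstOrNull()
                // Half-opened with a horizontal hinge is the
                // "tabletop" posture, where you might split the UI
                // above and below the fold.
                onTabletop(
                    fold?.state == FoldingFeature.State.HALF_OPENED &&
                        fold.orientation == FoldingFeature.Orientation.HORIZONTAL
                )
            }
    }
}
```

Reacting to the fold state this way, instead of checking the device model, keeps the same code working across the whole range of foldables.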

Preparing for Pixel Watch 3: Wear OS 5 and Larger Displays

With Pixel Watch 3 ringing in the stable release of Wear OS 5, there’s never been a better time to prepare your app for the behavior changes from Wear OS 5 and larger screen sizes from Pixel. We covered how to get started building for wearables like Pixel Watch 3, and you can learn more about building for Wear OS.

Gemini Nano, with multi-modality

We also took you behind the scenes with Gemini Nano with multimodality, Google’s latest model for on-device AI. Gemini Nano, the smallest version of the Gemini model family, can be executed on-device on capable Android devices including the latest Pixel 9. We caught up with the team to hear more about how the Pixel Recorder team used Gemini Nano to summarize users’ transcripts of audio recordings, with data remaining on-device.

And some voices from Android devs like you!

Across the show, we heard from some amazing developers building excellent apps, across devices. Like Rex Jin and Bismark Ito, Android Developers at Meta: they told us how the team at Instagram was able to add Ultra HDR in less than three months, dramatically improving the user experience. Later, SAP told us how within 5 minutes, they integrated NavigationSuiteScaffold, swiftly adapting their navigation UI to different window sizes. And AllTrails told us they are seeing 60% higher monthly retention from Wear OS users… pretty impressive!
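For a sense of why SAP's NavigationSuiteScaffold integration was so quick, here is a minimal sketch of the component (the screen content and labels are illustrative placeholders):

```kotlin
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Home
import androidx.compose.material.icons.filled.Settings
import androidx.compose.material3.Icon
import androidx.compose.material3.Text
import androidx.compose.material3.adaptive.navigationsuite.NavigationSuiteScaffold
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableIntStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun AppScaffold() {
    var selected by remember { mutableIntStateOf(0) }
    // The scaffold picks a navigation bar, rail, or drawer for you
    // based on the current window size class.
    NavigationSuiteScaffold(
        navigationSuiteItems = {
            item(
                selected = selected == 0,
                onClick = { selected = 0 },
                icon = { Icon(Icons.Filled.Home, contentDescription = "Home") },
                label = { Text("Home") }
            )
            item(
                selected = selected == 1,
                onClick = { selected = 1 },
                icon = { Icon(Icons.Filled.Settings, contentDescription = "Settings") },
                label = { Text("Settings") }
            )
        }
    ) {
        if (selected == 0) Text("Home screen") else Text("Settings screen")
    }
}
```

You declare the destinations once, and the component handles the bar-versus-rail decision as the window resizes.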


Have an idea for our next episode of #TheAndroidShow? It’s your conversation with the Android developer community, this time hosted by Huyen Tue Dao and John Zoeller. You'll hear the latest from the developers and engineers who build Android. You can watch the full show on YouTube and on developer.android.com/events/show!

#TheAndroidShow: the latest from MWC, Gemini Nano, Android 15 and more!

Posted by Anirudh Dewani, Director of Android Developer Relations


Last week, Android device makers released a slew of new devices, and today we’re unpacking what that means for developers, as well as the latest in Gemini Nano, Android 15, Jetpack Compose and more, in another episode of our quarterly show, #TheAndroidShow:

The latest wearables and foldables – get building!

Android device makers unveiled their latest wearables and foldables last week at Mobile World Congress, and we were on the ground in Barcelona taking a look at those new devices and how you can get started building on top of them. A few of our favorites:

    • Xiaomi Watch 2, the latest smart watch from the Xiaomi team. This device is powered by Wear OS by Google and provides upgraded camera, fitness, and sleep experiences to allow users to get the most from their device.
    • PORSCHE DESIGN HONOR Magic V2 RSR, the world’s thinnest inward foldable smartphone. This is the latest foldable for Android and was designed with the user experience at the forefront, including human-centric eye comfort technology.

Compose is an amazing way to build apps for your users across form factors. Compose for Wear OS and the upcoming adaptive layouts for large screens help devs bring their apps to life with less code, powerful tools, and intuitive APIs. Check out the Wear OS and Large Screen galleries, where you can find UX inspiration and design guidance tailored to your type of app.




Behind the scenes, with Gemini Nano and AICore

With all of the excitement around generative AI, it can be daunting to know where to start. So in today’s show, we’re taking you behind the scenes with Gemini Nano, Google’s most efficient model built for on-device tasks, and AICore, Android’s system service for on-device foundation models. And we’re spotlighting how the team that builds the Recorder app used Gemini Nano to help summarize users’ voice memos on-device and with privacy in mind. And here’s the best part: the team built the feature in a short time with only a small number of engineers.




Now in Android

We celebrated the 100th episode of Now in Android, covering the latest developer news, including:




And that’s a wrap on another episode of our quarterly show, #TheAndroidShow. But the conversation continues on YouTube, X and LinkedIn: tell us your favorite part, or what you’d like us to dive into next time on our quarterly episode. And before we sign off, you can watch the full playlist, with the latest in Android developer news, here.

#TheAndroidShow: Faster and easier to build excellent apps, across devices.

Posted by Anirudh Dewani, Director of Android Developer Relations

We just wrapped another episode of #TheAndroidShow. In the show, we covered the latest in Android development, including a look at the new Pixel Watch and the world of wearables, and gathered the team to demo tools and libraries for building for foldables and large screen devices with Compose, Android 14, Studio Bot, and more. Take a look, and here’s a recap of some of the ways we’re helping make it faster and easier to build excellent apps, across devices:

Studio Bot: improving your productivity, through Generative AI

At Google I/O we gave you a preview of Studio Bot, an AI-powered coding assistant that is tightly integrated into Android Studio, designed to make Android development faster and easier. Last month, Studio Bot expanded into over 170 countries, and today we’re adding even more functionality in the latest canary release to help you be more productive. AI code completion gives you suggestions for more complex completions, such as multiline code or even entire functions. You can also now add comments to your code, and document code with just a click using Studio Bot. We caught up with Jamal Eason to learn about the investments the team is working on, including our improvements in quality.

Faster and easier to build, with Jetpack Compose

Jetpack Compose gives you powerful and intuitive APIs, which make it faster and easier to build UIs. Since Google I/O, we’ve been working on improving performance across Compose to make it even more helpful. Developers around the world are taking advantage of Compose to help them rewrite screens, build new screens, or create new apps. For example, The Reddit team adopted Compose for their design system, which improved their code flexibility and reduced code duplication. They rewrote several features in Compose and their new tech stack, one of them being Reddit Recap, with beautiful animations. They were able to achieve feature parity with 44% less code when they rewrote it using Compose, saving engineering resources and time.

Build across devices, with large screens

Foldables and tablets are an important space - and the market for large screens is growing, with Samsung announcing that half of their users are thinking of making a foldable their next phone. We’re continuing to build tools and libraries to make it easier to build for different device types, including device streaming and new drag and drop APIs in Compose. See how Zoom saw 2x higher user engagement and optimized their app for large screens, and get started with making your app work better across screen sizes and form factors.

The latest in wearables, with Wear OS 4 and Pixel Watch 2

Earlier this month, we saw the launch of Google Pixel Watch 2 - the first Google watch with all the capabilities of Wear OS 4! The latest version of Wear OS offers several capabilities that make it easier to develop exceptional wearable experiences, from Watch Face Format to enhanced tiles and more. Read more to discover the latest updates to Wear OS and how you can get started!

Making excellent, premium apps

Earlier this month, Android 14 started rolling out to users around the world. So there’s no better time to start optimizing your apps for the release and taking advantage of new features in Android 14 to help you build excellent experiences for your users using the best of Android, such as improved camera functionality with Ultra HDR, seamless authentication with Credential Manager, and enhanced widget development with Jetpack Glance. In the show, we saw how Snapchat used the Camera2 Extensions API to build camera features such as night mode, zoom, and tap-to-focus, enhancing their users’ experience capturing high-quality Snaps on Android devices, and we also had a conversation with Dave Burke about Android 14 and more. Take a look!
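For a flavor of the seamless authentication flow, here is a hedged sketch of a password-based sign-in with the Jetpack Credential Manager API (error handling trimmed; the `signIn` function name is illustrative):

```kotlin
import android.app.Activity
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPasswordOption
import androidx.credentials.PasswordCredential

suspend fun signIn(activity: Activity): PasswordCredential? {
    val credentialManager = CredentialManager.create(activity)
    // Ask for saved passwords; passkey and federated options can be
    // added to the same request list.
    val request = GetCredentialRequest(listOf(GetPasswordOption()))
    // Shows the system credential picker and suspends until the user
    // chooses a credential (or cancels, which throws).
    val result = credentialManager.getCredential(activity, request)
    return result.credential as? PasswordCredential
}
```

One request can surface passwords, passkeys, and federated sign-in together, which is what makes the experience feel seamless to the user.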

Connecting with you at events around the world

This year, we're excited to bring the Android team and our Android Google Developer Expert friends to events around the world, you can learn more about it here. Later this month, the Android team will be at Droidcon London (October 26-27), bringing talks and hosting office hours around many exciting topics, and a panel of subject matter experts. Android GDEs will be speaking at 100+ DevFest events around the world, with special appearances from the Android team at DevFests in New York, the Bay Area, London, and Singapore among others. We look forward to connecting with thousands of you in person!


Missed the show? You can watch it here, or check out the full playlist here. This is your conversation with the broader Android community, and if you’ve got an idea for the next show, we’d love to hear it - send us a Tweet, or share a message in the comments. We can’t wait to hear from you!

What it means to be an Android Google Developer Expert

Posted by Yasmine Evjen, Community lead, Android DevRel

The community of Android developers is at the heart of everything we do. Seeing the community come together to build new things, encourage each other, and share their knowledge encourages us to keep pushing the limits of Android.

At the core of this is our Android Google Developer Experts, a global community that comes together to share best practices through speaking, open-source contributions, workshops, and articles. This is a caring community that mentors and supports each other, and isn’t afraid to get their hands dirty with early access Android releases, providing feedback to make each one the best release for developers across the globe.

We asked, “What do you love most about being in the #AndroidDev and Google Developer Expert community?”

Gema Socorro says, ”I love helping other devs in their Android journey,” and Jaewoong Eum shares the joy of “learning, building, and sharing innovative Android technologies for everyone.”

Hear from the Google Developer Expert Community

We also sat down with Ahmed Tikiwa, Annyce Davis, Dinorah Tovar, Harun Wangereka, Madona S Wambua, and Zarah Dominguez - to hear about their journey as an Android Developer and GDE and what this role means to them - watch them on The Android Show below.

Annyce Davis, VP of Engineering at Meetup, shares, “the community is a great sounding board to solve problems, and helps me stay technical and keep learning.”

Does the community inspire you? Get involved by speaking at your local developer conferences, sharing your latest Android projects, and not being afraid to experiment with new technology. This year, we’re spotlighting community projects! Tag us in your blogs, videos, tips, and tricks to be featured in the latest #AndroidSpotlight.

Active in the #AndroidDev community? Become an Android Google Developer Expert.

A group of Android developers and a baby, standing against a hedge of lush greenery, smiling