Tag Archives: Android

Google Workspace Updates Weekly Recap – November 1, 2024

4 New updates

Unless otherwise indicated, the features below are available to all Google Workspace customers, and are fully launched or in the process of rolling out. Rollouts should take no more than 15 business days to complete if launching to both Rapid and Scheduled Release at the same time. If not, each stage of rollout should take no more than 15 business days to complete.

Apply black & white filter to Google Drive scans on Android devices 
In August, we announced that you can now save files scanned in the Google Drive Android app as a .JPEG. This week, we’re excited to introduce an additional scanning option that gives you the ability to apply a black & white filter on your document scans. This new filter helps enhance text and other important elements, ensuring they are sharply defined against the background. | Rolling out now to Rapid Release and Scheduled Release domains. | Available to all Google Workspace customers, Workspace Individual Subscribers, and users with personal Google accounts. | Visit the Help Center to learn more about scanning files with your mobile device.

AI Classification now supports Field Selection for Model Training 
When AI Classification first launched, Labels eligible for model training needed to have a single field of either a badge or option-list field type, and Labels with multiple fields were ineligible. Now, customers that use AI Classification will be able to select which badge or option list field they would like to train a model for after identifying the target label. Once trained and enabled, the AI model will automatically apply the label and will only populate the selected field. | Roll out to Rapid Release and Scheduled Release domains is complete. | Available to customers with the AI Security add-on, Gemini Enterprise add-on, and Gemini Education Premium. | Visit the Help Center to learn more about Label Google Drive files automatically using AI classification. 


Reducing noise from unfollowed threads in Google Chat
In order to make it easier to identify which unread threads are most relevant to you, we’re reducing the noise by removing visual cues from threads that you do not follow in Google Chat. Starting this week, new activity, such as unread messages from threads you do not follow, will no longer bold and appear at the top of your conversation lists. | Rolling out now to Rapid Release and Scheduled Release domains on web and mobile at an extended rollout pace (potentially longer than 15 days for feature visibility). | Available to all Google Workspace customers. 


Introducing a better filter by condition experience for tables in Google Sheets 
Tables in Google Sheets will now provide users with a smarter filter by condition experience. Sheets offers 21 options for users to filter by condition, such as “Date is” or “Text ends with”. However, we know there are scenarios in which certain filters might not be applicable based on the type of data in a spreadsheet. Based on the set column type, users will now only see relevant filter by condition options. For example, if your column type is set to number, the filter options will be number-based only. | Roll out to Rapid Release and Scheduled Release domains is complete. | Available to all Google Workspace customers, Workspace Individual Subscribers, and users with personal Google accounts. | Visit the Help Center to learn more about sorting & filtering your data.



Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Refine emails faster with updates to the “Polish” shortcut in Gmail 
We’re expanding the Help me write shortcut to web and introducing a Polish shortcut on web and mobile that helps you refine emails even faster. | Learn more about email shortcuts in Gmail. 

Google Classroom now supports exporting missing and excused grades to select Student Information Systems (SIS)
Teachers can now include missing and excused grades when exporting grades to their Student Information Systems (SIS). | Learn more about exporting missing and excused grades to select Student Information Systems (SIS). 

New density setting in Google Chat 
To give users more control over how they see information in Google Chat, we’re introducing a new setting that allows you to control the visual density of screen elements. Choose between “Comfortable” or “Compact” on chat.google.com. | Learn more about density settings in Chat.

Context Aware Access insights and recommendations are now generally available
We’re making it easier to apply context-aware access (CAA) policies with new insights and recommendations. We’ll proactively surface potential security gaps and suggest pre-built CAA levels which admins can deploy to remediate them. | Learn more about Context Aware Access insights.

FedRAMP High authorization for Gemini for Workspace 
As recently announced, we submitted our package to obtain FedRAMP High authorization for Gemini for Workspace, including the Gemini app. A FedRAMP High certification assures federal agencies in the United States that a cloud service provides the highest level of protection for their most sensitive data, enabling them to confidently leverage cloud technologies for critical operations. | Learn more about FedRAMP High authorization for Gemini.

Gemini in the side panel of Google Chat is now available
We’re expanding Gemini in Chat to help users collaborate more effectively in their spaces, group messages and direct messages. | Learn more about Gemini in the side panel of Chat. 

Data classification labels for Gmail are now available in open beta
In addition to Google Drive, we’re expanding data classification labels to now include Gmail. Classification labels are used to classify and audit content according to organizational guidelines (“Sensitive”, “Confidential”, etc.) and apply policies, such as data loss prevention (DLP) rules, to protect sensitive information in email messages. Classification labels will be available when using Gmail on the web – support for Gmail on mobile devices will be introduced in the coming months. | Learn more about the beta for data classification labels for Gmail.


Completed rollouts

The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog posts for additional details.


Rapid Release Domains: 
Scheduled Release Domains: 
Rapid and Scheduled Release Domains: 
    For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).
       


    #TheAndroidShow: live from Droidcon, including the biggest update to Gemini in Android Studio and more SDK releases for Android!

    Posted by Matthew McCullough – Vice President, Product Management, Android Developer

    We just dropped our Fall episode of #TheAndroidShow, on YouTube and on developer.android.com, and this time we’re live from Droidcon in London, giving you the latest in Android Developer news, including the biggest update to Gemini in Android Studio and the news that there will be more frequent SDK releases for Android, including two next year. Let’s dive in!



    Gemini in Android Studio: now helping you at every stage of the development cycle

    AI has the ability to accelerate your development experience, and help you be more productive. That's why we introduced Gemini in Android Studio, your AI-powered development companion, designed to make it easier and faster for you to build high quality Android apps. Today, we're launching the biggest set of updates to Gemini in Android Studio since launch: now for the first time, Gemini brings the power of AI with features at every stage of the development lifecycle, directly into your Android Studio IDE experience.



    More frequent Android SDK releases starting next year

    Android has always worked to get innovation into the hands of users faster. In addition to our annual platform releases, we’ve invested in Project Treble, Mainline, Google Play services, monthly security updates, and the quarterly releases that help power Pixel's popular feature drop updates. Building on the success those quarterly Pixel releases have had in bringing innovation to Pixel users faster, Android will have more frequent SDK releases going forward, with two releases planned in 2025 with new developer APIs. These releases will help to drive faster innovation in apps and devices, with higher stability and polish for users and developers. Stay informed on upcoming releases for the 2025 calendar.



    Make the investment in adaptive for large screens: 20% increased app spend

    Your users, especially in the premium segment, don’t just buy a phone anymore, they buy into a whole ecosystem of devices. So the experiences you build should follow your users seamlessly across the many screens they own. Take large screens, for instance – foldables, tablets, ChromeOS Devices: there are now over 300 million active Android large-screen devices. This summer, Samsung released their new foldables - the Galaxy Z Fold6 and Z Flip6, and at Google we released our own - the Pixel 9 Pro Fold. We’re also investing in a number of platform features to improve how users interact with these devices, like the developer preview of Desktop Windowing that we’ve been working on in collaboration with Samsung - optimizing these large screen devices for productivity. High quality apps optimized for large screens have several advantages on Play as well: like improved visibility in the Play Store and eligibility for featuring in curated collections and editorial articles. Apps now get separate ratings and reviews for different form factors, making positive feedback more visible.

    And it’s paying off for those who make the investment: we’ve seen that using a tablet, flip, or fold increases app spend by ~20%. FlipaClip is proof of this: they’ve seen a 54% growth in tablet users in the past four months. It has never been easier to build for large screens - with Compose APIs and Android Studio support specifically for building adaptive UIs.
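
    As a small illustration of the adaptive approach mentioned above, here is a sketch using Compose's window size class API (assuming the androidx.compose.material3 window-size-class artifact is on the classpath; SinglePaneContent and TwoPaneContent are hypothetical composables) that branches layouts on the available width:

    import android.app.Activity
    import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
    import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
    import androidx.compose.runtime.Composable

    @Composable
    fun AdaptiveHome(activity: Activity) {
        // Compute the current window size class; it updates on resize and fold/unfold.
        val windowSizeClass = calculateWindowSizeClass(activity)
        when (windowSizeClass.widthSizeClass) {
            // Narrow windows (most phones in portrait): single-pane layout.
            WindowWidthSizeClass.Compact -> SinglePaneContent()
            // Tablets, foldables, and desktop windows: show list and detail together.
            else -> TwoPaneContent()
        }
    }

    @Composable fun SinglePaneContent() { /* phone layout */ }

    @Composable fun TwoPaneContent() { /* large screen layout */ }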



    Kotlin Multiplatform for sharing business logic across Android and iOS

    Many of you build apps for multiple platforms, requiring you to write platform-specific code or make compromises in order to reuse code across platforms. We’ve seen the most value in reducing duplicated code for business logic. So earlier this year, we announced official support for Kotlin Multiplatform (KMP) for shared business logic across Android and iOS. KMP, developed by JetBrains, reduces development time and duplicated code, while retaining the flexibility and benefits of native programming.

    At Google, we’ve been migrating Workspace apps, starting with the Google Docs app, to use KMP for shared business logic across Android, iOS and Web. In the community there are a growing number of companies using KMP and getting significant benefits. And it’s not just apps - we’ve seen a 30% increase in the number of KMP libraries developed this year.

    To make it easier for you to leverage KMP in your apps, we’ve been working on migrating many of our Jetpack libraries to take advantage of KMP. For example, Lifecycle, ViewModel, and Paging are KMP compatible libraries. Meanwhile, libraries like Room, DataStore, and Collections have KMP support, so they work out-of-the-box on Android and iOS. We’ve also added a new template to Android Studio so you can add a shared KMP module to your existing Android app and begin sharing business logic across platforms. Kickstart your Kotlin Multiplatform journey with this comprehensive guide.
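
    As a rough sketch of what that shared business logic looks like in practice (the module layout and names here are illustrative, following the standard KMP template rather than anything from the Google Docs migration), a common source set declares the shared code plus expect/actual declarations per platform:

    // shared/src/commonMain/kotlin/GreetingRepository.kt
    // Business logic written once and compiled for both Android and iOS.
    class GreetingRepository {
        fun greeting(): String = "Hello from ${platformName()}"
    }

    // Declared in common code; each platform supplies its own implementation.
    expect fun platformName(): String

    // shared/src/androidMain/kotlin/Platform.android.kt
    actual fun platformName(): String = "Android ${android.os.Build.VERSION.SDK_INT}"

    // shared/src/iosMain/kotlin/Platform.ios.kt
    actual fun platformName(): String = "iOS"

    The same GreetingRepository can then back an Android ViewModel and an iOS view model without duplicating the logic across platforms.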


    Watch the Fall episode of #TheAndroidShow

    That’s a wrap on this quarter’s episode of #TheAndroidShow. A special thanks to our co-hosts for the Fall episode, Simona Milanović and Alejandra Stamato! You can watch the full show on YouTube and on developer.android.com/events/show.

    Have an idea for our next episode of #TheAndroidShow? It’s your conversation with the broader community, and we’d love to hear your ideas for our next quarterly episode - you can let us know on X or LinkedIn.

    FlipaClip optimizes for large screens and sees a 54% increase in tablet users

    Posted by Miguel Montemayor – Developer Relations Engineer

    FlipaClip is an app for creating dynamic and engaging 2D animations. Its powerful toolkit allows animators of all levels to bring their ideas to life, and its developers are always searching for new ways to help its users create anything they can imagine.

    Increasing tablet support was pivotal in improving FlipaClip users’ creativity, giving them more space and new methods of animating the stories they want to tell. Now, users on these devices can more naturally bring their visions to life thanks to Android’s intuitive features, like stylus compatibility and unique large screen menu interfaces.

    Large screens are a natural canvas for animation

    FlipaClip initially launched as a phone app, but as tablets became more mainstream, the team knew it needed to adapt its app to take full advantage of larger screens because they are more natural animating platforms. After updating the app, tablet users quickly became a core revenue-generating audience for FlipaClip, representing more than 40% of the app’s total revenue.

    “We knew we needed to prioritize the large screen experience,” said Tim Meson, the lead software engineer and co-founder of FlipaClip. “We believe the tablet experience is the ideal way to use FlipaClip because it gives users more space and precision to create.”

    The FlipaClip team received numerous user requests to improve the stylus experience on tablets, like pressure sensitivity and tilt support, as well as new brush types. So the team gave users exactly what they wanted: not only did they implement stylus support, but they also redesigned the large screen drawing area, allowing for more customization with moveable tool menus and the ability to hide extra tools.

    Now, unique menu interfaces and stylus support provide a more immersive and powerful creative experience for FlipaClip’s large screen users. By implementing many of the features its users requested and optimizing existing workspaces, FlipaClip increased its US tablet users by 54% in just four months. The quality of the animations made by FlipaClip artists also visibly increased, according to the team.


    We knew we needed to prioritize the large screen experience...because it gives users more space and precision to create - Tim Meson, Lead Software Engineer and Co-founder of FlipaClip

    Improving large screen performance

    One of the key areas the FlipaClip team focused on was achieving low-latency drawing, which is critical for a smooth and responsive experience, especially with a stylus. To help with this, the team created an entire drawing engine from the ground up using Android NDK. This engine also improved the overall app responsiveness regardless of the input method.

    “Focusing on GPU optimizations helped create more responsive brushes, a greater variety of brushes, and a drawing stage better suited for tablet users with more customization and more on-screen real estate,” said Tim.

    Previously, FlipaClip drawings were rendered using CPU-backed surfaces, resulting in suboptimal performance, especially on lower-end devices. By utilizing the GPU for rendering and consolidating touch input with the app’s historical touch data, the FlipaClip team significantly improved responsiveness and fluidity across a range of devices.
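
    FlipaClip's drawing engine itself isn't public, but the "historical touch data" mentioned above corresponds to the batched samples Android attaches to each MotionEvent. A minimal sketch of consuming them (names here are illustrative), so a fast stylus stroke is rendered from every intermediate sample rather than only the latest one, could look like this:

    import android.view.MotionEvent

    data class StrokePoint(val x: Float, val y: Float, val pressure: Float)

    fun collectStrokePoints(event: MotionEvent): List<StrokePoint> {
        val points = ArrayList<StrokePoint>(event.historySize + 1)
        // Samples batched since the previous delivered event, oldest first.
        for (i in 0 until event.historySize) {
            points.add(
                StrokePoint(
                    event.getHistoricalX(i),
                    event.getHistoricalY(i),
                    event.getHistoricalPressure(i)
                )
            )
        }
        // The most recent sample carried by this event.
        points.add(StrokePoint(event.x, event.y, event.pressure))
        return points
    }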

    “The improved performance enabled us to raise canvas size limits closer to 2K resolution,” said Tim. “It also resolved several reported application-not-responding errors by preventing excessive drawing attempts on the screen.”

    After optimizing for large screens and reducing their crash rate across device types, FlipaClip’s user satisfaction improved, with a 15% improvement in their Play Store rating for large screen devices. The performance enhancements to the drawing engine were particularly well received among users, leading to better engagement and overall positive feedback.

    Using Android Vitals, a tool in the Google Play Console for monitoring the technical quality of Android apps, was invaluable in identifying performance issues across the devices FlipaClip users were on. This helped its engineers pinpoint specific devices where drawing performance lagged and provided critical data to guide their optimizations.

    FlipaClip UI examples across large screen devices

    Listening to user feedback

    Large screen users are Android’s fastest-growing audience, reaching over 300 million users worldwide. Allowing users to enjoy their favorite apps across device types, while taking advantage of the larger screen on tablets, means a more engaging experience.

    “One key takeaway for us was always to take the time to review user feedback and app stability reports,” said Tim. “From addressing user requests for additional stylus support to pinpointing specific devices to improve drawing performance, these insights have been invaluable for improving the app and addressing pain points of large screen users.”

    The FlipaClip team noted that developing for Android stood out in several ways compared to other platforms. One key difference is the libraries provided by the Android team, which are continuously updated and improved, allowing its engineers to seamlessly address and resolve any issues without requiring users to upgrade their Android OS.

    “Libraries like Jetpack Compose can be updated independently of the device's system version, which is incredibly efficient,” said Tim. “Plus, Android documentation has gotten a lot better over the years. The documentation for large screens is a great example. The instructions are more thorough, and all the code examples and codelabs make it so much easier to understand.”

    FlipaClip engineers plan to continue optimizing the app’s UI for larger screens and improve its unique drawing tools. The team also wants to introduce more groundbreaking animation tools, seamless cross-device syncing, and tablet-specific gestures to improve the overall animation experience on large screen devices.

    Get started

    Learn how to improve your UX by optimizing for large screens.

    Updates to power your growth on Google Play

    Posted by Paul Feng – Vice President of Engineering, Product and UX, Google Play

    Our annual Playtime event series kicks off this week and we’re excited to share the latest product updates to help your business thrive. We’re sharing new ways to grow your audience, optimize revenue, and protect your business in an ever-evolving digital landscape.

    Make sure to also check out news from #TheAndroidShow to learn more about the biggest update to Gemini in Android Studio since launch that will help boost your team’s developer productivity.

    Growing your audience with enhanced discovery features

    To help people discover apps and games they'll love, we're continuously improving our tools and personalizing app discovery so you can reach and engage your ideal audience.

    Enhanced content formats: To make your video content more impactful, we’re making enhancements to how it's displayed on the Play Store. Portrait videos on your store listing now have a full-screen experience to immerse users and drive conversions with a prominent "install" button. Simply keep creating amazing portrait videos for your store listing, and we'll handle the rest.

    Our early results are promising: portrait videos drive a +7% increase in total watch time, a +9% increase in video completion count, and a +5% increase in conversions.

    Captivate users with full-screen portrait videos on your store listing

    We’ve also launched new features to create a more engaging and tailored experience for people exploring the Play Store.

      • Personalized query recommendations: To help users start their search journeys right, we’ve introduced personalized search query recommendations on Search Home. This feature is currently available in English, with support for more languages coming later this year.
    Personalized search queries help tailor search results to users’ interests

      • Interest pickers: Multi-select interest filters allow people to share their preferences so they can get more helpful recommendations tailored to their interests. Earlier this year, we announced this for games, and now these filters are also available for apps.

    Optimizing your revenue with Google Play Commerce

    We want to make it effortless for people to buy what you're selling, so we're focused on helping our 2.5 billion users in over 190 markets have a seamless and secure purchase experience. Our tools support you and your users during every step of the journey, from payment setup, to the purchase flow, to ensuring transactions are secure and compliant.

    Proactive payment setup: To help more buyers be purchase ready, we’ve been proactively encouraging people to set up payment methods in advance, both within the Play Store and during Android device setup, and even during Google account creation. Our efforts have doubled the number of purchase-ready users this year, now reaching over half a billion. And we’re already seeing results from this approach - in September alone, we saw an almost 3% increase in global conversion rates, meaning more people completing purchases and higher revenue potential for your apps and games.

    Expanded payment options: Google Play already offers users over 300 local payment methods across 65+ markets, and we’re regularly adding new payment methods. US users can now use Cash App eWallet alongside credit cards, PayPal, direct carrier billing, and gift cards, and users in Poland can pay with Blik Banking.

    Purchase flow recommendations: Our new algorithmic recommendation engine helps people discover relevant in-app purchases they’re likely to buy. Simply select products to feature in Play Console, and we'll recommend a popular or related option at different moments in the purchase journey, helping users find what they need. Our early results show an average of 3% increase in spend.

    Purchase flow recommendations help people discover relevant in-app purchases

    Cart abandonment reminders: If a user is browsing a product in your app or game, but hasn’t yet made a decision to purchase, we’ll remind them about it later when they browse the Play Store. These automatic, opt-out reminders help nudge users to complete their purchase.

    Cart abandonment reminders help users complete their purchase

    Secure bio authentication: Users can now enjoy a faster and more secure checkout experience by choosing on-device biometrics (fingerprint or face recognition) to verify their purchases, eliminating the need to enter their account password. This year, we’ve seen adoption triple, as more users choose bioauth to make their first purchase.

    Protecting your business with the Play Integrity API

    Everything we do at Google Play has safety and security at its core. That’s why we’re continuing to invest in more ways to reinforce user trust, protect your business, and safeguard the ecosystem. This includes actively combating bad actors who try to deceive users or spread malware, and giving you tools to combat abuse.

    The Play Integrity API can help you detect and respond to potential abuse such as fraud, bots, cheating, or data theft, ensuring everyone experiences your apps and games as intended. Apps that use Play Integrity features are seeing 80% less unauthorized usage on average compared to unprotected apps.
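
    The post doesn't include client code, but for orientation, a classic integrity request with the Play Integrity client library looks roughly like the sketch below; the nonce is assumed to come from your own backend, and the returned token must be sent back to your server for decryption and verification:

    import android.content.Context
    import com.google.android.play.core.integrity.IntegrityManagerFactory
    import com.google.android.play.core.integrity.IntegrityTokenRequest

    fun requestIntegrityVerdict(context: Context, serverNonce: String) {
        val integrityManager = IntegrityManagerFactory.create(context)
        integrityManager
            .requestIntegrityToken(
                IntegrityTokenRequest.builder()
                    .setNonce(serverNonce) // single-use nonce generated by your backend
                    .build()
            )
            .addOnSuccessListener { response ->
                // Forward the token to your backend, which decodes the verdict.
                val token = response.token()
            }
            .addOnFailureListener { exception ->
                // Handle errors, e.g. Play Store or Play services unavailable.
            }
    }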

    Here's what's new with the Play Integrity API:

      • Hardware-backed security signals: In the coming months, you can opt in to improved Play Integrity API verdicts backed by hardware security and other signals on Android 13+ devices. This means faster, more reliable, and more privacy-friendly app and device verification, making it significantly harder and more costly for attackers to bypass.
      • New app access risk feature: Now out of beta, this feature allows you to detect and respond to apps that can capture the screen or control the device, so you can protect your users from scams or malicious activity.

    Those are the latest updates from Google Play! We're always enhancing our tools to help address the specific challenges and opportunities of different app categories, from games and media to entertainment and social.

    We're excited to see how you leverage both our new and existing features to grow your business. Check out how Spotify and SuperPlay are already taking advantage of features like Play Points and Collections to achieve powerful results.




    More frequent Android SDK releases: faster innovation, higher quality and more polish

    Posted by Matthew McCullough – Vice President, Product Management, Android Developer

    Android has always worked to get innovation into the hands of users faster. In addition to our annual platform releases, we’ve invested in Project Treble, Mainline, Google Play services, monthly security updates, and the quarterly releases that help power Pixel Drops.

    Going forward, Android will have more frequent SDK releases with two releases planned in 2025 with new developer APIs. These releases will help to drive faster innovation in apps and devices, with higher stability and polish for users and developers.

    Two Android releases in 2025

    Next year, we’ll have a major release in Q2 and a minor release in Q4, both of which will include new developer APIs. The Q2 major release will be the only release in 2025 to include behavior changes that can affect apps. We’re planning the major release for Q2 rather than Q3 to better align with the schedule of device launches across our ecosystem, so more devices can get the major release of Android sooner.

    The Q4 minor release will pick up feature updates, optimizations, and bug fixes since the major release. It will also include new developer APIs, but will not include any app-impacting behavior changes.

    Outside of the major and minor Android releases, our Q1 and Q3 releases will provide incremental updates to help ensure continuous quality. We’re actively working with our device partners to bring the Q2 release to as many devices as possible.

    2025 SDK release timeline showing a features only update in Q1 and Q3, a major SDK release with behavior changes, APIs, and features in Q2, and a minor SDK release with APIs and features in Q4

    What this means for your apps

    With the major release coming in Q2, you’ll need to do your annual compatibility testing a few months earlier than in previous years to make sure your apps are ready. Major releases are just like the SDK releases we have today, and can include behavior changes along with new developer APIs – and to help you get started, we’ll soon begin the developer preview and beta program for the Q2 major release.

    The minor release in Q4 will include new APIs, but, like the incremental quarterly releases we have today, will have no planned behavior changes, minimizing the need for compatibility testing. To differentiate major releases (which may contain planned behavior changes) from minor releases, minor releases will not increment the API level. Instead, they'll increment a new minor API level value, which will be accessed through a constant that captures both major and minor API levels. A new manifest attribute will allow you to specify a minor API level as the minimum required SDK release for your app. We’ll have an initial version of support for minor API levels in the upcoming Q2 developer preview, so please try building against the SDK and let us know how this works for you.
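
    The constant and manifest attribute names aren't spelled out in this post, so the sketch below uses purely hypothetical placeholder values; it only illustrates the major-then-minor comparison an app would perform when gating a feature introduced in a minor release:

    import android.os.Build

    // Hypothetical values for a feature added in a minor release of API level 36.
    private const val REQUIRED_MAJOR_API = 36
    private const val REQUIRED_MINOR_API = 1

    // minorApiLevel stands in for however the platform ultimately exposes the minor
    // level; the real constant had not been announced at the time of this post.
    fun isMinorReleaseFeatureAvailable(minorApiLevel: Int): Boolean =
        Build.VERSION.SDK_INT > REQUIRED_MAJOR_API ||
            (Build.VERSION.SDK_INT == REQUIRED_MAJOR_API && minorApiLevel >= REQUIRED_MINOR_API)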

    When planning your targeting for 2026, there’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, and that will be tied to the major API level only.

    How to get ready

    In addition to compatibility testing on the next major release, you'll want to make sure to test your builds and CI systems with SDKs supporting major and minor API levels – some build systems (including the Android Gradle plugin) might need adapting. Make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes for early testing.

    Meta is a great example of how to embrace and test for new releases: they improved their velocity towards targetSdkVersion adoption by 4x. They compiled apps against each platform Beta and conducted thorough automated and smoke tests to proactively identify potential issues. This helped them seamlessly adopt new platform features, and when the release rolled out to users, Meta’s apps were ready - creating a great user experience.

    What’s next?

    As always, we plan to work closely with you as we move through the 2025 releases. We will make all of our quarterly releases available to you for testing and feedback, with over-the-air Beta releases for our early testers on Pixel and downloadable system images and tools for developers.

    Our aim with these changes is to enable faster innovation and a higher level of quality and polish across releases, without introducing more overhead or costs for developers. At the same time, we’re welcoming an even closer collaboration with you throughout the year. Stay tuned for more information on the first developer preview of Android 16.

    The shift in platform releases highlights Android's commitment to constant evolution and collaboration. By working closely with partners and listening to the needs of developers, Android continues to push the boundaries of what's possible in the mobile world. It's an exciting time to be part of the Android ecosystem, and I can't wait to see what the future holds!

    Gemini in Android Studio, now helping you across the development lifecycle

    Posted by Sandhya Mohan – Product Manager, Android Studio

    This is Our Biggest Feature Release Since Launch!

    AI can accelerate your development experience, and help you become more productive. That's why we introduced Gemini in Android Studio, your AI-powered coding companion. It’s designed to make it easier for you to build high quality Android apps, faster. Today, we're releasing the biggest set of updates to Gemini in Android Studio since launch, and now Gemini brings the power of AI to every stage of the development lifecycle, directly within the Android Studio IDE experience. And for more updates on how to grow your apps and games businesses, check out the latest updates from Google Play.

    Download the latest version of Android Studio in the canary channel to take advantage of all these new features, and read on to unpack what's new.



    Gemini Can Now Write, Refactor, and Document Android Code

    Gemini goes beyond just guidance. It can edit your code, helping you quickly move from prototype to implementation, implement common design patterns, and refactor your code. Gemini also streamlines your workflow with features like documentation and commit message generation, allowing you to focus more time on writing code.

    Moving image demonstrating Gemini writing code for an Android Composable in real time in Android Studio

    Coding features we are launching include:

      • Gemini Code Transforms - modify and refactor code using custom prompts.

        using Gemini to modify code in Android Studio

      • Commit message generation - analyze changes and propose VCS commit messages to streamline version control operations.

        using Gemini to analyze changes and propose VCS commit messages in Android Studio

      • Rethink and Rename - generate intuitive names for your classes, methods, and variables. This can be invoked while you’re coding, or as a larger refactor action applied to existing code.

        using Gemini to generate intuitive names for variables while you're coding in Android Studio

      • Prompt library - save and manage your most frequently used prompts. You can quickly recall them when you need them.

        save your frequently used prompts for future use with Gemini in Android Studio

      • Generate documentation - get documentation for selected code snippets with a simple right click.

        generating code documentation in Android Studio

    Integrating AI into UI Tools

    It’s never been easier to build with Compose now that we have integrated AI into Compose workflows. Composable previews help you visualize your composables during design time in Android Studio. We understand that manually crafting mock data for the preview parameters can be time-consuming. Gemini can now help auto-generate Composable previews with relevant context using AI, simplifying the process of visualizing your UI during development.

    Visualize your composables during design time in Android Studio

    We are continuing to experiment with multimodal support to speed up your UI development cycle. Coming soon, we will allow image attachments as context, utilizing Gemini's multimodal understanding to make it easier to create beautiful and engaging user interfaces.

    Deploy with Confidence

    Gemini's intelligence can help you release higher quality apps with greater confidence. Gemini can analyze and test code, and suggest fixes — and we are continuing to integrate AI into the IDE’s App Quality Insights tool window by helping you analyze crashes reported by Google Play Console and Firebase Crashlytics. Now, with the Ladybug Feature Drop, you can generate deeper insights by using your local code context. This means that you will fix bugs faster and your users will see fewer crashes.

    Generate insights using the IDE's App Quality Insights tool window

    Some of the features we are launching include:

      • Unit test scenario generation generates unit test scenarios based on local code context.

      generate unit test scenarios based on local code context in Android Studio

        • Build / sync error insights now provides improved coverage for build and sync errors.

          build / sync error insights are now available in Android Studio

        • App Quality Insights explains and suggests fixes for observed crashes from Android Vitals and Firebase Crashlytics, and now allows you to use local code context for improved insights.


      A better Gemini in Android Studio for you

      We recently surveyed many of you to see how AI-powered code completion has impacted your productivity, and 86% of respondents said they felt more productive. Please continue to provide feedback as you use Gemini in your day-to-day workflows. In fact, a few of you wanted to share some of your tips and tricks for how to get the most out of Gemini in Android Studio.



      Along with the Gemini Nano APIs that you can integrate with your own app, Android developers now have access to Google's leading edge AI technologies across every step of their development journey — with Gemini in Android Studio central to that developer experience.

      Get these new features in the latest versions of Android Studio

      These features are all available to try today in the Android Studio canary channel. We expect to release many of these features in the upcoming Ladybug Feature Drop, to be released in the stable channel in late December — with the rest to follow shortly after.

        • Gemini Code Transforms - Modify and refactor your code within the editor
        • Commit message generation - Automatically generate commit messages with Gemini
        • Rethink and Rename - Get help renaming your classes, methods, and variables
        • Prompt library - Save and recall your most commonly used prompts
        • Compose Preview Generation - Generate previews for your composables with Gemini
        • Generate documentation - Have Gemini help you document your code
        • Unit test scenario generation - Generate unit test scenarios
        • Build / sync error insights - Ask Gemini for help in troubleshooting build and sync errors
        • App Quality Insights - Insights on how you can fix crashes from Android Vitals and Firebase Crashlytics

      As always, Google is committed to the responsible use of AI. Android Studio won't send any of your source code to servers without your consent — which means you'll need to opt in to enable Gemini's developer assistance features in Android Studio. You can read more on Gemini in Android Studio's commitment to privacy.

      Try enabling Gemini in your project and tell us what you think on social media with #AndroidGeminiEra. We're excited to see how these enhancements help you build amazing apps!

      Set a reminder: tune in for our Fall episode of #TheAndroidShow on October 31, live from Droidcon!

      Posted by Anirudh Dewani – Director, Android Developer Relations

      In just a few days, on Thursday, October 31st at 10AM PT, we’ll be dropping our Fall episode of #TheAndroidShow, on YouTube and on developer.android.com!

      In our quarterly show, this time we’ll be live from Droidcon in London, giving you the latest in Android Developer news with demos of Jetpack Compose and more. You can set a reminder to watch the livestream on YouTube, or click here to add to your calendar.


      In our Fall episode, we’ll be taking the lid off the biggest update to Gemini in Android Studio, so you don’t want to miss out! We also had a number of recent wearable, foldable and large screen device launches and updates, and we’ll be unpacking what you need to know to get building for these form factors.

      Get your #AskAndroid questions answered live!

      And we’ve assembled a team of experts from across Android to answer your #AskAndroid questions on building excellent apps, across devices - share your questions now and tune in to see if they are answered live on the show!

      #TheAndroidShow is your conversation with the Android developer community, this time hosted by Simona Milanović and Alejandra Stamato. You'll hear the latest from the developers and engineers who build Android. Don’t forget to tune in live on October 31 at 10AM PT, live on YouTube and on developer.android.com/events/show!

      5 new protections on Google Messages to help keep you safe

      Every day, over a billion people use Google Messages to communicate. That’s why we’ve made security a top priority, building in powerful on-device, AI-powered filters and advanced security that protects users from 2 billion suspicious messages a month. With end-to-end encrypted[1] RCS conversations, you can communicate privately with other Google Messages RCS users. And we’re not stopping there. We're committed to constantly developing new controls and features to make your conversations on Google Messages even more secure and private.

      As part of cybersecurity awareness month, we're sharing five new protections to help keep you safe while using Google Messages on Android:

      1. Enhanced detection protects you from package delivery and job scams. Google Messages is adding new protections against scam texts that may seem harmless at first but can eventually lead to fraud. For Google Messages beta users[2], we’re rolling out enhanced scam detection, with improved analysis of scammy texts, starting with a focus on package delivery and job seeking messages. When Google Messages suspects a potential scam text, it will automatically move the message into your spam folder or warn you. Google Messages uses on-device machine learning models to classify these scams, so your conversations stay private and the content is never sent to Google unless you report spam. We’re rolling this enhancement out now to Google Messages beta users who have spam protection enabled.
      2. Intelligent warnings alert you about potentially dangerous links. In the past year, we’ve been piloting more protections for Google Messages users when they receive text messages with potentially dangerous links. In India, Thailand, Malaysia and Singapore, Google Messages warns users when they get a link from unknown senders and blocks messages with links from suspicious senders. We’re in the process of expanding this feature globally later this year.
      3. Controls to turn off messages from unknown international senders. In some cases, scam text messages come from international numbers. Soon, you will be able to automatically hide messages from international senders who are not existing contacts so you don’t have to interact with them. If enabled, messages from international non-contacts will automatically be moved to the “Spam & blocked” folder. This feature will roll out first as a pilot in Singapore later this year before we look at expanding to more countries.
      4. Sensitive Content Warnings give you control over seeing and sending images that may contain nudity. At Google, we aim to provide users with a variety of ways to protect themselves against unwanted content, while keeping them in control of their data. This is why we’re introducing Sensitive Content Warnings for Google Messages.

        Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing, and then presents a “speed bump” containing help-finding resources and options, including the option to view the content. When the feature is enabled and an image that may contain nudity is about to be sent or forwarded, it also provides a speed bump to remind users of the risks of sending nude imagery and to help prevent accidental shares.

        All of this happens on-device to protect your privacy and keep end-to-end encrypted message content private to only the sender and recipient. Sensitive Content Warnings doesn’t allow Google access to the contents of your images, nor does Google know that nudity may have been detected. This feature is opt-in for adults, managed via Android Settings, and is opt-out for users under 18 years of age. Sensitive Content Warnings will be rolling out to Android 9+ devices including Android Go devices[3] with Google Messages in the coming months.
      5. More confirmation about who you’re messaging. To help you avoid sophisticated messaging threats where an attacker tries to impersonate one of your contacts, we’re working to add a contact verifying feature to Android. This new feature will allow you to verify your contacts' public keys so you can confirm you’re communicating with the person you intend to message. We’re creating a unified system for public key verification across different apps, which you can verify through QR code scanning or number comparison. This feature will be launching next year for Android 9+ devices, with support for messaging apps including Google Messages.

        These are just some of the new and upcoming features that you can use to better protect yourself when sending and receiving messages. Download Google Messages from the Google Play Store to enjoy these protections and controls and learn more about Google Messages here.

        Notes


        1. End-to-end encryption is currently available between Google Messages users. Availability of RCS varies by region and carrier. 

        2. Availability of features may vary by market and device. Sign up for beta testing and a data plan may be required.  

        3. Requires 2 GB of RAM. 

      Improved comments experience in Google Docs, Sheets, and Slides on Android tablets

      What’s changing

      Earlier this year, we introduced a new comments experience in Google Docs, Sheets, and Slides on web. Today, we’re announcing a similar update to Android tablets for viewing, navigating, and replying to comments, especially on the go. In addition to improved design and filtering functionality to match the web experience, you’ll now be able to easily:

      • Keep a pulse on the latest updates: now you’ll see the first comment and the two most recent replies from a comment thread, with the option to show all comments within a discussion.
      • Review comments with full context: enjoy familiar, in-context commenting, similar to the web experience, while taking advantage of larger screen real estate on tablets. 
      • Navigate and filter comments: navigation tabs and filters within the comments panel help you easily find relevant comments, without having to switch to a separate view.
      Comment experience in Docs

      Comment experience in Sheets

      Comment experience in Slides

      Getting started 

      Rollout pace 

      Availability 

      • Available to all Google Workspace customers, Workspace Individual Subscribers, and users with personal Google accounts 

      Resources 

      CameraX update makes dual concurrent camera even easier

      Posted by Donovan McMurray – Developer Relations Engineer

      CameraX, Android's Jetpack camera library, is getting an exciting update to its Dual Concurrent Camera feature, making it even easier to integrate this feature into your app. This feature allows you to stream from 2 different cameras at the same time. The original version of Dual Concurrent Camera was released in CameraX 1.3.0, and it was already a huge leap in making this feature easier to implement.

      Starting with 1.5.0-alpha01, CameraX will now handle the composition of the 2 camera streams as well. This update is additional functionality, and it doesn’t remove any prior functionality nor is it a breaking change to your existing Dual Concurrent Camera code. To tell CameraX to handle the composition, simply use the new SingleCameraConfig constructor which has a new parameter for a CompositionSettings object. Since you’ll be creating 2 SingleCameraConfigs, you should be consistent with what constructor you use.

      Nothing has changed in the way you check for concurrent camera support from the prior version of this feature. As a reminder, here is what that code looks like.

      // Set up primary and secondary camera selectors if supported on device.
      var primaryCameraSelector: CameraSelector? = null
      var secondaryCameraSelector: CameraSelector? = null
      
      for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
          // firstOrNull avoids throwing when a camera with the desired lens facing
          // isn't present in this combination; the null check below then moves on
          // to the next list of CameraInfos.
          primaryCameraSelector = cameraInfos.firstOrNull {
              it.lensFacing == CameraSelector.LENS_FACING_FRONT
          }?.cameraSelector
          secondaryCameraSelector = cameraInfos.firstOrNull {
              it.lensFacing == CameraSelector.LENS_FACING_BACK
          }?.cameraSelector
      
          if (primaryCameraSelector == null || secondaryCameraSelector == null) {
              // If either a primary or secondary selector wasn't found, reset both
              // to move on to the next list of CameraInfos.
              primaryCameraSelector = null
              secondaryCameraSelector = null
          } else {
              // If both primary and secondary camera selectors were found, we can
              // conclude the search.
              break
          }
      }
      
      if (primaryCameraSelector == null || secondaryCameraSelector == null) {
          // Front and back concurrent camera not available. Handle accordingly.
      }
      

      Here’s the updated code snippet showing how to implement picture-in-picture, with the front camera stream scaled down to fit into the lower right corner. In this example, CameraX handles the composition of the camera streams.

      // If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
      // and compose them in a picture-in-picture layout.
      val primary = SingleCameraConfig(
          cameraSelectorPrimary,
          useCaseGroup,
          CompositionSettings.Builder()
              .setAlpha(1.0f)
              .setOffset(0.0f, 0.0f)
              .setScale(1.0f, 1.0f)
              .build(),
          lifecycleOwner)
      val secondary = SingleCameraConfig(
          cameraSelectorSecondary,
          useCaseGroup,
          CompositionSettings.Builder()
              .setAlpha(1.0f)
              .setOffset(2 / 3f - 0.1f, -2 / 3f + 0.1f)
              .setScale(1 / 3f, 1 / 3f)
              .build(),
          lifecycleOwner)

      // Bind to lifecycle
      val concurrentCamera: ConcurrentCamera =
          cameraProvider.bindToLifecycle(listOf(primary, secondary))
      

      You are not constrained to a picture-in-picture layout. For instance, you could define a side-by-side layout by setting the offsets and scaling factors accordingly. You want to keep both dimensions scaled by the same amount to avoid a stretched preview. Here’s how that might look.

      // If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
      // and compose them in a side-by-side layout.
      val primary = SingleCameraConfig(
          cameraSelectorPrimary,
          useCaseGroup,
          CompositionSettings.Builder()
              .setAlpha(1.0f)
              .setOffset(0.0f, 0.25f)
              .setScale(0.5f, 0.5f)
              .build(),
          lifecycleOwner)
      val secondary = SingleCameraConfig(
          cameraSelectorSecondary,
          useCaseGroup,
          CompositionSettings.Builder()
              .setAlpha(1.0f)
              .setOffset(0.5f, 0.25f)
              .setScale(0.5f, 0.5f)
              .build(),
          lifecycleOwner)

      // Bind to lifecycle
      val concurrentCamera: ConcurrentCamera =
          cameraProvider.bindToLifecycle(listOf(primary, secondary))
      

      We’re excited to offer this improvement to an already developer-friendly feature. Truly the CameraX way! CompositionSettings in Dual Concurrent Camera is currently in alpha, so if you have feature requests to improve upon it before the API is locked in, please give us feedback in the CameraX Discussion Group. And check out the full CameraX 1.5.0-alpha01 release notes to see what else is new in CameraX.