In the past year, the Android team made significant improvements to on-device machine learning to help developers create smarter apps with more features to process images, sound, and text. In the Google I/O talk Build smarter Android apps with on-device Machine Learning, David Miro-Llopis, Product Manager on ML Kit, and Thomas Ezan, Android Developer Relations Engineer, review new Android APIs and solutions and showcase applications using on-device ML.
Running ML processes on-device delivers low latency, increases data privacy, enables offline support, and can reduce your cloud bill. Applications such as Lens AR Translate, or the document scanning feature available in Files in India, benefit from the advantages of on-device ML.
To deploy ML features on Android, developers have two options:
ML Kit: offers production-ready ML solutions for common user flows via easy-to-use APIs.
Android’s custom ML stack: built on top of TensorFlow Lite, it provides control over the inference process and the user experience.
ML Kit released new APIs and improved existing features
Over the last year, the ML Kit team worked on both improving existing APIs and launching new ones: face mesh and document scanner. ML Kit is launching a new document scanner API in Q3 2023 that will provide a consistent scanning experience across apps on Android. Developers will be able to use it with only a few lines of code, without needing the camera permission, and with low APK size impact (given that it will be distributed via Google Play Services). In a similar fashion, the Google code scanner is now generally available and provides a consistent scanning experience across apps, without needing the camera permission, via Google Play Services.
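For illustration, here is a minimal sketch of invoking the Google code scanner from Kotlin (assuming the com.google.android.gms:play-services-code-scanner dependency; API names follow the current ML Kit documentation):

import com.google.mlkit.vision.codescanner.GmsBarcodeScanning

// Google Play Services presents its own scanning UI, so the app itself
// needs no camera permission and no camera code.
val scanner = GmsBarcodeScanning.getClient(context)
scanner.startScan()
    .addOnSuccessListener { barcode ->
        val value = barcode.rawValue // decoded barcode content
    }
    .addOnFailureListener { e ->
        // Scanning failed or the user canceled
    }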
Additionally, ML Kit improved the performance of the following APIs: barcode detection (by 17%), text recognition, digital ink recognition, pose detection, translation, and smart reply. ML Kit also integrated some APIs into Google Play Services, so you don’t have to bundle the models with your application. Many developers use ML Kit to easily integrate machine learning into their apps; for example, WPS uses ML Kit to translate text in 43 languages and saves $65M a year.
Acceleration Service in Android’s custom ML stack is now in public beta
To support custom machine learning, the Android ML team is actively developing Android’s custom ML stack. Last year, TensorFlow Lite and GPU delegates were added to Google Play Services, which lets developers use TensorFlow Lite without bundling it with their app and provides automatic updates. Hardware acceleration improves inference performance, which can in turn significantly improve the user experience of your ML-enabled Android app. This year, the team is also announcing Acceleration Service, a new API that enables developers to pick the optimal hardware acceleration configuration at runtime. It is now in public beta, and developers can learn more and get started here.
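As a sketch of how the Play Services runtime is consumed (assuming the com.google.android.gms:play-services-tflite-java dependency; modelBuffer stands in for your own .tflite model):

import com.google.android.gms.tflite.java.TfLite
import org.tensorflow.lite.InterpreterApi
import org.tensorflow.lite.InterpreterApi.Options.TfLiteRuntime

// Initialize the TensorFlow Lite runtime that ships with Google Play Services,
// then create an interpreter backed by it instead of a runtime bundled in the APK.
TfLite.initialize(context).addOnSuccessListener {
    val options = InterpreterApi.Options().setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)
    val interpreter = InterpreterApi.create(modelBuffer, options)
    // interpreter.run(input, output) then works as with a bundled TensorFlow Lite
}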
Posted by Adarsh Fernando, Senior Product Manager, Android Studio
We first announced Android Studio at I/O 2013 with a promise to deliver a best-in-class integrated development environment (IDE) focused on Android app developers. 10 years later, this commitment to developer productivity still drives the team to deliver new tools and solutions that help teams around the world to create amazing app experiences for their users. And with Google's push to unlock the power of AI to help you throughout your day, Android Studio Hedgehog introduces a key breakthrough: an AI-powered conversational experience designed to make you more productive.
In addition to accelerating coding productivity, this latest version of the IDE provides better tools when you develop for multiple form factors, and helps you improve app quality with new insights, debugging, and testing solutions. All these improvements add to the many updates we’ve included in Android Studio Giraffe, which is now in the Beta channel and helps make it easier to configure your builds with Kotlin DSL support, improve sync times with new data and guidance, target the latest Android SDK version with the new Android SDK Upgrade Assistant, and more.
To see highlights of the new features in action including Studio Bot, watch the What’s new in Android Developer Tools session from Google I/O 2023.
What’s new in Android Development Tools - with Studio Bot Demo
At the heart of our mission is accelerating your ability to write high-quality code for Android. In this release we are excited to introduce an AI-powered conversational experience called Studio Bot, which leverages Codey, Google's foundation model for coding and a descendant of PaLM 2, to help you generate code for your app and make you more productive. You can also ask questions to learn more about Android development or get help fixing errors in your existing code — all without ever having to leave Android Studio. Studio Bot is in its very early days, and we’re training it to become even better at answering your questions and helping you learn best practices. We encourage you to try it out for yourselves, and to help it improve by sharing your feedback directly with Studio Bot.
Privacy is top of mind, and what is unique in this integration is that you don’t need to send your source code to Google to use Studio Bot—only the chat dialogue between you and Studio Bot is shared. Much like our work on other AI projects, we stick to a set of principles that hold us accountable. We’re taking a measured approach to our rollout; for this initial launch, Studio Bot is only available to Android developers in the US. You can read more here.
Studio Bot
Live Edit
Live Edit helps keep you in the flow by minimizing interruptions when you make updates to your Compose UI and validates those changes on a running device. You can use it in manual mode to control when the running app should be updated or in automatic mode to update the running app as you make code changes. Live Edit is available in Android Studio Giraffe Beta, with the Hedgehog release providing additional improvements in error handling and reporting.
Live Edit with Compose
Build productivity
Kotlin DSL and Version Catalogs
A number of updates help you leverage more modern syntax and conventions when configuring your build. Kotlin is the recommended language when developing for Android. Now, with official support for Kotlin DSL in your Gradle build scripts, it’s also the preferred way to configure your build, because Kotlin is more readable and offers better compile-time checking and IDE support. We’ve also added experimental support for TOML-based Gradle Version Catalogs, a feature that lets you manage dependencies in one central location and share them across modules or projects (see the sketch below). Android Studio now makes it easier to configure version catalogs through editor suggestions and integration with the Project Structure dialog and the New Project Wizard.
Kotlin DSL and Version Catalogs in the New Project Wizard
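To illustrate, here is a minimal sketch of a version catalog in use; the catalog entries are illustrative, and Gradle generates the type-safe libs accessors from gradle/libs.versions.toml:

// gradle/libs.versions.toml (illustrative entries):
//   [versions]
//   coreKtx = "1.10.1"
//   [libraries]
//   androidx-core-ktx = { group = "androidx.core", name = "core-ktx", version.ref = "coreKtx" }

// app/build.gradle.kts, referencing the generated type-safe accessor
dependencies {
    implementation(libs.androidx.core.ktx)
}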
Per-app language preferences
Typically, multilingual users set their system language to one language—such as English—but they want to select other languages for specific apps, such as Dutch, Chinese, or Hindi. Android 13 introduced support for per-app language preferences, and now Android Gradle plugin 8.1 and higher can configure your app to support it automatically. Learn more.
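A minimal sketch of the opt-in, as documented for AGP 8.1 (the resources.properties entry declares the locale of your unqualified res/values resources):

// app/build.gradle.kts
android {
    androidResources {
        // AGP generates the app's LocaleConfig from the locales you ship
        generateLocaleConfig = true
    }
}

// app/src/main/res/resources.properties:
//   unqualifiedResLocale=en-US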
Download impact during Sync
When using Android Gradle Plugin 7.3 or higher, the Build > Sync tool window now includes a summary of time spent downloading dependencies and a detailed view of downloads per repository, so you can easily determine whether unexpected downloads are impacting build performance. It can also help you identify inefficiencies in how you configure your repositories. Learn more.
Build Analyzer showing impact of downloads during build
New Android SDK Upgrade Assistant
Android Studio Giraffe introduces the Android SDK Upgrade Assistant, a new tool that helps you upgrade the targetSdkVersion, which is the API level that your app targets. Instead of having to navigate every API change with an Android SDK release, the Android SDK Upgrade Assistant guides you through upgrading targetSdkVersion level by level by creating a customized filter of API changes that are relevant to your app. For each migration step, it highlights the major breaking changes and how to address them, helping you take advantage of what the latest versions of Android have to offer much more quickly. To open the Android SDK Upgrade Assistant, go to Tools > Android SDK Upgrade Assistant. In the Assistant panel, select the API level that you want to upgrade to for guidance.
Upgrade more quickly with the Android SDK Upgrade Assistant
Developing for form factors
Google Pixel Fold and Tablet Virtual Devices
Although these devices won’t launch until later this year, you can start preparing your app to take full advantage of the expanded screen sizes and functionality of these devices by creating virtual devices using new Google Pixel Fold and Google Pixel Tablet device profiles in Android Studio Hedgehog. To start, open Device Manager and select Create Device.
Pixel Tablet running on the Android Emulator
Emulator Support for Wear OS 4 Developer Preview
Wear OS 4 is the next generation OS for Wear. Based on Android 13, it officially launches in the fall and has a great selection of new features and optimizations. We’re giving you a preview of all the new platform features with the new Wear OS 4 emulator. We recommend you try it with Android Studio Hedgehog and test that your Wear OS app works as intended with the latest platform updates. The Wear OS 4 emulator will give you a faster and smoother transition to Wear OS 4, and help you make apps ready in time for the official Wear OS 4 release on real devices. Check out the Wear 4 Preview site for how to get started with the new Wear OS 4 emulator.
Watch Face Format support in Wear OS 4 Emulator
Together with Samsung, we’re excited to announce the launch of the Watch Face Format, a new way to build watch faces for Wear OS. The Watch Face Format is a declarative XML format, meaning there will be no code in your watch face APK. The platform takes care of the logic needed to render the watch face so you no longer have to worry about code optimizations or battery performance. Use watch face creation tools such as Watch Face Studio to design watch faces, or you can manually or dynamically edit the watch face format to build watch faces directly. You can test the new Watch Face Format on the Wear OS 4 emulator.
A Watch Face Format watch face on the Wear OS 4 emulator
Device Mirroring for local devices
Whether you use a direct USB connection or ADB over Wi-Fi, Device Mirroring lets you see and interact with your local physical devices directly within the Android Studio Running Devices window. This feature lets you focus on how you develop and test your app all in one place. With the Hedgehog release, we’re adding more functionality, including the ability to mirror Wear OS devices and simulate folding actions on foldable devices directly from the IDE.
Device Mirroring with the Pixel Fold
Android Device Streaming
We know sometimes it’s critical for you to see and test how your apps work on physical hardware to ensure that your users have the best experience. However, accessing the latest flagship devices isn’t always easy. Building on top of Device Mirroring for local devices, we’re introducing device streaming of remote physical Google Pixel devices, such as the Pixel Fold and Pixel Tablet, directly within Android Studio. Device streaming will let you deploy your app to these remote devices and interact with them, all without having to leave the IDE. If you’re interested in getting early access later this year, enroll now.
Espresso Device API
Automated testing of your app using Espresso APIs helps you catch potential issues early, before they reach users. However, testing your app across configuration changes, such as rotating or folding a device, has always been a challenge. Espresso Device API is now available to help you write tests that perform synchronous configuration changes when testing on Android virtual devices running API level 24 and higher. You can also set up test filters to ensure that tests that require certain device features, such as a folding action, only run on devices that support them. Learn more.
Synchronous device configuration changes using the Espresso Device API
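Here is a minimal sketch of such a test (assuming the androidx.test.espresso:espresso-device artifact; the test class and assertions are placeholders):

import androidx.test.espresso.device.EspressoDevice.Companion.onDevice
import androidx.test.espresso.device.action.ScreenOrientation
import androidx.test.espresso.device.rules.ScreenOrientationRule
import org.junit.Rule
import org.junit.Test

class MyScreenTest {
    // Start each test in portrait so the starting state is deterministic
    @get:Rule
    val orientationRule = ScreenOrientationRule(ScreenOrientation.PORTRAIT)

    @Test
    fun rotateToLandscape_uiStateIsPreserved() {
        // Synchronously rotate the virtual device, then assert on the UI
        onDevice().setScreenOrientation(ScreenOrientation.LANDSCAPE)
        // ... Espresso view assertions go here ...
    }
}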
Improve your app quality
App Quality Insights with Android vitals
App Quality Insights launched in Android Studio Electric Eel to provide access to Firebase Crashlytics issue reports directly from the IDE. The integration lets you navigate between your stack trace and code with a click, use filters to see only the most important issues, and see report details to help you reproduce issues. In Android Studio Hedgehog, you can now view important crash reports from Android vitals, powered by Google Play. Android vitals reports also include useful insights, such as notes from SDK providers so that you can quickly diagnose and resolve crashes related to SDKs your app might be using.
Android vitals crash reports in the App Quality Insights window
App Quality Insights with improved code navigation
When you publish your app using the latest version of AGP 8.2, crash reports now attach minimal git commit hash data to help Android Studio navigate to your code when investigating Crashlytics crash reports in the IDE. Now, when you view a report that includes the necessary metadata, you can choose to either navigate to the line of code in your current git checkout, or view a diff between the checkout and the version of your codebase that generated the crash. To get started with the right dependencies, see the documentation.
Compose State information in Debugger
When parts of your Compose UI recompose unexpectedly, it can sometimes be difficult to understand why. Now, when setting a breakpoint on a Composable function, the debugger lists the parameters of the composable and their state, so you can more easily identify what changes might have caused the recomposition. For example, when you pause on a composable, the debugger can tell you exactly which parameters have “Changed” or have remained “Unchanged”, so you can more efficiently investigate the cause of the recomposition.
Compose state information in the debugger
New Power Profiler
We are excited to announce a brand new Power Profiler in Android Studio Hedgehog, which shows power consumption on Pixel 6 and later devices running Android 10 and higher. Data is segmented by sub-system (such as Camera, GPS, and more). This data is made available when recording a System Trace via the Profiler and helps you visually correlate the power consumption of the device with the actions happening in your app. For example, you can A/B test multiple algorithms of your video calling app to optimize power consumed by the camera sensor.
The new Power Profiler
Device Explorer
The Device File Explorer in Giraffe has been renamed to Device Explorer and updated to include information about debuggable processes running on connected devices. In addition to the Files tab, which includes existing functionality that lets you explore a device’s file hierarchy, the new Processes tab lets you view a list of debuggable processes for the connected device. From there you can select a process and perform a Kill process action (which runs am kill), a Force stop (which runs am force-stop), or attach the debugger to the selected process.
Processes tab in the Device Explorer window
Compose animation preview
Compose Animation Preview in Android Studio Hedgehog now supports a number of additional Compose APIs: animate*AsState, Crossfade, rememberInfiniteTransition, and AnimatedContent (in addition to updateTransition and AnimatedVisibility). Compose Animation Preview also has new pickers that let you set non-enum or boolean states to debug your Compose animation using precise inputs. For all supported Compose Animation APIs, you can play, pause, scrub, control speed, and coordinate.
Compose Animation Preview
Embedded Layout Inspector
You can now run Layout Inspector directly embedded in the Running Devices window in Android Studio! Try out this feature today in Android Studio Hedgehog to conserve screen real estate and organize your UI debugging workflow in a single tool window. You can access common Layout Inspector features such as debugging the layout of your app by showing a view hierarchy and inspecting the properties of each view. Additionally, because the embedded Layout Inspector overlays on top of the existing device mirroring stream, overall performance when using the inspector is now much faster. To get started and understand known limitations, read the release notes.
Embedded Layout Inspector
Firebase Test Lab support for Gradle Managed Devices
Gradle Managed Devices launched in Android Gradle Plugin (AGP) 7.3 to make it easier to utilize virtual devices when running automated tests in your continuous integration (CI) infrastructure by allowing Gradle to manage all aspects of device provisioning. All you need to do is use the AGP DSL to describe the devices you want Gradle to use, as sketched below. But sometimes you need to run your tests on physical Android devices. With AGP 8.2, we have expanded Gradle Managed Devices with the ability to target real physical (and virtual) devices running in Firebase Test Lab (FTL). This capability makes it easier than ever to test at scale across the large selection of FTL devices with only a few simple steps. Additionally, this version of AGP can also take advantage of FTL’s new Smart Sharding capabilities, which allow you to get test results back much more quickly by utilizing multiple devices that run in parallel. To learn more and get started, read the release notes.
Gradle Managed Devices with support for Firebase Test Lab
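For reference, a minimal sketch of describing a virtual device with the AGP DSL (device names are illustrative; Firebase Test Lab devices are described similarly through the Firebase Test Lab Gradle plugin):

// build.gradle.kts (module)
android {
    testOptions {
        managedDevices {
            devices {
                create<com.android.build.api.dsl.ManagedVirtualDevice>("pixel2api30") {
                    device = "Pixel 2"          // device profile
                    apiLevel = 30               // system image API level
                    systemImageSource = "aosp"  // e.g. "aosp" or "google"
                }
            }
        }
    }
}

// Run instrumented tests with: ./gradlew pixel2api30DebugAndroidTest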
IntelliJ
IntelliJ Platform Update
Android Studio Hedgehog (2023.1) includes the IntelliJ 2023.1 platform release, which comes with IDE startup performance improvements, faster import of Maven projects, and a more streamlined commit process. Read the IntelliJ release notes here.
New UI
Along with the IntelliJ platform update comes further improvements to the New UI. In large part due to community feedback, there’s a new Compact Mode, which provides a more consolidated look and feel of the IDE, and an option to vertically split the tool window area and conveniently arrange the windows, just like in the old UI. We also improved the Android-specific UI by updating the main toolbar, tool windows, and new iconography. To use the New UI, enable it in Settings > Appearance & Behavior > New UI. For a full list of changes, see the IntelliJ New UI documentation.
The New UI adopted from IntelliJ
Summary
To recap, Android Studio Giraffe is available in the Beta channel. Android Studio Hedgehog, the latest version of the IDE, is available in the Canary channel and includes all of these new enhancements and features:
Coding productivity
Android Studio Bot, a tightly integrated, AI-powered assistant in Android Studio designed to make you more productive.
(Beta) Live Edit, which helps keep you in the flow by minimizing interruptions when you make updates to your Compose UI and validates those changes on a running device.
Build productivity
(Beta) Kotlin DSL and Version Catalogs, which helps you take advantage of more modern syntax and conventions when configuring your build.
(Beta) Per-app language preferences, built-in support in AGP for automatically configuring per-app language preferences.
(Beta) Download impact in Build Analyzer, which provides a summary of time spent downloading dependencies and a detailed view of downloads per repository, so you can easily determine whether unexpected downloads are impacting build performance.
(Beta) New Android SDK Upgrade Assistant, which helps you upgrade your app's targetSdkVersion (the API level that your app targets) much more quickly.
Developing for form factors
Google Pixel Fold and Google Pixel Tablet Virtual Devices, which can help you start preparing your app to take full advantage of the expanded screen sizes and functionality of these devices before they are available in stores.
Wear OS 4 Developer Preview Emulator, which similarly provides you early access to test and optimize your app against the next generation of Wear OS by Google.
Watch Face Format support in Wear OS 4 Developer Preview Emulator, a new way to build watch faces for Wear OS.
Device Mirroring for local devices, which lets you see and interact with your local physical devices directly within Android Studio’s Running Devices window.
Android Device Streaming, which lets you deploy your app to remote physical Google Pixel devices from within Android Studio; you can register for early access today!
Espresso Device API, which helps you write tests that perform synchronous configuration changes when testing on Android virtual devices running API level 24 and higher.
Improve your app quality
App Quality Insights: Android vitals, which now lets you view, filter, and navigate important crash reports from Android vitals, powered by Google Play.
App Quality Insights with improved code navigation, which lets you now choose to either navigate to the line of code in your current git checkout, or view a diff between the checkout and the version of your codebase that generated the crash.
Compose State information in Debugger, which lists the parameters of the composable and their state when paused on a breakpoint in a composable, so you can more easily identify what changes might have caused the recomposition.
New Power Profiler, which shows highly accurate power consumption from the device segmented by each sub-system.
(Beta) Device Explorer, which now includes information about debuggable processes running on connected devices and actions you can perform on them.
(Beta) Compose animation preview, which now supports a number of additional Compose APIs and new pickers that let you set non-enum or boolean states to debug your Compose animation using precise inputs.
Embedded Layout Inspector, which runs Layout Inspector directly embedded in the Running Devices window in Android Studio, leading to a more seamless debugging experience and significant performance improvements.
Firebase Test Lab support for Gradle Managed Devices, which leverages GMD to help you seamlessly configure Firebase Test Lab devices for your automated testing, and now with additional support for smart sharding.
IntelliJ
IntelliJ Platform Update to the IntelliJ 2023.1 platform release, which includes a number of performance and quality of life improvements.
New UI update that allows Android Studio to adopt a number of improvements to IntelliJ’s modern design language.
You can download Android Studio Hedgehog Canary or Android Studio Giraffe Beta today to incorporate the new features into your workflow. You can install them side by side with a stable version of Android Studio by following these instructions. The Beta release is near stable-release quality, but bugs might still exist, and Canary builds carry leading-edge features that are still under development. As always, we appreciate any feedback on things you like or features you would like to see. If you find a bug, please report the issue and also check out known issues. Remember to also follow us on Twitter, Medium, or YouTube for more Android development updates!
Posted by Phalene Gowling, Product Manager, Google Play
At this year’s Google I/O, our “Boost your revenue with Play Commerce” session highlights the newest monetization tools that are deeply integrated into Google Play, with a focus on helping you optimize your pricing strategy. Pricing your products or content correctly is foundational to driving better user lifetime value and can result in reaching new buyers, improving conversion, and encouraging repeat orders. It can be the difference between a successful sale and pricing yourself out of one, or even undervaluing your products and missing out on key sales opportunities.
To help you price with confidence, we’re excited to announce price experiments for in-app products in Play Console, allowing you to test price points and optimize for local purchasing power at scale. Price experiments will launch in the coming weeks, so read on to get the details on the new tool and learn how you can prepare to take full advantage when it's live.
A/B test to find optimal local pricing that’s sensitive to the purchasing power of buyers in different markets. Adjusting prices to local markets is already an industry-wide practice among developers, and at launch you will be able to test and manage your global prices, all within Play Console. An optimized price helps you reach both new and existing buyers who may previously have been priced out of monetized experiences in apps and games. An optimized price can also help increase repeat purchases by buyers of their favorite products.
Illustrative example only. A/B test price points with ease in Play Console
Experiment with statistical confidence: price experiments let you track how close you are to statistical significance with confidence-interval tracking, or, for a quick summary, you can view the summary at the top of the analysis once enough data has been collected to determine a statistically significant result. To make your decision on whether to apply the ‘winning’ price easier, we’ve also included support for tracking key monetization metrics such as revenue uplift, revenue derived from new installers, buyer ratio, orders, and average revenue per paying user. This gives you a more detailed understanding of how buyers behave differently in each experiment arm per market. It can also inspire further refinements toward a robust global monetization strategy.
Improve return on investment in user acquisition. Having a localized price and a better understanding of buyer behavior in each market allows you to optimize your user acquisition strategy, since you know how buyers will react to market-specific products or content. It can also inform which products you choose to feature on Google Play.
Set up price experiments in minutes in Play Console
Price experiments will be easy to run with the new dedicated section in Play Console under Monetize > Products > Price experiments. You’ll first need to determine the in-app products, markets, and the price points you’d like to test. The intuitive interface will also allow you to refine the experiment settings by audience, confidence level and sensitivity. And once your experiment has reached statistical significance, simply apply the winning price to your selected products within the tool to automatically populate your new default price point for your experiment markets and products. You also have the flexibility to stop any experiment before it reaches statistical significance if needed.
You’ll have full control of what and how you want to test, reducing any overhead of managing tests independently or with external tools – all without requiring any coding changes.
You can start preparing now by strategizing what type of price experiment you might want to run first. For a metric-driven source of inspiration, game developers can explore strategic guidance, which can identify country-specific opportunities for buyer conversion. Alternatively, start building expertise on running effective pricing experiments for in-app products by taking our new Play Academy course, in preparation for price experiments rolling out in the coming weeks.
We are excited to announce the launch of the Watch Face Format! We worked in partnership with Samsung to introduce a new way for you to build watch faces for Wear OS smartwatches.
The Watch Face Format is a declarative XML format to design the appearance and behavior of watch faces. This means that there is no executable code involved in creating a watch face, and there will be no code embedded in your watch face APK.
The Wear OS platform takes care of the logic needed to render the watch face so you no longer have to worry about code optimizations or battery performance.
Watch faces that are built with this new format require less maintenance and fewer updates than the ones built using the Jetpack Watch Face library. For example, you no longer need to update your watch face to benefit from improvements in performance or battery consumption, or to get the latest bug fixes.
Starting today, you can build watch faces in this new format and publish them on Google Play, ready for when the first Wear OS 4 watches are available.
The Watch Face Format lets you create analog and digital watch faces, watch faces with complications, customizable watch faces, and more.
Watch Face Editing
With the Watch Face Format, we have included the watch face editor as part of Wear OS itself, so users can customize every watch face using the same editor UI. You no longer need to build your own watch face editor for users to customize their watch face.
Wear OS 4’s editor for watch faces made using the Watch Face Format
Build Watch Faces, or Watch Face Tools
The new Watch Face Format can be used to build watch faces directly, or it can be integrated into creation tools, allowing designers to create watch faces without having to write any executable code.
Watch Face Studio
Today, Samsung has released the latest version of Watch Face Studio, ready for you to try now. As an alternative to directly writing XML using the Watch Face Format, Watch Face Studio makes it easy for designers to create watch faces without any coding experience.
Watch faces made in the latest version of Watch Face Studio use the Watch Face Format by default when they run on a Wear OS 4 watch, and run as traditional watch faces on a Wear OS 3 watch.
Using Watch Face Studio to create a watch face
Learn more
Build watch faces using the Watch Face Format today:
Get started with watch faces, or create watch face tools, using our documentation.
Posted by Jennifer Tsau, Product Management Lead and David Dandeneau, Engineering Lead
For more than a decade, Google has been committed to bringing safe and seamless connected experiences to cars. We’re continuing to see strong momentum and adoption across Android for Cars. Android Auto is supported by nearly every major car maker, and will be in nearly 200 million cars by the end of this year. And the number of cars powered by Android Automotive OS with Google built-in — which includes top brands like Chevrolet, Volvo, Polestar, Honda, Renault and more — is expected to nearly double by the end of this year.
With cars becoming more connected and equipped with immersive displays, there are more opportunities for developers to bring app experiences to cars. We’re excited to share updates and new ways for developers to reach more users in the car.
Apps designed for driving experiences
Helping drivers while on the road - whether they are navigating, listening to music, or checking the weather - is a top priority. We’re continuing to invest in tools and resources, including the Android for Cars App Library, to make it even easier for developers to build new apps or port existing Android apps over to cars.
New capabilities for navigation apps
Today, we announced Waze rolling out globally on the Google Play Store for all cars with Google built-in, expanding its availability beyond Android Auto. As a part of this launch, we created more templates in the Android for Cars App Library to help speed up development time across a number of app categories, including navigation.
For navigation apps, it’s also now possible to integrate with the instrument cluster, providing turn-by-turn directions right in the driver's line of sight. Developers can also access car sensor data to surface helpful information like range, fuel level, and speed, providing more contextual assistance to drivers.
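As a sketch of reading car sensor data with the library (assuming the androidx.car.app artifacts and the corresponding car hardware permissions; observeEnergyLevel is a hypothetical helper):

import androidx.car.app.CarContext
import androidx.car.app.hardware.CarHardwareManager
import androidx.car.app.hardware.info.EnergyLevel
import androidx.core.content.ContextCompat

// Call from a Screen or Session that has access to a CarContext.
fun observeEnergyLevel(carContext: CarContext) {
    val hardware = carContext.getCarService(CarHardwareManager::class.java)
    val executor = ContextCompat.getMainExecutor(carContext)
    hardware.carInfo.addEnergyLevelListener(executor) { energy: EnergyLevel ->
        // CarValue fields may be unavailable on some cars; check their status in real code
        val rangeMeters = energy.rangeRemainingMeters.value
        // Surface remaining range or fuel level as contextual assistance in the UI
    }
}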
The Waze app is coming to all cars with Google built-in, including the first-ever Chevrolet Blazer EV launching this year.
Tools to easily port your media apps across Android for Cars
Media apps continue to be a top use case in the car, and it’s quicker than ever to bring your media apps to Android Auto and Android Automotive OS. Audible recently joined popular streaming audio apps like Deezer, Soundcloud, and Spotify to offer their apps across both Android Auto and cars with Google built-in. If you have a media app on mobile, port it over to reach new users in the car.
New app categories for driving experiences
The Android for Cars App Library now allows developers to bring new categories of apps to cars, including internet of things (IoT) and weather apps. The IoT category is available to all developers, while weather is in an early access program. In the weather category, The Weather Channel app will join other weather apps like Weather & Radar later this year.
We’re also working with messaging apps like Zoom, Microsoft Teams, and Webex by Cisco to allow you to join meetings by audio from your car display in the coming months.
Coming soon, join meetings by audio from your car display.
Apps designed for parked and passenger experiences
With screens expanding in size and more being added for passengers, there is growing demand for parked and passenger experiences in cars.
Video, gaming, and browsing in cars
Now, video and gaming app categories are available in the car, with an early access program for browsing apps coming soon. YouTube is now available for car makers to offer in cars with Google built-in. And drivers of cars with Google built-in will soon have access to popular titles like Beach Buggy Racing 2, Solitaire FRVR, and My Talking Tom Friends from publishers like Vector Unit, FRVR and Outfit7 Limited. Developers can now port their large screen optimized apps to cars to take advantage of this opportunity.
YouTube is coming to cars with Google built-in, like the Polestar 2.
More screens in cars allow for new experiences between drivers and passengers, including individual and shared entertainment experiences. We're excited to announce that multi-screen support is coming to Android Automotive OS 14 — stay tuned for more updates.
Support for multiple screens is coming to Android Automotive OS 14.
Start developing apps for cars today
To learn how to bring your apps to cars, check out the technical session, codelab and documentation on the Android for Cars developer site. With all the opportunities across car screens, there has never been a better time to bring your apps and experiences to cars. Thanks for all the contributions to the Android ecosystem. See you on the road!
The Android Studio logo redesign has caught the attention of the developer community since its sneak peek at the Android Developer Summit ‘22. We are thrilled to release the new Android Studio logo with the stable release of Flamingo. Now that the new logo is available to most Android Studio users, we can examine the design changes in greater detail and decode their meaning.
This case study offers a comprehensive overview of the design journey, from identifying the initial problem to the final outcome. It explores the critical brand elements that the team needed to consider and the tools used throughout the redesign process. This case study also delves into the various stages of design exploration, highlighting the efforts to create a modern logo while honoring the Android Studio brand's legacy.
You told us the Android Studio logo looked a little weird and complicated. It doesn't shrink down well and it's way too similar to the emulator. We heard you!
“With Android Studio’s new Logo, it seems like the studio team gave high consideration to Android Launcher Icon guidelines with no regards for how it looks on a Windows Machine Taskbar” (tweet by @theretroportal, Oct 22, 2020)
The Android Studio logo used between 2020 and 2022 was well-suited for print, but it posed challenges when used as an application icon. Its readability suffered when reduced to smaller sizes, and its similarity to the emulator caused confusion.
2020 - 2022 Android Studio Stable scalability issues
Additionally, the use of color alone to differentiate between Canary and Stable versions made it difficult for users with color vision deficiencies.
The redesign aimed to resolve these concerns by creating a logo that was easy to read, visually distinctive, and followed the OS guidelines when necessary, ensuring accessibility. The new design also maintained a connection with the Android logo family while honoring its legacy.
Android Developer Logo Family
In this case study, we will delve into the version history and evolution of the Android Studio logo and how it has changed over the years.
A brief history of the Android Studio logo
2013: The original Android Studio logo was a 3D robot that highlighted the gears and inner workings of the bugdroid. At this time, the Android Emulator was the bugdroid.
2014: The Android Emulator transitioned to a flat mark but remained otherwise unchanged.
2014-2019: An updated Android Studio logo was introduced featuring an "A" compass in front of a green circle.
2019: In Canary 3.6, the color palette was updated to match Android 10.
2020-2022: With the release of Android Studio 4.1 Canary, the "A" compass was reduced to an abstract form placed in front of a blueprint. The Android head was also added, peeking over the top.
A timeline of Android Studio & Emulator design evolution
Understanding the Android brand elements
When redesigning a logo, it's important to consider brand elements that unify products within an ecosystem. For the Android Developer ecosystem, the “robot head” is a key brand element, alongside the primary Android green color. The secondary colors, blue and navy, and tertiary colors like orange can also be utilized for support.
Android brand color palette
Key objectives
Iconography: use recognizable and appropriate symbols, such as the "A" compass for Android Studio or a device for the Android Emulator, to convey purpose and functionality clearly and quickly.
Enhance recognition and scalability: the Android Studio and Android Emulator logos should prioritize legibility and scalability, ensuring that they can be easily recognized and understood even at smaller sizes.
Establish distinction: the Android Studio and Android Emulator logos need to be easily distinguishable, to avoid confusion.
Maintain brand consistency: the Android Studio and Android Emulator designs should be consistent with the overall branding and visual identity of the Android family, while still being distinctive.
Ensure accessibility: the logo should be accessible to all users, including those with visual impairments. This means using clear shapes, colors, and contrast.
Follow OS guidelines: the updated application icon must align with the Android visual language and conform to the guidelines of macOS, Windows, and Linux operating systems, ensuring consistency and coherence across all platforms.
Ensure versatility: the Android Studio logo should be versatile enough to work in a variety of sizes and contexts, such as on different devices and platforms.
The tools
Paper, pencil and pen sketching, markers, Adobe Illustrator and Figma.
Design exploration: how it started
It all started as a simple brief: redesign the Android Studio logo. We initiated our creative process by brainstorming objects and concepts that evoke a sense of software development - such as pencils, rulers, building blocks, construction sites, tape measures, compasses, and protractors.
Logo exploration and sketches: pencils, rulers and compass
Logo exploration and sketches: blueprint, rulers and Android head
We experimented with replacing the drawing compass with a ruler and tried various combinations of design elements. We even explored the idea of incorporating bricks, similar to building blocks, and playfully stacked the Android head, a ruler, and a pencil together, with a nod to the terminal prompt symbol '>'.
Logo exploration with building blocks and a ‘play on terminal’
During the logo exploration phase, we examined different approaches to incorporating an "A" for Android into the design. One concept highlighted the precision of Android development tools through an "A" ruler, while another featured the original "A" compass from 2014.
Once we had generated a variety of logo concepts through sketching, we then proceeded to add the Android color palette to our designs. This was an important step to ensure that our new logo would not only stand out on its own but also maintain a strong visual connection with the wider Android Developer family branding.
Android Studio logo exploration with an “A” ruler
Android Studio logo exploration with an “A” compass
To ensure clear differentiation between the redesigned Emulator and Android Studio, we explored the option of removing elements and reversing the colors of both marks, which would simplify their overall design and make them easier to recognize at a glance.
Android Studio & Android Emulator exploration
We aimed to enhance the distinctiveness, scalability, and iconography by carefully analyzing various design elements such as line weight, corner radius, and the placement of the Android head, to create a visually strong mark. We further simplified the design by eliminating all shadow effects and reorienting the emulator phone to an upright position, which improved recognition, scalability, and scannability. This heightened the visual differentiation between the two marks, making them more recognizable and visually distinct.
Android Emulator exploration
Design exploration: how it ended
The redesigned Android Studio logo is a fresh take on the original design, featuring the Android head and the iconic A compass. The team initially considered keeping the simplified A used in the 2020–2022 logo, but ultimately decided it was not a strong enough mark to be the central symbol of the Android Studio brand. The compass's handle and hinge have been reintroduced, while the legs of the compass have been sharpened to points, reflecting the meticulousness and precision that developers bring to their craft. Additionally, the adjustment angle radius has been reinstated, creating the crossbar necessary to form the letter A.
Inclusive design: improved accessibility and scalability
Accessible: the logo uses secondary encoding with an outlined A for Canary and a solid A for Stable in addition to color. This makes it easier for users with color vision deficiencies to distinguish between the two application icons.
Minimal and scalable: the blueprint drawing in the 2020–2022 logo was removed to create a minimalist design that is scalable and legible at smaller sizes. This makes it easier for low-vision users to see.
Ensuring recognition: the new logo's central focus is the A compass, which incorporates elements from the original compass mark. This helps ensure that the application icon is recognizable to users, even at small sizes.
The unique shapes: squircle and 13-pointed bottle cap
The Android Studio application icon consists of two unique shapes: a squircle, a square with slightly rounded corners commonly used in macOS applications, and a 13-pointed bottle cap, which is a shape derived from the Modern Android design system. Besides reflecting the design system, the 13-pointed bottle cap also serves as a delightful Easter egg 🥚, with 13 points specifically included to coincide with Android 13's release. These two background shapes are used on desktop to adhere to OS guidelines, and to ensure that the application icon is legible and recognizable in both dark and light mode.
Android Studio has two application icons - one for the Canary version and one for the Stable release. The Canary application icon is a white outline on a dark blue background, representing a blueprint or prototype. The Android Studio Preview (Canary) version enables developers to experiment with new features that are still in development. The white outline of the A compass in Canary indicates that the features are not yet finalized and may change.
From 2014 to 2022, the Android Studio application icons featured different background colors, with yellow representing Canary and green (2014 - 2019) and white (2020-2022) representing Stable releases. However, the most recent redesign takes accessibility to a new level by going beyond the use of background colors alone to differentiate between Canary and Stable. The new design employs a secondary encoding method, featuring an outlined A for Canary and a solid A for Stable, in addition to color, to effectively convey meaning and make the application icons more accessible for users with color vision deficiencies.
The new Android Studio application icons also embody the spirit of software development, highlighting the transformation from a blueprint/prototype (Canary) to a fully designed and polished product (Stable). Drawing inspiration from the design language of the Canary and Stable Splash Screens, the Android Studio Canary and Stable icons visually reinforce the progression of the developer's journey from the blueprint and ideation stages to execution.
A modern Android Studio & Emulator logo that honors its legacy
The new Android Studio logo illustrates how a brand can evolve through simplification, improving clarity and recognition while honoring its legacy. By keeping the A compass as a reference to the 2014 logo, the team created a modern design that represents the evolution of the Android Studio platform. This minimalist design is easily recognizable and aligns with the rest of the Android Developer branding.
Now is a good time to download the latest stable version of Android Studio to see the new icon. As always, we appreciate any feedback on things you like and issues or features you would like to see. If you find a bug or issue, please file an issue and also check out known issues. Remember to also follow us on Twitter, Medium, or YouTube for more Android development updates!
Posted by Paul Lammertsma, Developer Relations Engineer
Over the past year, we’ve continued to see significant growth on Android TV OS, now with over 150 million monthly active devices. In fact, according to Strategy Analytics, the Android TV streaming platform shipped on more devices worldwide than any other streaming TV platform in 2022.
Today, we’re launching the Alpha release of Compose for TV, the latest UI framework for developing beautiful and functional apps for Android TV.
Building pixel-perfect living room experiences with Compose for TV
Compose for TV unlocks all the benefits of Jetpack Compose for your TV apps, allowing you to build apps with less code, easier maintenance and a modern Material 3 look straight out of the box:
Less code: Do more with less code and avoid entire classes of bugs, so code is simple and easy to maintain.
Intuitive: Describe your UI, and Compose takes care of the rest. As the app state changes, your UI automatically updates.
Accelerate development: Compose for TV is compatible with all your existing code so you can adopt when and where you want. Iterate fast with live previews and full Android Studio support.
Powerful & flexible: Create beautiful apps with direct access to the Android platform APIs that can be easily reused between other form factors, including your existing mobile, tablet, foldable, wearable and TV interfaces.
TV design guidelines
We're also excited to announce the launch of our new TV Design Guidelines for Android TV. This comprehensive guide gives you the tools you need to create TV apps that are visually appealing, intuitive, and immersive. The guidelines cover everything from typography and color to navigation and layout. Follow these guidelines to create high-quality TV apps that are easy to use.
Components you can use today
Here are some components from the TV library that are optimized for the living room experience. You can use them alongside the Material components in Compose you’re already familiar with.
NavigationDrawer makes it easy to implement side navigation that expands and collapses
TV-optimized components
Subtle focus hints that work on phones and tablets might not be optimal for TVs, due to environmental factors such as distance from the screen and contrast ratio. To address this, we’ve built dedicated Material 3-inspired components that provide big, bold focus for selected elements like Buttons and Cards, designed with accessibility in mind. You can use these Indications for your own custom surfaces as well.
Component focus can be customized through different indication types: Scale, Border, Glow and Color
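As a minimal sketch of these pieces together (API names are from the androidx.tv alpha and may change; titles is a placeholder list):

import androidx.compose.foundation.layout.padding
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp
import androidx.tv.foundation.lazy.list.TvLazyRow
import androidx.tv.material3.Card
import androidx.tv.material3.Text

@Composable
fun FeaturedRow(titles: List<String>, onClick: (String) -> Unit) {
    // TvLazyRow handles D-pad focus traversal; the TV Card applies a big,
    // bold focus treatment when selected, per the TV design guidelines.
    TvLazyRow {
        items(titles.size) { index ->
            Card(onClick = { onClick(titles[index]) }, modifier = Modifier.padding(12.dp)) {
                Text(text = titles[index], modifier = Modifier.padding(16.dp))
            }
        }
    }
}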
Built with developers
We worked closely with a group of early adopters to get their feedback on Compose for TV. Here’s what they have to say:
We’ve heard your feedback about the Leanback API and your desire to use a modern UI framework that looks great out of the box, but also lends itself to be thoroughly themed and customized. Please continue to give us feedback so we can continue shaping Compose for TV to fit your needs.
As we continue to evolve the Wear OS platform, we're excited to share with you some of the newest features and improvements that have been added to help you create innovative and engaging experiences for your users.
Partners like Peloton and Todoist have been building exceptional experiences for Wear OS - and seeing the impact on their feature-adoption and engagement. Hear directly from Peloton engineers about how they built a differentiated experience for the watch with Compose for Wear OS.
In this blog post, we’ll be highlighting some of the key updates we announced at Google I/O this year, so let’s dive in and explore the latest advancements in Wear OS!
Wear OS 4 Developer Preview
Today we’re releasing the first Developer Preview of Wear OS 4, the next version of Google’s smartwatch platform arriving later this year. It has enhancements to security, user customization, and power optimizations.
This preview introduces several new tools to help enhance your Wear OS app experience:
Watch Face Format
We are launching the Watch Face Format, a new way to create watch faces for Wear OS. The format makes it easier to create customizable and more power-efficient watch faces for Wear OS 4. Developed in partnership with Samsung, the Watch Face Format is a declarative XML format, so there is no executable code involved in creating a watch face and there will be no code embedded in your watch face APK. Read more.
Watch faces created using the new Format
Tiles
Wear OS tiles give users fast, predictable access to the information and actions they rely on most. Version 1.2 of the Jetpack Tiles library introduces support for platform data bindings, so if your tile uses platform data sources such as heart rate, step count, or time, your tile is updated once per second.
The new version of tiles also adds support for animations. You can use tween animations to create smooth transitions on changes to part of your layout, and transition animations can animate new or disappearing elements from the tile.
Examples of animated Tiles
Get your app ready
Wear OS 4 is based on Android 13, which is several versions newer than the current Wear OS version, so your app will need to handle the system behavior changes that took effect in Android 12 and Android 13. We recommend you start by testing your app and releasing a compatible update first – as devices get upgraded to Wear OS 4, it’s a basic but critical level of quality that provides a good app experience for users.
Download the Wear OS 4 emulator in Android Studio Hedgehog to explore new features and test your app on Wear OS 4 Developer Preview.
Tooling and library updates
Wear OS support in Firebase Test Lab
Firebase Test Lab will support running tests for your standalone app on physical Google Pixel Watches in the next few weeks. You can run your automated tests on the Google Pixel Watch via Gradle Managed Devices, or use the Firebase Console to also run Robo tests. To learn more, check out available devices.
Wear OS support in the Pre-launch reports
Today we are also excited to announce Wear OS support in Google Play Pre-launch reports for standalone apps. The Pre-launch report helps to identify issues proactively before your app reaches users, so it’s an important tool to help you launch a high-quality app. You can test for stability, accessibility, security and trust, and screenshot previews! At the moment the analysis runs on Wear emulators and it is soon launching on Google Pixel Watches.
Emulator improvements
The Wear OS 4 emulator brings support for emulated Bluetooth, which lets you test more use cases, for example Bluetooth audio.
The new Wear OS 4 emulator doesn’t support unmanaged 32-bit code, so if your app uses native code, make sure that it includes both 32-bit and 64-bit native libraries. This will also prepare your app for upcoming 64-bit only hardware.
In Android Studio Hedgehog we also added capabilities for capturing screenshots and Logcat snapshots in the Wear OS emulator, so it is now much easier to generate screenshots for your app’s store listing.
Jetpack libraries
Since the latest stable Compose for Wear OS 1.1 release, we have continued to bring new features and improvements to the toolkit. Version 1.2 already has a number of alpha releases – check out the release notes to find out more.
Health Services version 1.0 has introduced a few new features in its latest beta releases. Most notably, it includes BatchingMode, which delivers batched exercise data at a configured interval instead of the default interval, as well as an ExerciseTypeConfig API, which enables updates during ongoing exercises, such as golfing. If you are interested in learning what's new in Android Health, check out this blog.
Start building for Wear OS now
Wear OS active devices have grown 5x since launching Wear OS 3, and it's the fastest growing smartwatch platform.
We’re excited to share our brand new Wear OS Gallery, where you can find even more guidance with proven design and development patterns for messaging, media, and health & fitness apps!
With the latest updates, you'll have even more tools at your disposal to create beautiful, high-quality wearable experiences.
We’re introducing the Android Design Hub to make it easier to build compelling UI across form factors.
What is the Android UI design hub?
The design hub is a comprehensive resource with opinionated guidance designed to empower designers and developers like you to create stunning and user-friendly interfaces for Android apps. It's all about sharing takeaways, examples and do’s and don’ts, starter Figma kits, UI samples and inspiring galleries. This is the beginning of a journey we'd love for you to join us on.
Why the Android UI design hub?
A well-designed UI is crucial to the success of an app, and that's why we created a one-stop shop designed to help create outstanding user interfaces across all Android form factors:
The design hub goes into depth about what it means to design for Android, at the same time complementing and extending our Material Design open-source design system. As the Android ecosystem adds an increasing variety of devices, it's more important than ever to create seamless and adaptable experiences for your users. The Android UI Design Hub is your comprehensive resource for mastering the art of designing and implementing UI on a diverse range of devices, from smartphones and tablets to foldables to wearables and TV.
Here’s a sneak peek of what you'll find in the design hub:
Resources organized by form factors
Gain access to a library tailored for each form factor, including guidance, design templates, and sample projects. With these resources at your fingertips, you're well equipped to create exceptional UI experiences for every device category.
Galleries with UI inspiration across form factors
Spark your creativity by exploring our collection of Android app designs for a variety of popular categories of apps such as Social, Productivity, Health & Fitness, Shopping, and more. Get inspired by thoughtfully curated examples and discover design patterns to tackle common design challenges across form factors.
Dive into the nitty-gritty of Android design principles, learning how to implement high-quality Android designs and understanding app layout with Android system bars, navigation modes, theming, and more. Our comprehensive documentation helps ensure that your app's design adheres to the highest standards and directs you to the latest Material Design guidelines.
Equip yourself with an extensive selection of design tools, templates, and resources, specially tailored for Android development. Streamline your workflow and bring your app ideas to life more efficiently:
Whether you're an experienced designer or a developer looking to enhance your design skills, the Android Design Hub is here to support and guide you throughout your journey.
So what are you waiting for? Visit the Android UI design hub today, and start creating exceptional user interfaces that captivate your audience and leave a lasting impression. Happy designing! 🚀
Posted by Andrew Lewis - Software Engineer, Android Media Solutions
The creation of user-generated content is on the rise, and users are looking for more ways to personalize and add uniqueness to their creations. These creations are then shared to a vast network of devices, each with its own capabilities. The Jetpack Media3 1.0 release includes new functionality in the Transformer module for converting media files between formats, or transcoding, and applying editing operations. For example, you can trim a clip from a longer piece of media and apply effects to the video track to share over social media, or transcode media into a more efficient codec for upload to a server.
The overall goal of Transformer is to provide an easy-to-use, reliable, and performant API for transcoding and editing media, including support for customizing functionality, following the same API design principles as ExoPlayer. The library is supported on devices running Android 5.0 Lollipop (API 21) onwards and includes device-specific optimizations, giving developers a strong foundation to build on. This post gives an introduction to the new functionality and describes some of the many features we're planning for upcoming releases!
Getting Started
Most operations with Transformer will follow the same general pattern:
Create a Transformer and pass it your TransformationRequest
Apply additional effects and edits
Attach a listener to react to completion events
Start the transformation
Of course, depending on your desired transformations, you may not need every step. Here's an example of transcoding an input video to the H.265/HEVC video format and removing the audio track.
// Create a TransformationRequest and set the output format to H.265
val transformationRequest = TransformationRequest.Builder().setVideoMimeType(MimeTypes.VIDEO_H265).build()
// Create a Transformer
val transformer = Transformer.Builder(context)
.setTransformationRequest(transformationRequest) // Pass in TransformationRequest
.setRemoveAudio(true) // Remove audio track
.addListener(transformerListener) // transformerListener is an implementation of Transformer.Listener
.build()
// Start the transformation
val inputMediaItem = MediaItem.fromUri("path_to_input_file")
transformer.startTransformation(inputMediaItem, outputPath)
Check out our documentation to learn about further capabilities in the Transformer APIs. You can also find details about using Transformer to accurately convert 10-bit HDR content to 8-bit SDR in the "Dealing with color washout" blog post to ensure your video's colors remain as vibrant as possible in the case that your app or the device doesn't support HDR content.
Edits, effects, and extensions
Media3 includes a set of core video effects for simple edits, such as scaling, cropping, and color filters, which you can use with Transformer. For example, you can create a Presentation effect to scale the input to 480p resolution while maintaining the original aspect ratio, and apply it with setVideoEffects:
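// A minimal sketch; Presentation is part of androidx.media3.effect
val scaleEffect = Presentation.createForHeight(480) // keeps the original aspect ratio
val transformer = Transformer.Builder(context)
    .setVideoEffects(listOf(scaleEffect))
    .build()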
It's also possible to extend Transformer’s functionality by implementing custom effects that build on existing ones. Here is an example of subclassing MatrixTransformation, where we start zoomed in by 2 times, then zoom out gradually as the frame presentation time increases:
val zoomOutEffect = MatrixTransformation { presentationTimeUs ->
val transformationMatrix = Matrix()
val scale = 2 - min(1f, presentationTimeUs / 1_000_000f) // Video will zoom from 2x to 1x in the first second
transformationMatrix.postScale(/* sx= */ scale, /* sy= */ scale)
transformationMatrix // The calculated transformations will be applied each frame in turn
}
Transformer.Builder(context)
.setVideoEffects(listOf(zoomOutEffect))
.build()
Here's a screen recording that shows this effect being applied in the Transformer demo app:
For even more advanced use cases, you can wrap your own OpenGL code or other processing libraries in a custom GL texture processor and plug those into Transformer as custom effects. See the demo app for some examples of custom effects. The README also has instructions for trying a demo of MediaPipe integration with Transformer.
Coming soon
Transformer is actively under development but ready to use, so please give it a try and share your feedback! The Media3 development branch includes a sneak peek into several new features building on the 1.0 release described here, including support for tone-mapping HDR videos to SDR using OpenGL, previewing video effects using ExoPlayer.setVideoEffects, and custom audio processing. We are also working on support for editing multiple videos in more flexible compositions, with export from Transformer and playback through ExoPlayer, making Media3 an end-to-end solution for transforming media.
We hope you'll find Transformer an easy-to-use and powerful tool for implementing fantastic media editing experiences on Android! You can send us feature requests and bug reports in the Media3 GitHub issue tracker, and follow this blog to get updates on new features. Stay tuned for our upcoming talk “High quality Android media experiences” at Google I/O.