Posted by Nick Butcher, Developer Relations Engineer
Our goal is to make developing beautiful and engaging Android apps as fast and easy as possible. We want to take on the complex parts of building apps so that you can focus on your app’s features and deliver high quality experiences to your users.
We call this approach Modern Android Development (or MAD for short!) and deliver it through a suite of tools, libraries and guidance. At Google I/O we announced a number of updates and additions to our MAD offerings; here’s a recap of the three largest announcements.
#1 Compose 1.2 Beta
Jetpack Compose 1.2 has reached its first Beta, which means the API is stable. We continue to build out our roadmap, bringing the APIs you need to support more advanced use cases like downloadable fonts, LazyGrids, window insets, and nested scrolling interop, plus more tooling support with features like Live Edit, recomposition counts in the Layout Inspector, and Animation Preview. Learn more about how developers like Airbnb are improving their productivity with Jetpack Compose, and check out what else is new in Compose.
#2 Baseline Profiles
Baseline profiles allow you to embed a profile that guides the Android Runtime about which code paths should be pre-compiled rather than interpreted, which can dramatically speed up critical user journeys like app startup. This is especially significant when using unbundled libraries like Jetpack Compose, which don’t benefit from optimizations in platform code.
Many Jetpack libraries (including Jetpack Compose) already ship baseline profiles, but you can learn how to add them to your own apps and libraries to boost their performance. We've seen up to 40% faster app startup times thanks to adding baseline profiles alone, no other code changes required!
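If you want to try this in your own app, a minimal setup sketch looks something like this (the version number is illustrative): generate a baseline-prof.txt with the Macrobenchmark library, place it in your app module, and add the profile installer dependency so the profile is compiled on device.

// app/build.gradle.kts — a minimal sketch; the version number is illustrative.
dependencies {
    // Compiles the packaged profile on device at install time or first run.
    implementation("androidx.profileinstaller:profileinstaller:1.2.0")
}
// Place the generated profile at app/src/main/baseline-prof.txt so it is
// packaged into your APK/AAB and used for ahead-of-time compilation.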
#3 Live Edit
With Live Edit you can edit composables and view those changes in real time, on the Compose Preview or on physical devices or emulators, enabling rapid iteration. Live Edit is an opt-in experimental feature in Android Studio Electric Eel, with a number of limitations. Please try it out and provide your feedback.
Those were the top three announcements about Modern Android Development at Google I/O. To learn more, check out the full playlist of talks and workshops.
With the recent launch of the Play Integrity API, more developers are now taking action to protect their games and apps from potentially risky and fraudulent interactions.
In addition to useful signals on the integrity of the app, the integrity of the device, and licensing information, the Play Integrity API offers a simple yet very useful field called the “nonce” that, when used correctly, can further strengthen the API’s existing protections and mitigate certain types of attacks, such as person-in-the-middle (PITM) tampering attacks and replay attacks.
In this blog post, we will take a deeper look at what the nonce is, how it works, and how it can be used to further protect your app.
What is a nonce?
In cryptography and security engineering, a nonce (number used once) is a number that is used only once in a secure communication. There are many applications for nonces, such as in authentication, encryption, and hashing.
In the Play Integrity API, the nonce is an opaque base-64 encoded binary blob that you set before invoking the API integrity check, and it will be returned as-is inside the signed response of the API. Depending on how you create and validate the nonce, it is possible to leverage it to further strengthen the existing protections the Play Integrity API offers, as well as mitigate certain types of attacks, such as person-in-the-middle (PITM) tampering attacks, and replay attacks.
Apart from returning the nonce as-is in the signed response, the Play Integrity API doesn’t perform any processing of the actual nonce data, so as long as it is a valid base-64 value, you can set any arbitrary value. That said, in order to digitally sign the response, the nonce is sent to Google’s servers, so it is very important not to set the nonce to any type of personally identifiable information (PII), such as the user’s name, phone or email address.
Kotlin:
val nonce: String = ...

// Create an instance of a manager.
val integrityManager =
    IntegrityManagerFactory.create(applicationContext)

// Request the integrity token by providing a nonce.
val integrityTokenResponse: Task<IntegrityTokenResponse> =
    integrityManager.requestIntegrityToken(
        IntegrityTokenRequest.builder()
            .setNonce(nonce) // Set the nonce
            .build())
Java:
String nonce = ...

// Create an instance of a manager.
IntegrityManager integrityManager =
    IntegrityManagerFactory.create(getApplicationContext());

// Request the integrity token by providing a nonce.
Task<IntegrityTokenResponse> integrityTokenResponse =
    integrityManager
        .requestIntegrityToken(
            IntegrityTokenRequest.builder()
                .setNonce(nonce) // Set the nonce
                .build());
Unity:
string nonce = ...

// Create an instance of a manager.
var integrityManager = new IntegrityManager();

// Request the integrity token by providing a nonce.
var tokenRequest = new IntegrityTokenRequest(nonce);
var requestIntegrityTokenOperation =
    integrityManager.RequestIntegrityToken(tokenRequest);
Native:
/// Create an IntegrityTokenRequest object.
const char* nonce = ...
IntegrityTokenRequest* request;
IntegrityTokenRequest_create(&request);
IntegrityTokenRequest_setNonce(request, nonce); // Set the nonce

IntegrityTokenResponse* response;
IntegrityErrorCode error_code =
    IntegrityManager_requestIntegrityToken(request, &response);
The value of the nonce field should exactly match the one you previously passed to the API. Furthermore, since the nonce is inside the cryptographically signed response of the Play Integrity API, it is not feasible to alter its value after the response is received. It is by leveraging these properties that it is possible to use the nonce to further protect your app.
Protecting high-value operations
Let us consider the scenario in which a malicious user is interacting with an online game that reports the player score to the game server. In this case, the device is not compromised, but the user can view and modify the network data flow between the game and the server with the help of a proxy server or a VPN, so the malicious user can report a higher score, while the real score is much lower.
Simply calling the Play Integrity API is not sufficient to protect the app in this case: the device is not compromised, and the app is legitimate, so all the checks done by the Play Integrity API will pass.
However, it is possible to leverage the nonce of the Play Integrity API to protect this particular high-value operation of reporting the game score, by encoding the value of the operation inside the nonce. The implementation is as follows:
1. The user initiates the high-value action.
2. Your app prepares a message it wants to protect, for example, in JSON format.
3. Your app calculates a cryptographic hash of the message it wants to protect, for example with the SHA-256 or SHA-3-256 hashing algorithms.
4. Your app calls the Play Integrity API, calling setNonce() to set the nonce field to the cryptographic hash calculated in the previous step.
5. Your app sends both the message it wants to protect and the signed result of the Play Integrity API to your server.
6. Your app server verifies that the cryptographic hash of the message it received matches the value of the nonce field in the signed result, and rejects any results that don't match.
The following sequence diagram illustrates these steps:
As long as the original message to protect is sent along with the signed result, and both the server and client use the exact same mechanism for calculating the nonce, this offers a strong guarantee that the message has not been tampered with.
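In code, steps 2 through 4 on the client might look like the following Kotlin sketch; the message format is hypothetical, and the integrity call itself is the same as in the snippets above:

import android.util.Base64
import java.security.MessageDigest

// Step 2: a hypothetical message to protect.
val message = """{"event":"submit_score","score":177}"""

// Step 3: hash the message (SHA-256 here).
val hash = MessageDigest.getInstance("SHA-256")
    .digest(message.toByteArray(Charsets.UTF_8))

// Step 4: the nonce must be valid base-64; URL-safe, unwrapped encoding works well.
val nonce = Base64.encodeToString(
    hash, Base64.URL_SAFE or Base64.NO_WRAP or Base64.NO_PADDING)

// Pass `nonce` to IntegrityTokenRequest.builder().setNonce(nonce) as shown
// earlier, then send both `message` and the signed result to your server.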
Notice that in this scenario, the security model works under the assumption that the attack is happening in the network, not the device or the app, so it is particularly important to also verify the device and app integrity signals that the Play Integrity API offers as well.
Preventing replay attacks
Let us consider another scenario in which a malicious user is trying to interact with a server-client app protected by the Play Integrity API, but wants to do so with a compromised device, in a way so the server doesn’t detect this.
To do so, the attacker first uses the app with a legitimate device and gathers the signed response of the Play Integrity API. The attacker then uses the app with the compromised device, intercepts the Play Integrity API call, and, instead of performing the integrity checks, simply returns the previously recorded signed response.
Since the signed response has not been altered in any way, the digital signature will look okay, and the app server may be fooled into thinking it is communicating with a legitimate device. This is called a replay attack.
The first line of defense against such an attack is to verify the timestampMillis field in the signed response. This field contains the timestamp when the response was created, and can be useful in detecting suspiciously old responses, even when the digital signature is verified as authentic.
That said, it is also possible to leverage the nonce in the Play Integrity API to assign a unique value to each response, and then verify that the response matches the previously set unique value. The implementation is as follows:
1. Your app server creates a globally unique value in a way that malicious users cannot predict, for example a cryptographically secure random number 128 bits or larger.
2. Your app calls the Play Integrity API, setting the nonce field to the unique value received from your app server.
3. Your app sends the signed result of the Play Integrity API to your server.
4. Your server verifies that the nonce field in the signed result matches the unique value it previously generated, and rejects any results that don't match.
The following sequence diagram illustrates these steps:
With this implementation, each time the server asks the app to call the Play Integrity API, it does so with a different globally unique value, so as long as this value cannot be predicted by the attacker, it is not possible to reuse a previous response, as the nonce won’t match the expected value.
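On the server, generating such a value is straightforward. Here is a JVM Kotlin sketch, assuming the value is stored per session so it can be checked, and invalidated, at verification time:

import java.security.SecureRandom
import java.util.Base64

// A 128-bit, cryptographically secure, globally unique value.
fun generateNonce(): String {
    val bytes = ByteArray(16) // 128 bits
    SecureRandom().nextBytes(bytes)
    return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes)
}
// Persist the value (e.g. keyed to the user session), compare it against the
// nonce field of the signed result, and reject or expire it after first use.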
Combining both protections
While the two mechanisms described above work in very different ways, if an app requires both protections at the same time, it is possible to combine them in a single Play Integrity API call, for example by encoding both values into a single, larger base-64 nonce. An implementation that combines both approaches is as follows:
1. The user initiates the high-value action.
2. Your app asks the server for a unique value to identify the request.
3. Your app server generates a globally unique value in a way that malicious users cannot predict. For example, you may use a cryptographically secure random number generator to create such a value. We recommend creating values 128 bits or larger.
4. Your app server sends the globally unique value to the app.
5. Your app prepares a message it wants to protect, for example, in JSON format.
6. Your app calculates a cryptographic hash of the message it wants to protect, for example with the SHA-256 or SHA-3-256 hashing algorithms.
7. Your app creates a string by appending the unique value received from your app server and the hash of the message it wants to protect.
8. Your app calls the Play Integrity API, calling setNonce() to set the nonce field to the string created in the previous step.
9. Your app sends both the message it wants to protect and the signed result of the Play Integrity API to your server.
10. Your app server splits the value of the nonce field, verifies that the cryptographic hash of the message and the unique value it previously generated match the expected values, and rejects any results that don't match.
The following sequence diagram illustrates these steps:
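A Kotlin sketch of the client-side nonce construction (steps 6 through 8); the fixed 16-byte server value and the byte layout are assumptions that your server-side splitting logic would have to mirror:

import android.util.Base64
import java.security.MessageDigest

// `serverValue` is the 16-byte unique value received from your app server.
fun buildCombinedNonce(serverValue: ByteArray, message: String): String {
    // Step 6: hash the message to protect.
    val hash = MessageDigest.getInstance("SHA-256")
        .digest(message.toByteArray(Charsets.UTF_8))
    // Step 7: append the hash to the unique value. The server splits the
    // decoded bytes the same way: 16-byte value, then 32-byte hash.
    return Base64.encodeToString(
        serverValue + hash,
        Base64.URL_SAFE or Base64.NO_WRAP or Base64.NO_PADDING)
}
// Step 8: pass the result to setNonce() on the IntegrityTokenRequest.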
These are some examples of ways you can use the nonce to further protect your app against malicious users. If your app handles sensitive data, or is vulnerable against abuse, we hope you consider taking action to mitigate these threats with the help of the Play Integrity API.
To learn more about using the Play Integrity API and to get started, visit the documentation at g.co/play/integrityapi.
How Compose enables Airbnb to create better host and guest experiences
Since 2007, Airbnb has grown to connect more than 4 million hosts with more than 1 billion guests across the globe. One of the reasons behind the app’s success is that its developers aim to achieve engineering excellence by focusing on two main principles: using technology that sparks innovative development and empowering the engineers behind the work.
Jetpack Compose, Android’s modern UI-building toolkit, directly supports both of Airbnb’s development principles. Compose provided a solid foundation for adaptable, quality engineering and reduced boilerplate code, so developers could focus on delivering a great user experience — and advance their two-fold pursuit of engineering excellence.
Airbnb started testing Compose in 2020 when it was in developer preview. As an early adopter, the Airbnb team was eager to use the various new features and simplify their workflow. Now, having gained confidence using Compose in production, Airbnb engineers continue to be satisfied with how it has improved their development process.
Equipping engineers for success
Compose’s deterministic testing helped ensure Airbnb’s engineers had tight control over the UI tests they ran and eliminated common flakiness, thereby strengthening their confidence in the quality of every part of their app and the user experiences they were creating. Engineers can now also use Compose to test animations they previously couldn't.
Similarly, Airbnb developers used Compose to add automated screenshot tests to their codebase. Because they didn’t need to write the code for screenshot testing, engineers could go straight into using it to catch bugs and regressions. This gave them more time to review and guarantee feature functionality and UI appearance across a variety of devices.
Compose is great to use alongside Views. This interoperability made it easy for Airbnb engineers to onboard and test the new UI toolkit at their own pace, so they were able to experience the benefits of Compose without having to migrate entire features.
These engineering improvements gave them the solid technical foundations they needed to serve users in fresh and improved ways.
Engineering efficiencies improve user experiences
Airbnb keeps hosts and guests at the heart of their decisions. The engineering team was excited to adopt Compose when they learned about how it would enable them to more easily and efficiently produce UI, resulting in better experiences for their end users.
Because Compose made Airbnb’s features require significantly less code to write and manage, the Airbnb team boosted their efficiency. All of this meant the team could focus its energy on executing the complex tasks involved in developing the innovative features that could best serve users.
Because their features now require less code, the Airbnb team will be able to slow the growth of their app size in the long run. Providing a smaller app is important to Airbnb as an organization with users across the globe that looks to ensure all hosts and guests can easily download and access their app — especially those with older devices or logging on from countries with high data costs.
Using Compose’s engineering enhancements, the Airbnb team was able to put user needs first.
Improve developer productivity with Compose
Compose simplified UI development to allow Airbnb engineers the freedom to focus on more dynamic and innovative features that benefit the app’s hosts and guests.
Learn how you can improve your team’s productivity with Jetpack Compose.
Posted by Paris Hsu, Product & Design, Android and
Don Turner, Developer Relations Engineer, Android
The Now in Android app is now on GitHub!
For two years, 'Now in Android' has been a popular blog and YouTube series, providing you with the latest and greatest developer news from the Android team. Starting today, you can check out the alpha version of the Now in Android app on GitHub!
The app has two goals:
Firstly, it showcases best practices, opinionated designs, and solutions to complex real-world problems which other sample apps don’t handle. It does so with an open source implementation of a real world app.
Secondly, it helps you (the developer) keep up to date with the areas of Android development which interest you most. It is a working app planned for publication on the Play Store.
Remote/local data synchronization scheduled using WorkManager with exponential backoff
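A rough Kotlin sketch of that kind of scheduling; SyncWorker here is a stand-in for illustration, not the app's actual worker:

import android.content.Context
import androidx.work.BackoffPolicy
import androidx.work.Constraints
import androidx.work.CoroutineWorker
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkRequest
import androidx.work.WorkerParameters
import java.util.concurrent.TimeUnit

class SyncWorker(context: Context, params: WorkerParameters) :
    CoroutineWorker(context, params) {
    override suspend fun doWork(): Result = Result.success() // sync work elided
}

val syncRequest = OneTimeWorkRequestBuilder<SyncWorker>()
    .setConstraints(
        Constraints.Builder().setRequiredNetworkType(NetworkType.CONNECTED).build())
    // Retry with exponential backoff if the sync fails.
    .setBackoffCriteria(
        BackoffPolicy.EXPONENTIAL,
        WorkRequest.MIN_BACKOFF_MILLIS,
        TimeUnit.MILLISECONDS)
    .build()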
As well as these features, we are also documenting the learning journeys we took to certain decisions with the app's design and implementation. Check out our first journey on the app's Architecture here.
The Now in Android screens adapt based on device screen size
Since this is an alpha release, we expect that there will be bugs and missing features, and we would greatly appreciate your feedback. We have some exciting features planned, such as user authentication and loading data from a real backend. We can’t wait for you to check out the app and let us know what you think!
Finally, if you want to learn about the tools we used to build the app and how we target multiple screen sizes, check out these talks from this year's Google I/O:
Posted by Jennifer Chui, Technical Program Manager and Rod Lopez, Product Manager
At Google, our work in cars has always been guided by our vision of creating safe and seamless connected experiences. This work would not be possible without developers like you. We’re excited to share some of our combined accomplishments from this past year, and introduce new updates that will make it easier for you to provide users with an even better experience in the car.
Android Auto continues to grow and scale, with compatible vehicles now numbering over 150 million worldwide. An increasing number are also wirelessly compatible, and with the newly introduced Motorola MA1 adapter, even more drivers now have access to a wireless experience. In addition, our new design for Android Auto brings split-screen functionality to every screen, keeping navigation and media front and center while also providing room for prominent notification widgets.
Android Automotive OS with Google built-in also has exciting updates. Beyond the continued expansion of carmakers bringing more car models to market, we’ve also been hard at work enabling more parked experiences to take advantage of the large screens that many AAOS cars offer. From more video streaming apps like Epix Now and Tubi to future features like browsing and cast, there’s much to look forward to. And since minimal effort is required to translate your large-screen tablet apps into a parked car experience, it’s now easier than ever to reach users in the car.
We know that developing for cars can be complex, which is why we’re focused on making developing across Android for Cars as easy as possible. We’ve seen strong momentum with our Car App Library with over 200 apps published to date, and beyond enriching the navigation feature set with version 1.3, we’re also excited to share that all developers can now publish apps in supported categories directly to production for both Android Auto and Android Automotive OS. We’ve also created new templates and expanded our supported app categories, adding driver apps like Lyft to the navigation category, and replacing the parking and charging categories with a comprehensive point of interest (POI) category to include apps like MochiMochi and Fuelio.
We’re also introducing several new features to help you build more powerful media apps on Android Auto. Media recommendations working side by side with Google Assistant helps users easily discover and quickly play relevant content based on their preferred music provider at the click of a button. To surface recommendations from your app, integrate with this API.
For long form content such as podcasts and audiobooks, you can now introduce a progress bar that shows how much of the content the user has previously listened to, and with our new single item styling API, you can now assign content items individually as either list or grid as opposed to categorically, to easily combine them in the same content space.
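For example, marking an episode as partially played might look like this sketch, using the completion-status extras from the androidx.media MediaConstants; the media ID and percentage are illustrative:

import android.os.Bundle
import android.support.v4.media.MediaDescriptionCompat
import androidx.media.utils.MediaConstants

val extras = Bundle().apply {
    putInt(
        MediaConstants.DESCRIPTION_EXTRAS_KEY_COMPLETION_STATUS,
        MediaConstants.DESCRIPTION_EXTRAS_VALUE_COMPLETION_STATUS_PARTIALLY_PLAYED)
    // Drives the progress bar: a value between 0.0 and 1.0.
    putDouble(MediaConstants.DESCRIPTION_EXTRAS_KEY_COMPLETION_PERCENTAGE, 0.25)
}
val description = MediaDescriptionCompat.Builder()
    .setMediaId("podcast-episode-42") // illustrative ID
    .setTitle("Episode 42")
    .setExtras(extras)
    .build()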
We’re grateful to have you on the journey with us as we seek to create safer, more seamless connected experiences in cars. Be sure to check out our Google I/O technical session above, and as always, you can get help from the developer community at Stack Overflow using the android-automotive and android-auto tags. We can’t wait to see what you build next, and where the road takes you.
Shobana Radhakrishnan, Senior Director of Engineering - Google TV
Paul Lammertsma, Developer Relations Engineer
Today, there is more entertainment content available than ever before. In fact, our research shows a third of U.S. households now watch more than 25 hours of TV every week. As the role of TV continues to evolve, it’s our goal to build a tailored TV experience that gives users easy access to the entertainment they love.
We’re excited about the future of Android TV OS, now with over 110 million monthly active devices, including millions of Google TVs. Android TV and Google TV are available from over 300 partners worldwide, including 7 of the 10 largest smart TV OEMs and over 170 pay TV operators. And thanks to the hard work of our developer community, there are more than 10,000 apps available on TV, with more being added every day.
Since last year’s I/O, we’ve continued our commitment to enable you to build better and more engaging experiences on Android TV OS. In addition to platform updates, new features, like expanded integrations with the Live tab, offer opportunities for users to better engage with your content. And if you haven’t begun using the Watch Next API, take a moment to learn how to add it to your app to make your content more discoverable and accessible.
Today, we are introducing new features and tools on Android 13 that focus on overall performance & quality, improve accessibility, and enable multitasking.
Performance & quality: To help you build for the next generation of TVs, we’re introducing new APIs to help you better detect a user’s settings and give them the best experience for their device. AudioManager allows your app to anticipate audio routes and precisely understand which playback modes are available. Integrating your app correctly with MediaSession allows Android TV to react to HDMI state changes in order to save power and signal that content should be paused.
Accessibility: To improve how users interact with their TV, we’ve added support for different keyboard layouts in the InputDevice API. Game developers can also reference keys by their physical location to support different layouts of physical keyboards, such as QWERTZ and AZERTY keyboards. A new system-wide accessibility preference also allows users to enable audio descriptions across apps.
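A sketch of the physical-location lookup for game controls (API 33+), so WASD-style input keeps working on QWERTZ and AZERTY layouts:

import android.view.InputDevice
import android.view.KeyEvent

// Returns the key code produced by the key at the physical location where
// 'W' sits on a QWERTY keyboard (e.g. 'Z' on an AZERTY keyboard).
fun forwardKeyFor(device: InputDevice): Int =
    device.getKeyCodeForKeyLocation(KeyEvent.KEYCODE_W)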
Multitasking: TVs are now used for more than just watching media content. In fact, we often see users taking calls or monitoring cameras in a smart home. To help with multitasking, Android 13 brings picture-in-picture support to TV using the standard APIs from core Android. Picture in picture on TV supports an expanded mode to show more videos from a group call, a docked mode to avoid overlaying content on other apps, and a keep-clear API to prevent overlays from concealing important content in full-screen apps.
Android 13 Beta for TV is available now, allowing you to test your apps and provide feedback on the latest release. Thank you for your continued support of Android TV OS. We can’t wait to see what amazing and innovative things you continue to build for the big screen.
Posted by Maru Ahues Bouza, Director of Android Developer Relations
There aren’t many platforms where you can build something and instantly reach billions of people around the world, not only on their phones—but their TVs, cars, tablets, watches, and more. Today, at Google I/O, we covered a number of ways Android helps you make the most of this opportunity, and how Modern Android Development brings as much commonality as possible, to make it faster and easier for you to create experiences that tailor to all the different screens we use in our daily lives.
#1: Jetpack Compose 1.2 Beta, with support for more advanced use cases
Android’s modern UI toolkit, Jetpack Compose, continues to bring the APIs you need to support more advanced use cases like downloadable fonts, LazyGrids, window insets, and nested scrolling interop, plus more tooling support with features like Live Edit, recomposition debugging, and Animation Preview. Check out the blog post for more details.
#2: Android Studio: introducing Live Edit
Get more done faster with Android Studio Dolphin Beta and Electric Eel Canary! Android Studio Dolphin includes new features and improvements for Jetpack Compose and Wear OS development and an updated Logcat experience. Android Studio Electric Eel comes with integrations with the new Google Play SDK Index and Firebase Crashlytics. It also offers a new resizable emulator to test your app on large screens and the new Live Edit feature to immediately deploy code changes made within composable functions. Watch the What’s new in Android Development Tools session and read the Android Studio I/O blog post here.
#3: Baseline Profiles - speed up your app load time!
The speed of your app right after installation can make a big difference on user retention. To improve that experience, we created Baseline Profiles. Baseline Profiles allow apps and libraries to provide the Android runtime with metadata about code path usage, which it uses to prioritize ahead-of-time compilation. We've seen up to 30% faster app startup times thanks to adding baseline profiles alone, no other code changes required! We’re already using baseline profiles within Jetpack: we’ve added baselines to popular libraries like Fragments and Compose – to help provide a better end-user experience. Watch the What’s new in app performance talk, and read the Jetpack blog post here.
BETTER TOGETHER
#4: Going big on Android tablets
Google is all in on tablets. Since last I/O we launched Android 12L, a release focused on large screen optimizations, and Android 13 includes all those improvements and more. We also announced the Pixel tablet, coming next year. With amazing new hardware, an updated operating system and Google apps, improved guidelines and libraries, and exciting changes to the Play Store, there has never been a better time to review your apps and get them ready for large screens and Android 13. That’s why at this year’s I/O we have four talks and a workshop to take you from design to implementation for large screens.
#5: Wear OS: Compose + more!
With the latest updates to Wear OS, you can rethink what is possible when developing for wearables. Jetpack Compose for Wear OS is now in beta, so you can create beautiful Wear OS apps with fewer lines of code. Health Services is also now in beta, bringing a ton of innovation to the health and fitness developer community. And last, but certainly not least, we announced the launch of The Google Pixel Watch - coming this Fall - which brings together the best of Fitbit and Wear OS. You can learn more about all the most exciting updates for wearables by watching the Wear OS technical session and reading our Jetpack Compose for Wear OS announcement.
#6: Introducing Health Connect
Health Connect is a new platform built in close collaboration between Google and Samsung, that simplifies connectivity between apps making it easier to reach more users with less work, so you can securely access and share user health and fitness data across apps and devices. Today, we’re opening up access to Health Connect through Jetpack Health—read our announcement or watch the I/O session to find out more!
#7: Android for Cars & Android TV OS
Android for Cars and Android TV OS continue to grow in the US and abroad. As more users drive connected or tune-in, we’re introducing new features to make it even easier to develop apps for cars and TV this year. Catch the “What’s new with Android for Cars” and “What's new with Google TV and Android TV” sessions on Day 2 (May 12th) at 9:00 AM PT to learn more.
#8: Add Voice Across Devices
We’re making it easier for users to access your apps via voice across devices with Google Assistant by expanding developer access to the Shortcuts API for Android for Cars, with support for Wear OS apps coming later this year. We’re also making it easier to build those experiences with Smarter Custom Intents, enabling Assistant to better detect broader instances of user queries through ML, without any heavy NLU training lift. Additionally, we’re introducing improvements that drive discovery of your apps via voice on mobile, first through Brandless Queries, which drive app usage even when the user hasn’t explicitly said your app’s name, and App Install Suggestions, which appear if your app isn’t installed yet; these are automatically enabled for existing App Actions today.
AND THE LATEST FROM ANDROID, PLAY, AND MORE:
#9: What’s new in Play!
Get the latest updates from Google Play, including new ways Play can help you grow your business. Highlights include the ability to deep-link and create up to 50 custom listings; our LiveOps beta, which will allow more developers to submit content to be considered for featuring on the Play Store; and even more flexibility in selling subscriptions. Learn about these updates and more in our blog post.
#10: Google Play SDK Index
Evaluate if an SDK is right for your app with the new Google Play SDK index. This new public portal lists over 100 of the most widely used commercial SDKs and information like which app permissions the SDK requests, statistics on the apps that use them, and which version of the SDK is most popular. Learn more on our blog post and watch “What’s new in Google Play” and “What’s new in Android development tools” sessions.
#11: Privacy Sandbox on Android
Privacy Sandbox on Android provides a path for new advertising solutions to improve user privacy without putting access to free content and services at risk. We recently released the first Privacy Sandbox on Android Developer Preview so you can get an early look at the SDK Runtime and Topics API. You can conduct preliminary testing of these new technologies, evaluate how you might adopt them for your solutions, and share feedback with us.
#12: The new Google Wallet API
The new Google Wallet gives users fast and secure access to everyday essentials across Android and Wear OS. We’re enhancing the Google Wallet API, previously called Google Pay Passes API, to support generic passes, grouping and mixing passes together, for example grouping an event ticket with a voucher, and launching a new Android SDK which allows you to save passes directly from your app without a backend integration. To learn more, read the full blog post, watch the session, or read the docs at developers.google.com/wallet.
#13: And of course, Android 13!
The second Beta of Android 13 is available today! Get your apps ready for the latest features for privacy and security, like the new notification permission, the privacy-protecting photo picker, and improved permissions for pairing with nearby devices and accessing media files. Enhance your app with features like app-specific language support and themed app icons. Build with modern standards like HDR video and Bluetooth LE Audio. You can get started by enrolling your Pixel device here, or try Android 13 Beta on select phones, tablets, and foldables from our partners - visit developer.android.com/13 to learn more.
That’s just a snapshot of some of the highlights for Android developers at this year’s Google I/O. Be sure to watch the What’s New in Android talk to get the landscape on the full Android technical track at Google I/O, which includes 26 talks and 4 workshops. Enjoy!
Posted by Juan Sebastian Oviedo, Senior Product Manager
Today at Google I/O 2022, we announced an exciting set of new features available in Android Studio Dolphin Beta and Electric Eel Canary, both available for download. You told us that you want to be more productive while creating Android apps, so we focused on improvements that make the development experience faster and more informative.
In the Android Studio Dolphin release you will find the following features and improvements that you can start using in the Beta channel, which is close to stable quality:
View Compose animations and coordinate them with Animation Preview.
Define annotation classes to easily include and apply multiple Compose preview definitions at once.
Track recomposition counts for your composables in the Layout Inspector.
Easily pair and control Wear OS emulators and launch tiles, watch faces, and complications directly from Android Studio.
Diagnose app issues faster with Logcat V2.
For even more cutting edge features, you can take a sneak peek at the Android Studio Electric Eel release in the Canary channel:
View dependency insights from the new Google Play SDK Index, a public portal with information about popular dependencies/SDKs. If a specific version of a library has been marked as “outdated” by its author, a corresponding Lint warning will appear when viewing that dependency definition. This enables you to discover and update dependency issues during development instead of later when you go to publish your app on the Play Console. You can learn more about this new tool here.
See Firebase Crashlytics reports directly in Android Studio using the new App Quality Insights window. The App Quality Insights window allows you to navigate from stack traces into your code with a few simple clicks. The IDE also highlights lines of code in the editor as you're editing files containing recent crashes. This saves you time by presenting actionable crash information from users directly in the IDE, so you can focus on providing your users with the best app experience.
Test your app’s UI on representative reference devices using a single resizable Android Emulator. Instead of having to set up emulators specifically for tablets, phones, or desktops, you can use a single resizable emulator and change its configuration without needing to re-deploy to test your app.
With the experimental Live Edit feature, make code changes and have those immediately reflected in the Compose Preview and running app on an emulator or physical device.
These features will be promoted to more stable channels once we have your feedback and make improvements, so please try them out.
To see all the new features in action, watch the What’s new in Android Developer Tools session.
Below is a list of key new features and improvements in Android Studio Dolphin:
Jetpack Compose
Compose Animation Coordination - See all your animations at once and coordinate them in Animation Preview. You can also freeze a specific animation.
Compose Animation Coordination
Compose Multipreview Annotations - Define an annotation class that includes multiple Preview definitions and use that new annotation to generate those previews at once. Use this new annotation to preview multiple devices, fonts, and themes at the same time — without repeating those definitions for every single composable.
Multipreview annotations
Compose Recomposition Counts in Layout Inspector - View recomposition counts for a Compose app in the Layout Inspector. Recomposition counts and skip counts can optionally be shown in the Component Tree and Attributes panels. Learn more.
Compose Recomposition Counts
Wear OS
Wear OS Emulator Pairing Assistant - Using the Wear OS Emulator Pairing Assistant, you can now see Wear Devices in the Device Manager, and pair multiple watch emulators with a single phone. You also don't have to re-pair devices as often because Android Studio remembers pairings after being closed.
Wear OS Emulator Pairing Assistant
Wear OS Emulator Side Toolbar - Use Wear-specific emulator buttons that resemble and simulate physical buttons, including main buttons, palm buttons, and tilt buttons.
Wear OS Emulator Side Toolbar
Wear OS Direct Surface Launch - Create Run/Debug configurations for Wear OS tiles, watch faces, and complications, and launch them directly from Android Studio.
New Wear OS Run/Debug configuration types
Development tools
Logcat V2 - Rebuilt from the ground up, the new Logcat makes it easier to parse, query, and track logs. Logcat V2 includes new formatting that makes it easier to scan useful information, new split views to allow you to track more at a glance, and a brand new powerful syntax for filtering logs. Learn more.
Logcat V2
Gradle Managed Devices - Describe the virtual devices you need for your automated tests as a part of your build, and let Gradle take care of the rest. From SDK downloading, to device provisioning and setup, to test execution and teardown, Gradle manages the lifecycle of your virtual devices during instrumentation tests. Gradle is also able to apply intelligent functionality, such as snapshot management, test caching, and test sharding to ensure your tests run efficiently, quickly, and consistently. Gradle Managed Devices also introduces a completely new type of device, called the Automated Test Device, which optimizes devices for automated tests, resulting in significant reduction in CPU and memory usage during test execution. Learn more.
Gradle Managed Devices
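A sketch of the Gradle DSL for declaring a managed device (Kotlin script, AGP 7.2-era; treat the exact names as assumptions for your AGP version):

// Module build.gradle.kts
android {
    testOptions {
        managedDevices {
            devices {
                create<com.android.build.api.dsl.ManagedVirtualDevice>("pixel2api30") {
                    device = "Pixel 2"
                    apiLevel = 30
                    systemImageSource = "aosp-atd" // Automated Test Device image
                }
            }
        }
    }
}
// Then run instrumented tests with: ./gradlew pixel2api30DebugAndroidTest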
Below is a list of key new features and improvements in Android Studio Electric Eel:
Jetpack Compose
Live Edit - Make code changes to Composables in Android Studio and see those changes reflected immediately in the Compose Preview and your emulator or physical device. Live Edit is an opt-in feature that you can enable in Android Studio settings. Learn more.
Live Edit on emulator
Live Edit on Preview
Google Play and Firebase
SDK Insights - Get Lint warnings for SDKs/libraries that have been marked as outdated by their authors in the Google Play SDK Index. Update outdated dependency versions during development to avoid issues when your app is submitted to the Play Console.
Google Play SDK Index insights
App Quality Insights from Firebase Crashlytics - Discover, investigate, and resolve issues reported by Crashlytics in Android Studio and within the context of your local source code. This integration helps reduce friction when navigating from crashes to code (and from code to crash), and surfaces important contextual data about each crash to help you reproduce issues locally.
App Quality Insights from Firebase Crashlytics
Large Screens
Resizable Emulator - Rapidly toggle between representative reference devices to quickly test various application layout states with a single running emulator instance. You can create these emulators by selecting the “Resizable” type in the Device Manager’s “Create device” flow.
Resizable Emulator
Visual Linting - Discover and fix your layout issues across different devices (for example, when a button is hidden out of bounds on a larger tablet) by opening the Layout Validation panel. We automatically run your layout to check for Visual Lint issues across different screen sizes.
Visual Linting
Development Tools
Emulated Bluetooth - You can now discover and connect two phone emulators using virtual Bluetooth. This feature is available on Android Emulator 31.3.8 and higher with system image T (API 33). We plan to add more support for creating sample virtual peripherals, such as beacons and heart rate monitors, and integration testing for your Bluetooth features!
Pairing two Android Emulators using Emulated Bluetooth
Device Mirroring - Minimize the number of interruptions when developing by streaming your device display directly to Android Studio. Device Mirroring gives you the ability to interact with a physical device using the Running Devices window in Studio. To enable this feature, go to Preferences > Experimental and select Device Mirroring. Once enabled, plug in your device and open the Running Devices window to begin streaming your display.
Device Mirroring
To recap, these new features and improvements are available in the Android Studio Dolphin Beta, near stable quality:
Jetpack Compose
Compose Animation Coordination
Compose Multipreview Annotations
Compose Recomposition Counts in Layout Inspector
Wear OS
Wear OS Emulator Pairing Assistant
Wear OS Emulator Side Toolbar
Wear OS Direct Surface Launch
Development tools
Logcat V2
Gradle Managed Devices
These brand new features and improvements are available in the Android Studio Electric Eel Canary:
Jetpack Compose
Live Edit
Google Play and Firebase
SDK Insights
App Quality Insights from Firebase Crashlytics
Large Screens
Resizable Emulator
Visual Linting
Development tools
Emulated Bluetooth
Device Mirroring
Getting started
Android Studio Dolphin Beta and Electric Eel Canary are both available for download. You can install them side by side with the current stable version of Android Studio following these instructions. The Beta release is near stable release quality, but bugs might still exist, so, if you do find an issue, please let us know so we can work to fix it. Likewise, if you find an issue or have feedback for the features in the Canary release, please let us know.
We really appreciate your feedback on issues and feature requests. You can follow us—the Android Studio development team—on Twitter and on Medium.
Posted by Amanda Alexander, Product Manager, Android
Android Jetpack is a key pillar of Modern Android Development. It is a suite of over 100 libraries, tools and guidance to help developers follow best practices, reduce boilerplate code, and write code that works consistently across Android versions and devices so that you can focus on building unique features for your app.
Most apps in Google Play use Jetpack for app architecture. Today, over 90% of the top 1000 apps use Jetpack.
Below we’ll cover updates in three major areas of Jetpack:
Architecture Libraries and Guidance
Performance Optimization of Applications
User Interface Libraries and Guidance
And then conclude with some additional key updates.
1. Architecture Libraries and Guidance
App architecture libraries and components ensure that apps are robust, testable, and maintainable.
Data Persistence
Room is the recommended data persistence layer which provides an abstraction layer over SQLite, allowing for increased usability and safety over the platform.
In Room 2.4, support for Kotlin Symbol Processing (KSP) moved to stable. KSP showed a 2x speed improvement over KAPT in our benchmarks of Kotlin code. Room 2.4 also adds built-in support for enums and RxJava3 and fully supports Kotlin 1.6.
Room 2.5 includes the beginning of a full Kotlin rewrite. This change sets the foundation for future Kotlin-related improvements while still being binary compatible with the previous version written in the Java programming language. There is also built-in support for Paging 3.0 via the room-paging artifact which allows Room queries to return PagingSource objects. Additionally, developers can now perform JOIN queries without the need to define additional data structures since Room now supports relational query methods using multimap (nested map and array) return types.
@Query("SELECT * FROM Artist
JOIN Song ON Artist.artistName =
Song.songArtistName")
fun getArtistToSongs(): Map<Artist, List<Song>>
Relational query methods using multimap return types
Database migrations are now simplified with updates to AutoMigrations, with added support for additional annotations and properties. A new AutoMigration property on the @Database annotation can be used to declare which versions to auto migrate to and from. And when Room needs additional information regarding table and column modifications, the @AutoMigration annotation can be used to specify the inputs.
@Database(
    version = MyDb.LATEST_VERSION,
    autoMigrations = {
        @AutoMigration(from = 1, to = 2, spec = MyDb.MyMigration.class),
        @AutoMigration(from = 2, to = 3)
    }
)
public abstract class MyDb extends RoomDatabase {
    ...
}
DataStore
The DataStore library is a robust data storage solution that addresses issues with SharedPreferences. To better understand how to use this powerful replacement for many SharedPreferences use cases, you can check out a series of videos and articles in Modern Android Development Skills: DataStore which includes guidance on testing your app’s usage of the library, using it with dependency injection, and migrating from SharedPreference to Proto DataStore.
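A minimal Preferences DataStore sketch; the store name and key are illustrative:

import android.content.Context
import androidx.datastore.preferences.core.booleanPreferencesKey
import androidx.datastore.preferences.core.edit
import androidx.datastore.preferences.preferencesDataStore
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

val Context.dataStore by preferencesDataStore(name = "settings")
val DARK_MODE = booleanPreferencesKey("dark_mode")

// Observe the value as a Flow; emits again whenever it changes.
fun darkModeFlow(context: Context): Flow<Boolean> =
    context.dataStore.data.map { prefs -> prefs[DARK_MODE] ?: false }

// Update transactionally from a coroutine.
suspend fun setDarkMode(context: Context, enabled: Boolean) {
    context.dataStore.edit { prefs -> prefs[DARK_MODE] = enabled }
}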
Incremental Data Fetching
The Paging library allows you to load and display small chunks of data to improve network and system resource consumption. App data can be loaded gradually and gracefully within RecyclerViews or Compose lazy lists.
Paging 3.1 provides stable support for Rx and Guava integrations, which provide Java alternatives to Paging’s native use of Kotlin coroutines. This version also has improved handling of invalidation race conditions with a new return type, LoadResult.Invalid, to represent invalid or stale data. There is also improved handling of no-op loads and operations on empty pages with the new onPagesPresented and addOnPagesUpdatedListener APIs.
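As a sketch of the new return type, a PagingSource can surface stale data to the Paging machinery like this; Song and SongDao are hypothetical stand-ins:

import androidx.paging.PagingSource
import androidx.paging.PagingState

data class Song(val id: Int, val title: String)

interface SongDao {
    suspend fun songs(limit: Int, offset: Int): List<Song>
}

class SongPagingSource(private val dao: SongDao) : PagingSource<Int, Song>() {
    override suspend fun load(params: LoadParams<Int>): LoadResult<Int, Song> {
        val page = params.key ?: 0
        return try {
            val songs = dao.songs(limit = params.loadSize, offset = page * params.loadSize)
            LoadResult.Page(
                data = songs,
                prevKey = if (page == 0) null else page - 1,
                nextKey = if (songs.isEmpty()) null else page + 1)
        } catch (e: IllegalStateException) {
            // The backing data changed underneath us: report this source as
            // invalid so Paging creates a fresh one instead of showing stale data.
            LoadResult.Invalid()
        }
    }

    override fun getRefreshKey(state: PagingState<Int, Song>): Int? =
        state.anchorPosition?.let { anchor ->
            state.closestPageToPosition(anchor)?.prevKey?.plus(1)
        }
}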
Navigation
The Navigation library is a framework for moving between destinations in an app.
The Navigation component is now integrated into Jetpack Compose via the new navigation-compose artifact which allows for composable functions to be used as destinations in your app.
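A minimal sketch of composable destinations; the two screens are hypothetical stubs:

import androidx.compose.runtime.Composable
import androidx.navigation.compose.NavHost
import androidx.navigation.compose.composable
import androidx.navigation.compose.rememberNavController

@Composable
fun HomeScreen(onOpenDetails: () -> Unit) { /* UI elided */ }

@Composable
fun DetailsScreen() { /* UI elided */ }

@Composable
fun AppNavHost() {
    val navController = rememberNavController()
    NavHost(navController = navController, startDestination = "home") {
        composable("home") {
            HomeScreen(onOpenDetails = { navController.navigate("details") })
        }
        composable("details") { DetailsScreen() }
    }
}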
The Multiple Back Stacks feature has improved to make it easier to remember state. NavigationUI now automatically saves and restores the state of popped destinations, meaning developers can support multiple back stacks without any code changes.
Large screen support was enhanced with the navigation-fragment artifact providing a prebuilt implementation of a two-pane layout in AbstractListDetailFragment. This fragment uses a SlidingPaneLayout to manage a list pane – managed by your subclass – and a detail pane, which uses a NavHostFragment.
All Navigation artifacts have been rewritten in Kotlin and feature improved nullability of classes using generics – such as NavType subclasses.
Opinionated Architecture Guidance
To learn more about how our key architecture libraries work together, you can view a collection of videos and articles covering best practices for modern Android development in a series called Modern Android Development Skills: Architecture.
2. Performance Optimization of Applications
Using performance libraries allows you to build performant apps and identify optimizations to maintain high performance, resulting in better end-user experiences.
Improving Start-up Times
App speed can have a big impact on a user’s experience, particularly when using apps right after installation. To improve that first time experience, we created Baseline Profiles. Baseline Profiles allow apps and libraries to provide the Android run-time with metadata about code path usage, which it uses to prioritize ahead-of-time compilation. This profile data is aggregated across libraries and lands in an app’s APK as a baseline.prof file, which is then used at install time to partially pre-compile the app and its statically-linked library code. This can make your apps load faster and reduce dropped frames the first time a user interacts with an app.
We’ve already started leveraging Baseline Profiles at Google. The Play Store app saw a decrease in initial page rendering time on its search results page of 40% after adopting Baseline Profiles. Baseline profiles have also been added to popular libraries, such as Fragments and Compose, to help provide a better end-user experience. To create your own baseline profile, you need to use the Macrobenchmark library.
Instrumenting Your Application
The Macrobenchmark library helps developers better understand app performance by extending Jetpack’s benchmarking coverage to more complex use-cases, including app startup and integrated UI operations such as scrolling a RecyclerView or running animations. Macrobenchmark can also be used to generate Baseline Profiles.
Macrobenchmark has been updated to increase testing speed and has several new experimental features. It also now supports custom trace-based timing measurements using TraceSectionMetric, which allows developers to benchmark specific sections of code. Additionally, the AudioUnderrunMetric now enables detection of audio buffer underruns to help understand audible jank.
BaselineProfileRule generates profiles to help with runtime optimizations. BaselineProfileRule works similarly to other macro benchmarks, where you represent user actions as code within lambdas. In the example below, the critical user journey that the compiler should optimize ahead of time is a cold start: opening the app’s landing activity from the launcher.
@ExperimentalBaselineProfilesApi
@RunWith(AndroidJUnit4::class)
class BaselineProfileGenerator {
    @get:Rule
    val baselineProfileRule = BaselineProfileRule()

    @Test
    fun startup() = baselineProfileRule.collectBaselineProfile(
        packageName = "com.example.app"
    ) {
        pressHome()
        // This block defines the app's critical user journey. Here we are
        // interested in optimizing for app startup, but you can also navigate
        // and scroll through your most important UI.
        startActivityAndWait()
    }
}
For more details and a full guide on generating and using baseline profiles with Macrobenchmark, check our guidance on the Android Developers site.
Avoiding UI Stuttering / Jank
The new JankStats library helps you track and analyze performance problems in your app’s UI, including reports on dropped rendering frames – commonly referred to as “jank.” JankStats builds on top of existing Android platform APIs, such as FrameMetrics, but can be used back to API level 16.
The library also offers additional capabilities beyond those built into the platform: heuristics that help pinpoint causes of dropped frames, UI state that provides additional context in reports, and reporting callbacks that can be used to upload data for analysis.
Here’s a closer look at the three major aspects of JankStats:
Identifying Jank: This library uses internal heuristics to determine when jank has occurred, and uses that information to know when to issue jank reports so that developers have information on those problems to help analyze and fix the issues.
Providing UI Context: To make the jank reports more useful and actionable, the library provides a mechanism to help track the current state of the UI and user. This information is provided whenever reports are logged, so that developers can understand not only when problems occurred, but also what the user was doing at the time. This helps to identify problem areas in the application that can then be addressed. Some of this state is provided automatically by various Jetpack libraries, but developers are encouraged to provide their own app-specific state as well.
Reporting Results: On every frame, the JankStats client is notified via a listener with information about that frame, including how long the frame took to complete, whether it was considered jank, and what the UI context was during that frame. Clients are encouraged to aggregate and upload the data as they see fit for analysis that can help debug overall performance problems.
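Putting those pieces together in an Activity might look like this sketch; the factory signature shifted across early JankStats releases, so treat the exact API shape as an assumption:

import android.app.Activity
import android.os.Bundle
import android.util.Log
import androidx.metrics.performance.JankStats

class MainActivity : Activity() {
    private lateinit var jankStats: JankStats

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // The listener is invoked for every frame with timing and jank info.
        jankStats = JankStats.createAndTrack(window) { frameData ->
            if (frameData.isJank) Log.v("JankStats", frameData.toString())
        }
    }

    override fun onPause() {
        super.onPause()
        jankStats.isTrackingEnabled = false // pause tracking while not visible
    }

    override fun onResume() {
        super.onResume()
        jankStats.isTrackingEnabled = true
    }
}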
Adding Logging to your App
The Tracing library enables profiling of app performance by writing trace events to the system buffer. Tracing 1.1 supports profiling in non-debug builds back to API level 14, similar to the <profileable> manifest tag which was added in API level 29.
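With the tracing-ktx artifact, wrapping a section of work is a one-liner; the label is arbitrary:

import androidx.tracing.trace

fun loadLibraryData() {
    trace("LibraryData.load") {
        // ... work to measure; this section shows up in system traces
        // under the "LibraryData.load" label.
    }
}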
3. User Interface Libraries and Guidance
Several changes have been made to our UI libraries to provide better support for large-screen compatibility, foldables, and emojis.
Jetpack Compose
Jetpack Compose, Android’s modern toolkit for building native UI, has reached 1.2 beta today which has added several features to support more advanced use cases, including support for downloadable fonts, lazy layouts, and nested scrolling interoperability. Check out the What’s New in Jetpack Compose blog post to learn more.
Understanding Window State
The new WindowManager library helps developers adapt their apps to support multi-window environments and new device form factors by providing a common API surface with support back to API level 14.
The initial release targets foldable device use cases, including querying physical properties that affect how content should be displayed.
Jetpack’s SlidingPaneLayout component has been updated to use WindowManager’s smart layout APIs to avoid placing content in occluded areas, such as across a physical hinge.
Drag and Drop
The new DragAndDrop library also helps with new form factors and windowing modes by enabling developers to accept drag-and-drop data – both from inside and outside their app. DragAndDrop includes a consistent drop target affordance, and it supports back to API level 24.
Backporting New APIs to Older API Levels
The AppCompat library allows access to new APIs on older API versions of the platform, including backports of UI features such as dark mode.
AppCompat 1.4 integrates the Emoji2 library to bring default support for new emoji to all text-based views supported by AppCompat on API level 14 and above.
Custom locale selection is now supported back to API level 14. This feature enables manual persistence of locale settings across app starts, and supports automatic persistence via a service metadata flag. This tells the library to load the locales synchronously and recreate any running Activity as needed. On API level 33 and above, persistence is managed by the platform with no additional overhead.
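A sketch of the backported per-app locale call, available in recent AppCompat releases:

import androidx.appcompat.app.AppCompatDelegate
import androidx.core.os.LocaleListCompat

// Sets and persists the app's locale; running Activities are recreated as
// needed so the new locale takes effect.
fun setAppLocaleToSpanish() {
    AppCompatDelegate.setApplicationLocales(
        LocaleListCompat.forLanguageTags("es-MX"))
}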
Other key updates
Annotation
The Annotation library exposes metadata that helps tools and other developers understand your app's code. It provides familiar annotations like @NonNull that pair with lint checks to improve the correctness and usability of your code.
Annotation is migrating to Kotlin, so now developers using Kotlin will see more appropriate annotation targets, including @file.
Several highly-requested annotations have been added with corresponding lint checks. This includes annotations concerning method or function overrides, and the @DeprecatedSinceApi annotation which provides a corollary to @RequiresApi and discourages use beyond a certain API level.
GitHub
We now have over 100 projects on GitHub! Several modules are open for developer contributions using the standard GitHub-based workflow:
Activity
AppCompat
Biometric
Collection
Compose Compiler
Compose Runtime
Core
DataStore
Fragment
Lifecycle
Navigation
Paging
Room
WorkManager
Check the landing page for more information on how we handle pull requests, and to get started building with Jetpack libraries.
This was a brief tour of all the changes in Jetpack over the past few months. For more details on each Jetpack library, check out the AndroidX release notes, quickly find relevant libraries with the API picker and watch the Google I/O talks for additional highlights.
Java is a trademark or registered trademark of Oracle and/or its affiliates.