Chrome OS is the fast, simple, and secure operating system that powers Chromebooks, including the Google Pixelbook and millions of devices used by consumers and students every day. The latest Flutter release adds support for building beautiful, tailored Chrome OS applications, including rich support for keyboard and mouse, and tooling to ensure that your app runs well on a Chromebook. Furthermore, Chrome OS is a great developer workstation for building general-purpose Flutter apps, thanks to its support for developing and running Flutter apps locally on the same device.
Flutter is a great way to build Chrome OS apps
Since its inception, Flutter has shared many of the same principles as Chrome OS: productive, fast, and beautiful experiences. Flutter allows developers to build beautiful, fast UIs, while also providing a high degree of developer productivity, and a completely open-source engine, framework and tools. In short, it’s the ideal modern toolkit for building multi-platform apps, including apps for Chrome OS.
Flutter initially focused on providing a UI toolkit for building apps for mobile devices, which typically feature touch input and small screens. However, we’ve been building keyboard and mouse support into Flutter since before our 1.0 release last December. And today, we’re pleased to announce that Flutter for Chrome OS is now stronger with scroll wheel support, hover management, and better keyboard event support. In addition, Flutter has always been great at allowing you to build apps that run at any size (large screen or small), with seamless resizing, as shown here in the Chrome OS Best Practices Sample:
The Chrome OS best practices sample in action
The Chrome OS Hello World sample is an app built with Flutter that is optimized for Chrome OS. This includes a responsive UI to showcase how to reposition items and have layouts that respond well to changes in size from mobile to desktop.
Because Chrome OS runs Android apps, targeting Android is the way to build Chrome OS apps. However, while building Chrome OS apps on Android has always been possible, as described in these guidelines, it’s often difficult to know whether your Android app will run well on Chrome OS. To help with that problem, today we are adding a new set of lint rules to the Flutter tooling to catch violations of the most important Chrome OS best practice guidelines:
The Flutter Chrome OS lint rules in action
With these Chrome OS lint rules in place, you’ll quickly see any problems in your Android app that would hamper it when running on Chrome OS. To learn how to take advantage of these rules, see the linting docs for Flutter Chrome OS.
But all of that is just the beginning: the Flutter tools also allow you to develop and test your apps directly on Chrome OS.
Chrome OS is a great developer platform to build Flutter apps
No matter what platform you're targeting, Flutter has support for rich IDEs and programming tools like Android Studio and Visual Studio Code. Over the last year, Chrome OS has been building support for running the Linux version of these tools with the beta of Linux on Chrome OS (aka Crostini). And, because Chrome OS also supports Android natively, you can configure the Flutter tooling to run your Android apps directly without an emulator involved.
The Flutter development tools running on Chrome OS
All of the great productivity of Flutter is available, including Stateful Hot Reload, seamless resizing, and keyboard and mouse support. Recent improvements in Crostini, such as high-DPI support, file system integration, and easier adb, have made this experience even better! Of course, you don’t have to test against the Android container running on Chrome OS; you can also test against Android devices attached to your Chrome OS machine. In short, Chrome OS is the ideal environment in which to develop and test your Flutter apps, especially when you’re targeting Chrome OS itself.
Customers love Flutter on Chrome OS
With its unique combination of simplicity, security, and capability, Chrome OS is an increasingly popular platform for enterprise applications. These apps often work with large quantities of data, whether it’s a chart or graph for visualization, or lists and forms for data entry. Flutter’s support for high-quality graphics, large-screen layouts, and input features (like text selection, tab order, and mouse wheel) makes it an ideal way to port mobile applications for the enterprise. One purveyor of such apps is AppTree, who use Flutter and Chrome OS to solve problems for their enterprise customers.
“Creating a Chrome OS version of our app took very little effort. In 10 minutes we tweaked a few values and now our users have access to our app on a whole new class of devices. This is a huge deal for our enterprise customers who have been wanting access to our app on Desktop devices.”
--Matthew Smith, CTO, AppTree Software
By using Flutter to target Chrome OS, AppTree was able to start with their existing Flutter mobile app and easily adapt it to take advantage of the capabilities of Chrome OS.
Posted by Tomer Amarilio, Product Manager, Google Assistant
Building Google Assistant Bluetooth devices gets easier for device makers
Headphones were one of the first devices optimized for the Google Assistant. With just your voice, you can ask the Assistant to make calls to friends or skip to the next song when you’re commuting on the subway to work or biking around on the weekend without having to always glance at your phone.
But as wireless Bluetooth devices like headphones and earbuds become more popular, we need to make it easier to have the same great Assistant experience across many headsets. We collaborated with Qualcomm to design a comprehensive, customizable development kit to provide all device makers with the building blocks needed to create a smart headset with the Google Assistant. The new Qualcomm Smart Headset Development Kit for the Google Assistant is powered by Qualcomm’s QCC5100-series Bluetooth audio chip and supports Google Fast Pair to make pairing Bluetooth accessories a hassle-free process.
To inspire device makers, we also built a Qualcomm Smart Headset Reference Design which delivers high quality audio, noise cancellation capabilities, and supports extended battery life and playback time. The reference design includes a push button to activate the Assistant and is just an example of what manufacturers can engineer.
Last year, we launched Android Jetpack, a collection of software components designed to accelerate Android development and make writing high-quality apps easier. Jetpack was built with you in mind: to take the hardest, most common developer problems on Android and make your lives easier.
Jetpack has seen incredible adoption and momentum. Today, 80% of the top 1,000 apps in the Play store are using Jetpack. We’ve also heard feedback from so many of you across our early access developer programs and user studies, as well as Reddit, Stack Overflow, and Slack, that has helped shape these APIs. Very humbly, thank you.
What’s New in Jetpack
Today, we are excited to share with you 11 Jetpack libraries that can be used in development now and an early-development, open-source project called Jetpack Compose to simplify UI development.
Now in Alpha
We've heard from many of you that developing camera apps or integrating camera functionality within your existing apps is hard. With the new CameraX library, we want to enable you to create great camera-driven experiences in your application without worrying about the underlying device behavior. The API is backward compatible to Android 5.0 (API level 21), ensuring that the same code works on most devices in the market. While it leverages the capabilities of camera2, it uses a simpler, use-case-based approach that is lifecycle-aware, eliminating a significant amount of boilerplate code compared to camera2. Finally, it enables you to access the same functionality as the native camera app on supported devices through optional Extensions, which enable features like Portrait, Night, HDR, and Beauty.
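To make that concrete, here is a minimal Kotlin sketch of binding a preview use case, based on the initial alpha API (the builder and listener names come from that alpha and may change in later releases):

```kotlin
import android.util.Rational
import android.util.Size
import android.view.TextureView
import androidx.camera.core.CameraX
import androidx.camera.core.Preview
import androidx.camera.core.PreviewConfig
import androidx.lifecycle.LifecycleOwner

// Binds a preview use case to the given lifecycle; CameraX starts and
// stops the camera automatically as the lifecycle changes state.
fun startCameraPreview(owner: LifecycleOwner, viewFinder: TextureView) {
    val config = PreviewConfig.Builder()
        .setTargetAspectRatio(Rational(1, 1))
        .setTargetResolution(Size(640, 640))
        .build()
    val preview = Preview(config)
    preview.setOnPreviewOutputUpdateListener { output ->
        // Route camera frames into the app's own view.
        viewFinder.surfaceTexture = output.surfaceTexture
    }
    CameraX.bindToLifecycle(owner, preview)
}
```

Because the use case is lifecycle-aware, there is no matching teardown call: unbinding happens automatically when the bound lifecycle is destroyed.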
LiveData and Lifecycles w/ coroutines
We heard you loud and clear and agree that LiveData must support your common one-shot asynchronous operations. With Lifecycle & LiveData KTX, you can do so with Kotlin coroutines that are lifecycle-aware. Kotlin coroutines have been well received by the developer community for how they simplify the way concurrency is handled within Android apps. We want to simplify it even further and enable you to use them safely by offering coroutine scopes tied to lifecycles, coroutine dispatchers that are lifecycle-aware, and support for simple asynchronous chains with the new liveData builder.
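As a rough sketch, a one-shot load exposed through the new liveData builder might look like this (the User type and loadUser function are hypothetical stand-ins for your own data layer):

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.liveData
import kotlinx.coroutines.delay

data class User(val name: String)

// Hypothetical one-shot load, standing in for a Room query or network call.
suspend fun loadUser(): User {
    delay(1_000)
    return User("Ada")
}

// The coroutine starts when this LiveData becomes active and is cancelled
// automatically if the observer's lifecycle is destroyed before it finishes.
val user: LiveData<User> = liveData {
    emit(loadUser())
}
```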
The Benchmark library gives you a quick way to benchmark your app code, whether it is written in Kotlin, the Java programming language, or native code. We use this library to continuously benchmark the Jetpack libraries we release to ensure we do not introduce any latency into your code. You can now do the same right within your development environment in Android Studio, easily measuring database queries, view inflation, or a RecyclerView scroll. The library takes care of what is needed to provide reliable and consistent results, like handling warm-up periods, removing outliers, and locking CPU clocks.
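A minimal benchmark, assuming the library's early alpha package names, looks something like this:

```kotlin
import androidx.benchmark.BenchmarkRule
import androidx.benchmark.measureRepeated
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class SortBenchmark {
    @get:Rule
    val benchmarkRule = BenchmarkRule()

    // Only the code inside measureRepeated is timed; the rule handles
    // warm-up, outlier removal, and result reporting.
    @Test
    fun sortLargeList() {
        val input = (0 until 10_000).shuffled()
        benchmarkRule.measureRepeated {
            input.sorted()
        }
    }
}
```

Benchmarks run as instrumentation tests on a device, which is why the sketch uses the AndroidJUnit4 runner.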
To maximize the security of an application’s data at rest, the new Security library implements security best practices for you. It provides strong security that balances encryption with performance for consumer apps like banking and chat. It also provides a maximum level of security for apps that require a hardware-backed keystore with user presence, and simplifies many operations, including key generation and validation.
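As a sketch of the alpha API, creating encrypted SharedPreferences takes a master key alias plus key and value encryption schemes (the file name here is illustrative):

```kotlin
import android.content.Context
import android.content.SharedPreferences
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKeys

// Returns SharedPreferences whose keys and values are both encrypted,
// using a master key held in the Android Keystore.
fun securePrefs(context: Context): SharedPreferences =
    EncryptedSharedPreferences.create(
        "secret_prefs",  // illustrative file name
        MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC),
        context,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )

// Usage: reads and writes look exactly like plain SharedPreferences, e.g.
// securePrefs(context).edit().putString("token", token).apply()
```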
ViewModel with SavedState
ViewModel has provided an easy way to keep your UI data across configuration changes. However, it does not save your app state across process death, so many of you have been relying on SavedInstanceState alongside ViewModel. With the ViewModel with SavedState module, you can eliminate boilerplate code and gain the benefits of both ViewModel and SavedState, with simple APIs to save and retrieve data right from your ViewModel.
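A minimal sketch of the pattern; with this module, the ViewModel is constructed through a SavedState-aware factory that injects the SavedStateHandle (the ViewModel and key names below are hypothetical):

```kotlin
import androidx.lifecycle.SavedStateHandle
import androidx.lifecycle.ViewModel

// State written to the SavedStateHandle survives configuration changes,
// like ordinary ViewModel state, and also system-initiated process death.
class SearchViewModel(private val state: SavedStateHandle) : ViewModel() {

    var query: String?
        get() = state.get<String>("query")
        set(value) = state.set("query", value)
}
```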
ViewPager2, the next generation of ViewPager, is now based on RecyclerView and supports vertical scrolling and RTL (Right-to-Left) layouts. It also provides a much easier way to listen for page data changes with registerOnPageChangeCallback.
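For instance, switching a pager to vertical scrolling and listening for page changes takes just a couple of calls; this sketch assumes pager is a ViewPager2 already backed by a RecyclerView adapter:

```kotlin
import androidx.viewpager2.widget.ViewPager2

fun configurePager(pager: ViewPager2) {
    // Vertical paging is a one-line switch in ViewPager2.
    pager.orientation = ViewPager2.ORIENTATION_VERTICAL

    // Replaces the old addOnPageChangeListener pattern.
    pager.registerOnPageChangeCallback(object : ViewPager2.OnPageChangeCallback() {
        override fun onPageSelected(position: Int) {
            // React to the newly selected page here.
        }
    })
}
```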
Now in Beta
ConstraintLayout 2.0 brings new optimizations and new ways of customizing layouts with the addition of helper classes. As part of ConstraintLayout 2.0, MotionLayout provides an easy way to manage motion and widget animation in your applications. You can easily describe transitions between layouts and animate properties. MotionLayout is fully declarative in XML, allowing you to describe even complex transitions without writing any code.
Users are accustomed to biometric credentials on their phones, but if your app requires a biometric login, it is important to make sure that users are provided a consistent and safe way to enter their credentials. The Biometrics library provides a simple system prompt giving the user a trustworthy experience.
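A hedged sketch of showing that system prompt with the AndroidX API (the titles and the callback body are placeholders):

```kotlin
import androidx.biometric.BiometricPrompt
import androidx.fragment.app.FragmentActivity
import java.util.concurrent.Executor

// Shows the system-provided biometric dialog; results are delivered
// to the callback on the supplied executor.
fun promptForBiometrics(activity: FragmentActivity, executor: Executor) {
    val prompt = BiometricPrompt(activity, executor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                // Proceed with the signed-in experience.
            }
        })

    val info = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Sign in")
        .setSubtitle("Confirm your identity to continue")
        .setNegativeButtonText("Cancel")
        .build()

    prompt.authenticate(info)
}
```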
With the Jetpack Enterprise library, your managed enterprise apps can send feedback back to Enterprise Mobility Management providers in the form of keyed app states, while taking advantage of backwards compatibility with managed configurations.
Android for Cars
With the Android for Cars libraries, you can provide your users a driver-optimized version of your app that will be automatically installed onto the vehicle’s infotainment system in vehicles equipped with the Android Automotive OS. It also allows your apps to work with the Android Auto app, so users can get the driver-optimized version anytime on their own device.
Today, we open-sourced an early preview of Jetpack Compose, a new unbundled toolkit designed to simplify UI development by combining a reactive programming model with the conciseness and ease-of-use of Kotlin. We have always done our best work when we did it with you, our developer community. That’s why we decided to develop Jetpack Compose in the open, starting today.
In that vein, we took a step back and chatted with many of you. We heard strong feedback from developers that they like the modern, reactive APIs that Flutter, React Native, Litho, and Vue.js represent. We also heard that developers love Kotlin, with over 53% of professional Android developers using it and with 20% higher language satisfaction ratings than the Java programming language. Kotlin has become the fastest-growing language in terms of number of contributors on GitHub.
So, we decided to invest in the reactive approach to declarative programming and create an easier way to build UIs with Kotlin.
We are building Compose with a few core principles:
Build with the benefits that Kotlin brings: concise, safe, and fully interoperable with the Java programming language. Designed to drastically reduce the amount of boilerplate code you have to write, so you can focus on your app code and help avoid entire classes of errors.
Fully declarative for defining UI components, including drawing and creating custom layouts. Simply describe your UI as a set of composable functions, and the framework handles UI optimizations and updates to the view hierarchy under the hood.
Provide reusable building blocks that let you build custom widgets easier, and without starting from scratch.
Compatible with existing views so you can mix and match and adopt at your own pace with direct access to all of the Android and Jetpack APIs.
Material Design out of the box and animations from the start, so it’s easy to create beautiful apps that are full of motion.
Accelerate development with tools like live preview and apply changes.
A Compose application is made up of composable functions that transform application data into a UI hierarchy. A function is all you need to create a new UI component. To create a composable function, just add the @Composable annotation to the function. Under the hood, Compose uses a custom Kotlin compiler plug-in, so when the underlying data changes, the composable functions can be re-invoked to generate an updated UI hierarchy. The simple example below prints a string to the screen.
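Here is a minimal sketch of such a function; the package names below are from the early preview and have shifted between preview releases:

```kotlin
// Early-preview package names; these have changed in later releases.
import androidx.compose.Composable
import androidx.ui.core.Text

// A composable function: called to emit UI, and re-invoked automatically
// by the framework whenever the data it reads changes.
@Composable
fun Greeting(name: String) {
    Text("Hello $name")
}
```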
We know that adopting any new framework is a big change for existing projects and codebases, which is why we’ve designed Compose like all of Jetpack -- with individual components that you can adopt at your own pace and are compatible with existing views.
We'd love to hear from you as we iterate on this exciting future together. Send us feedback by posting comments below, and please file any bugs you run into on AOSP or directly through the feedback buttons in the Android Studio Jetpack Compose build in AOSP. Since this is an early preview, we do not recommend trying this on any production projects.
Android Studio 3.5 Beta is ready to download today. Last year, at Google I/O, we heard from many of you that you wanted us to focus even more on quality and stability over features. Consequently, we kicked off Project Marble, focused on making the fundamental features and flows of the Integrated Development Environment (IDE) rock-solid. Android Studio 3.5 is the culmination of this effort. The results of Project Marble are focused on three core areas: system health, feature polish, and bugs. We are seeking your final round of feedback to make sure we didn't miss a key area that matters to you, so download Android Studio 3.5 on the beta channel today to let us know what you think.
Many times it can be difficult to see the range of changes that go into a quality release. Therefore, this post and our Google I/O talk on What’s New in Android Development Tools walk through a variety of changes in each of the major focus areas of Project Marble within Android Studio 3.5. We are certainly not done improving quality with Android Studio, but with the work and new infrastructure put into Project Marble for long-term quality tracking, we hope you will be even more productive in developing Android apps.
What's New in Android Development Tools (Google I/O'19)
System Health - Memory
One of the major points of feedback on Android Studio is that the IDE gets slower over time. Often the cause is unexpected memory pressure or IDE memory leaks. We dug into this area, and as part of Project Marble we have addressed over 33 impactful memory leaks. To identify leaks, we now measure out-of-memory exceptions on an ongoing basis via an internal dashboard, using data from those who opt in to share it with us; this enables us to focus on and fix the most impactful issues. Starting with Android Studio 3.5, when the IDE runs out of memory we capture some high-level statistics about the size of the memory heap and the dominant objects in it. With this data the IDE can do two things: suggest better memory settings and offer to perform a deeper memory analysis.
Auto-recommend Memory Settings - By default, Android Studio has a maximum memory heap size of 1.2 GB. For those of you with large projects, this amount may not be enough. Even if you have a machine with a large amount of RAM, the IDE will not exceed this value. With Android Studio 3.5, the IDE will recognize when a project needs more memory on a machine with higher RAM capacity and will suggest increasing the heap size in a notification. Alternatively, you can make adjustments yourself in the new settings panel under Appearance & Behavior → Memory Settings.
Easier to report memory problems with Memory Heap Analysis - It can sometimes be hard to capture and reproduce memory problems to report them to the Android Studio team. To solve this, Android Studio 3.5 allows you to trigger a memory heap dump (Help → Analyze Memory Use) that the IDE locally sanitizes for personal data, analyzes, and creates a report. You can opt to share this memory usage report with the Android Studio team to troubleshoot performance problems.
Memory Usage Report
System Health - Exceptions
We have revamped our exception-processing backend pipeline. With the opt-in data, we now have earlier signals of common exceptions in aggregate, which lets us prioritize and fix issues earlier in the canary release process than before. Moreover, we reduced the number of times we prompt you about exceptions, since the analytics and opt-in crash reports are now more actionable for our team. The net result is that you should see the blinking red exception-report icon in the lower status bar of the IDE less frequently.
Android Studio Exception Bubble
System Health - User Interface Freezes
User interface (UI) freezes are another common issue we heard about from you. In Android Studio 3.5, we extended the infrastructure of the underlying IntelliJ platform and now measure UI thread stalls that last longer than a few moments. Over time, this will give us a bigger picture of the top hotspots to focus our efforts on. For example, during Project Marble development, we found in our data that XML code editing was notably slow in the IDE. With this data point, we optimized XML typing, and Android Studio 3.5 has measurably better performance. You can see below that editing data binding expressions in XML is faster due to typing latency improvements.
Code Editing Before - Android Studio 3.4 (left) and Code Editing After - Android Studio 3.5 (right)
System Health - Build Speed
We continued our investment in build speed; for developers with larger projects, it is the number one concern. As we discussed in our recent Medium blog post on build speed, many elements can affect build performance, sometimes degrading it faster than we can improve it. During Project Marble, we made speed improvements by adding incremental build support to the top annotation processors, including Glide, AndroidX data binding, Dagger, Realm, and Kotlin (KAPT). Incremental support can make a notable impact on build speed. For example, in our preliminary analysis, adding incremental support just for Kotlin took submodule non-ABI code changes for the Google I/O schedule app from 9.1 seconds to 3.6 seconds – a 60% improvement. Read more about the performance changes to the build system here.
System Health - IDE Speed
In the past, a pro tip some developers followed was to turn off Android Studio plugins, such as Android NDK support, to improve performance. While there is nothing wrong with disabling plugins to remove extra menus or options that you don't need, we removed some unnecessary performance hotspots in the Android NDK support that impacted overall IDE speed.
System Health - Lint Code Analysis
Android Lint is a code analysis framework in Android Studio that helps identify common programming mistakes. However, we learned from several user reports that Lint could be too slow—especially when running in batch analysis mode on large projects. After some digging, we found and fixed several large memory leaks, leading to a roughly 2x speedup in Lint performance. We also published a profiling tool that can help identify bottlenecks in individual Lint checks. Read more about the analysis and tool here.
System Health - I/O File Access for Windows
Many users of Android Studio are on Microsoft Windows. Over time, we received a range of reports from users on this platform that build times and installation speeds were getting slower and slower. After investigating the problem during Project Marble, we realized that recent anti-virus programs were including the Android Studio build and installation directories as active scan targets. Since these folders have many small files created and removed over time, the scanning taxes the I/O and CPU and consequently degrades the overall build and sync performance of Android Studio.
Google Internal Data, 2.2GHz quad-core Intel Core i7, April 2019
System Health Check - Starting with Android Studio 3.5, the IDE checks various directories that could be affected by this slowdown, including the project build directory, and compares them against the list of directories excluded from anti-virus scanning. If Android Studio finds an inconsistency, you will see a pop-up notification with a link to guide you through the optimal setup. Learn more here.
System Health Notification - Anti-virus Check
System Health - Emulator CPU Usage
Many app developers enjoy the fast and responsive emulator, which has seen dramatic performance improvements over the last few years. However, we heard from you that the Android Emulator seemed to take an inordinate amount of CPU cycles and trigger the cooling fans on laptops, even when the emulator was idle in the background. After investigation and measurement, we found that Google Play Services and related services were aggressively running in the background because, by default, the emulator was set to AC charging instead of battery discharging. We switched the default to battery discharging, and background CPU usage declined by more than 3x. This change is just one of the many optimizations we made to the Android Emulator during Project Marble. Learn more about the Android Emulator and Project Marble here.
Google Internal Data on Apple MacBook Pro (15” 2016), Emulator: Pixel 3 API 28
Feature Polish - Apply Changes
Being able to quickly edit code and see the changes without restarting your app is great for app development. Two years ago, the Instant Run feature was our attempt to enable this flow, but it ultimately fell short of expectations. During Project Marble, we re-architected and implemented from the ground up a more practical approach in Android Studio 3.5 called Apply Changes. Apply Changes uses platform-specific APIs from Android Oreo and higher to ensure reliable and consistent behavior; unlike Instant Run, Apply Changes does not modify your APK. To support the change, we re-architected the entire deployment pipeline to improve deployment speed, and also tweaked the run and deployment toolbar buttons for a more streamlined experience. Learn more about the architecture behind Apply Changes here.
Apply Changes Buttons
Feature Polish - Gradle Sync
A recent and annoying pain point in Android Studio is having your project unexpectedly display red symbols across your app code, especially when re-opening the project. The Gradle build system keeps a cache of all your dependencies in your home directory so that the IDE can quickly sync without re-downloading artifacts. The root cause of many recent incidents of red symbols is that a recent Gradle change periodically deletes these caches to save hard drive space; the IDE was unaware of the discrepancy and consequently generated red symbols for the missing dependencies. Starting with Android Studio 3.5, the IDE has the conditional logic to check for this state. We certainly have more we can do in this area, but this is just one example of the types of issues we addressed for project sync during Project Marble.
Feature Polish - Project Upgrades
Ideally, the Android Studio team would like you to be on the latest version of the IDE, since that is where the team does active feature development, bug fixing, and performance improvement. We know that upgrading Android Studio is not as seamless a process as it should be, with many issues revolving around Gradle plugin errors. With Android Studio 3.5, we have updated the user experience of output windows, pop-ups, and dialog boxes to help clarify when you actually need to upgrade, and we made more sync and build upgrade errors actionable.
From a recent developer survey, we heard that many developers upgrade the Android Studio IDE and the Gradle plugin at the same time. As of the last several releases, the IDE and your Gradle plugin can actually be updated independently. This means that if you want the latest build system speed and correctness improvements, you can upgrade your Gradle plugin, but you can also wait until you're ready. Whether or not you upgrade your Gradle plugin at the same time as the IDE, we encourage you to be on the latest release of Android Studio 3.5 to start using all the enhancements from Project Marble.
Feature Polish - Layout Editor
Based on user research on the layout editor and input from you, we know that several performance and usability issues made hand-editing XML the only reliable path forward, especially when working with ConstraintLayout. To address the general usability of the layout editor, we refined a wide range of interactions, from constraint selection and deletion to better device preview resizing. While XML code editing is still a click away, we hope these interaction refinements give you a big productivity boost when creating and editing layouts in Android Studio. Learn more about the full range of layout editor changes here.
Layout Editor Before - Android Studio 3.4 (left) and Layout Editor After - Android Studio 3.5 (right)
Feature Polish - Data Binding
During Project Marble, we also took a look at long standing issues with data binding. From a performance perspective, we found that creating data binding expressions in XML files would lead to severe hangs in the code editor. After fixing this issue we also improved code completion, navigation, and refactoring.
Feature Polish - App Deployment Flow
We streamlined the deployment flow during Project Marble, by adding a new dropdown to easily see and change the device you intend to deploy to and a new menu item to deploy to multiple devices.
App Deployment User Flow
Feature Polish - C++ Improvements
C++ project support was also a focus area during Project Marble. CMake builds are now up to 25% faster for large projects because the IDE now invokes parallel Ninja targets. Additionally, you will find an improved single-variant user interface panel that allows you to specify ABI targets separately. And lastly, Android Studio 3.5 allows you to use multiple versions of the Android NDK side by side in your build.gradle file; this should give you more reproducible builds and mitigate incompatibilities between NDK versions and the Android Gradle plugin.
Single Variant Selection by ABI
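As a sketch, pinning an exact NDK version per module in the Gradle Kotlin DSL might look like this (the version string is illustrative; in a Groovy build.gradle file the same property is written as ndkVersion "20.0.5594570"):

```kotlin
// build.gradle.kts (module)
android {
    // Pin this module to an exact NDK version. Other modules can pin
    // different versions, installed side by side under $SDK/ndk/<version>.
    ndkVersion = "20.0.5594570"  // illustrative version string
}
```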
Feature Polish - IntelliJ Platform Update
This release of Android Studio includes the features and quality enhancements of the IntelliJ 2019.1 platform release, which brings a range of improvements from custom themes to better version control system integration.
Feature Polish - Conditional Delivery for Dynamic Feature Support
Android Studio 3.5 enhances app bundle support with the addition of conditional delivery for your app bundle's dynamic feature modules. Conditional delivery allows you to set device configuration requirements that determine whether a dynamic feature module is downloaded automatically during app install. You can set conditions based on hardware features such as OpenGL version or support for augmented reality, or on API level and user country.
Module Selection for Conditional Delivery
Feature Polish - Emulator Foldables & Pixel Device Support
This release of the IDE includes Android Emulator skins for the Pixel 3a and Pixel 3a XL. Additionally, Android Studio now supports the creation of foldable Android Virtual Devices.
Android Emulator - Foldable Support
Feature Polish - Chrome OS Support
Android Studio 3.5 is now officially supported on Chrome OS 75 and higher on high-end x86-based Chromebooks. During Project Marble we fixed a few usability issues; there is now an installer for Android Studio, and app deployment to Android devices connected over USB is supported. Learn more about how to set up the IDE on Chrome OS here.
Android Studio on Chrome OS
To recap, Android Studio 3.5 has hundreds of bug fixes and notable changes in these core areas:
Memory Usage Report
User Interface Freezes
Lint Code Analysis
I/O File Access
Emulator CPU Usage
IntelliJ 2019.1 Platform Update
Conditional Delivery for Dynamic Feature Support
Emulator Foldables & Pixel Device Support
Chrome OS Support
Check out the Android Studio preview release notes page for more details, and read deep dives into several areas of Project Marble in the following Medium blog posts:
The specific areas and the approach we took to optimize Android Studio for Project Marble were all based on your feedback and metrics data. The aggregate metrics you can opt in to inside Android Studio allow us to figure out whether there are broader problems in the product for all users, and the data also allows the team to prioritize feature work appropriately. There are a couple of pathways to help us build better insights. At a baseline, you can opt in to metrics by going to Preferences/Settings → Appearance & Behavior → Data Sharing.
IDE Data Sharing
Additionally, throughout the year you might see user sentiment emojis in the bottom corner of the IDE. Those icons are a lightweight way to tell the Android Studio team how things are going and to give us in-context feedback, and they're the fastest way to log a bug and send it to the team.
IDE User Feedback
Download the beta version of Android Studio 3.5 from the download page. If you are using a previous release of Android Studio, you can simply update to the latest version of Android Studio. If you want to maintain a stable version of Android Studio, you can run the stable release version and beta release versions of Android Studio at the same time. Learn more.
To use the mentioned Android Emulator features make sure you are running at least Android Emulator v29.0.6 downloaded via the Android Studio SDK Manager.
As mentioned above, we appreciate any feedback on things you like, and issues or features you would like to see. If you find a bug or issue, feel free to file an issue. Follow us, the Android Studio development team, on Twitter and on Medium.
Posted by Anwar Ghuloum, Engineering Director and Maya Ben Ari, Product Manager, Android
With each new OS release, we are making efforts to deliver the latest OS improvements to more Android devices.
Thanks to Project Treble and our continuous collaboration with silicon manufacturers and OEM partners, we have improved the overall quality of the ecosystem and accelerated Android 9 Pie OS adoption by 2.5x compared to Android Oreo. Moreover, Android security updates continue to reach more users, with an 84% increase in devices receiving security updates in Q4, when compared to a year before.
This year, we have increased our overall beta program reach to 15 devices, in addition to Pixel, Pixel 2 and Pixel 3/3a running Android Q beta: Huawei Mate 20 Pro, LGE G8, Sony Xperia XZ3, OPPO Reno, Vivo X27, Vivo NEX S, Vivo NEX A, OnePlus 6T, Xiaomi Mi Mix 3 5G, Xiaomi Mi 9, Realme 3 Pro, Asus Zenfone 5z, Nokia 8.1, Tecno Spark 3 Pro, and Essential PH-1.
But our work hasn’t stopped there. We are continuing to invest in efforts to make Android updates available across the ecosystem.
Safer and more secure devices with Project Mainline
Project Mainline builds on our investment in Treble to simplify and expedite how we deliver updates to the Android ecosystem. Project Mainline enables us to update core OS components in a way that's similar to the way we update apps: through Google Play. With this approach we can deliver selected AOSP components faster, and for a longer period of time – without needing a full OTA update from your phone manufacturer. Mainline components are still open sourced. We are closely collaborating with our partners for code contribution and for testing, e.g., for the initial set of Mainline components our partners contributed many changes and collaborated with us to ensure they ran well on their devices.
As a result, we can accelerate the delivery of security fixes, privacy enhancements, and consistency improvements across the ecosystem.
Security: With Project Mainline, we can deliver faster security fixes for critical security bugs. For example, by modularizing media components, which accounted for nearly 40% of recently patched vulnerabilities, and by allowing us to update Conscrypt, the Java Security Provider, Project Mainline will make your device safer.
Privacy: Privacy has been a major focus for us, and we are putting a lot of effort into better protecting users’ data and increasing privacy standards. With Project Mainline, we have the ability to make improvements to our permissions systems to safeguard user data.
Consistency: Project Mainline helps us quickly address issues affecting device stability, compatibility, and developer consistency. We are standardizing time-zone data across devices. Also, we are delivering a new OpenGL driver implementation, ANGLE, designed to help decrease device-specific issues encountered by game developers.
Our initial set of components supported on devices launching on Android Q:
Security: Media Codecs, Media Framework Components, DNS Resolver, Conscrypt
Mainline components are delivered as either APK or APEX files. APEX is a new file format we developed, similar to APK but with the fundamental difference that APEX is loaded much earlier in the booting process. As a result, important security and performance improvements that previously needed to be part of full OS updates can be downloaded and installed as easily as an app update. To ensure updates are delivered safely, we also built new failsafe mechanisms and enhanced test processes. We are also closely collaborating with our partners to ensure devices are thoroughly tested.
Project Mainline enables us to keep the OS on devices fresher, improve consistency, and bring the latest AOSP code to users faster. Users will get these critical fixes and enhancements without having to take a full operating system update. We look forward to extending the program with our OEM partners through our joint work on mainline AOSP.
Posted by Allan Livingston, Product Management Director, Chrome OS App Ecosystem
When Google launched Chrome OS nine years ago, we designed every aspect around three core principles: speed, simplicity, and security. Last year at I/O, Google put those principles at developers’ fingertips by implementing Linux support on Chrome OS. This gave developers the increased flexibility of building and running Linux apps combined with the speed and security of working within Chrome OS.
In just the last year, the Chrome OS ecosystem has grown at an incredible rate. Linux support has been rolled out to over half of all Chromebooks. Plus, all devices launched this year will be Linux-ready right out of the box. The combination of Linux and Chrome OS makes for a great web development environment — and we’re making the process even easier for Android development.
At I/O this year, we showed web and Android developers a few of the most exciting improvements that have made Chrome OS an even faster, simpler, and more secure environment than ever. Let’s get into a few of the highlights:
File sharing
Today we announced that it’s much easier to share files between Linux, Android, and Chrome OS. Now you can use the file manager to move your files safely across Chrome OS, Google Drive, Android, and Linux.
Port forwarding
We’ve also made improvements to port forwarding on Chrome OS, making it easier to connect networking services between Linux and Chrome OS. That way, you can run a web server within the Linux container while debugging on the same machine.
Android Studio one-click installation and integrated debugging
Installing Android Studio on Chrome OS used to be a fairly lengthy process. Now, it takes a simple double-click. There’s no need to use a terminal to download, move, and unzip the file: just download it, click, and install.
Now in beta channel with Chrome OS 75, we also enabled secure USB support for Android phones. You can develop, debug, and push your APK to Android phones on any of the Android developer-recommended Chromebooks.
Chrome OS also automatically handles common installation pain-points, like hardware compatibility and power management set-up.
A growing opportunity for Android developers
App developers have to consider a huge range of factors to deliver amazing experiences on every screen size and form factor. In just the last few years, the app experience has evolved far beyond mobile screens. People are using apps across different devices that blur the lines between mobile and desktop — from attaching keyboards to their tablets to using their smartphones to project onto a desktop screen. And no matter what device they’re using, they expect apps to deliver a seamless experience every time.
When you’re building on and for Chrome OS, you’re on a streamlined path to reaching a massive and fast-growing audience of engaged users. In just the last year, the number of monthly active users who enabled Android apps on Chrome OS has grown by 250%.1 And in Q4 2018, 21% of notebooks sold in the U.S. were Chromebooks — a 23% YoY unit sales growth.2
Because millions of Android apps already run on Chrome OS, you can take the same APK and extend your app’s reach to even more consumers with just a few tweaks. Whether they’re building apps with larger screens in mind from the start or optimizing old apps to reach new users, developers behind some of the most popular mobile apps and games have already seen incredible results from Chromebook users.
Developer spotlight: Concepts & BandLab
As people use apps in more unpredictable and inspiring ways, devs are seeing even higher engagement after optimizing for larger screens. Watch the video below to see how Concepts created a larger, more responsive canvas for aspiring digital designers and how BandLab gave musicians a more immersive platform for exploring and composing new music.
Chrome OS: A fast and secure development environment
It’s never been easier or more secure to develop for the Web and Android on Chrome OS. Between a fast-growing user base, Progressive Web Apps, millions of Android apps, and now, Linux, the potential for developing on and for Chrome OS is only going to keep growing.
Chrome OS delivers the speed and performance app users expect, and it’s now even faster, simpler, and more secure than ever for all developers.
We can’t wait to see the amazing stuff you create with your Chromebooks!
Sources: 1. Google Internal Data, March 2018 to March 2019. 2. The NPD Group, Inc., Retail Tracking Service, U.S., Notebook Computers, Chrome OS, based on units, Oct. 8, 2017–Jan. 6, 2018 vs. Oct. 7, 2018–Jan. 5, 2019.
Today marks an important milestone for the Flutter framework, as we expand our focus from mobile to incorporate a broader set of devices and form factors. At I/O, we’re releasing our first technical preview of Flutter for web, announcing that Flutter is powering Google’s smart display platform including the Google Home Hub, and delivering our first steps towards supporting desktop-class apps with Chrome OS.
From Mobile to Multi-Platform
For a long time, the Flutter team’s mission has been to build the best framework for developing mobile apps for iOS and Android. We believe that mobile development is ripe for improvement, with developers today forced to choose between building the same app twice for two platforms, or making compromises to use cross-platform frameworks. Flutter hits the sweet spot of enabling a single codebase to deliver beautiful, fast, tailored experiences with high developer productivity for both platforms, and we’ve been excited to see how our early efforts have flourished into one of the most popular open source projects.
As we started to home in on our 1.0 release last year, we began experimenting with broadening the scope of Flutter to other platforms. This was triggered both by internal teams within Google who are increasingly relying on Flutter and by the latent potential of the Dart platform for delivering portable experiences. In particular, a small team that was already building a web framework for Dart for internal use started an exploratory project (codename “Hummingbird”) to evaluate the technical merits of porting the Flutter engine to support the standards-based web.
In parallel, the core Flutter project has been making progress to enable desktop-class apps, with input paradigms such as keyboard and mouse, window resizing, and tooling for Chrome OS app development. The exploratory work that we did for embedding Flutter into desktop-class apps running on Windows, Mac and Linux has also graduated into the core Flutter engine.
A Portable UI Framework for All Screens
It’s worth pausing for a moment to acknowledge the business potential of a high-performance, portable UI framework that can deliver beautiful, tailored experiences to such a broad variety of form factors from a single codebase.
For startups, the ability to reach users on mobile, web, or desktop through the same app lets them reach their full audience from day one, rather than being limited by technical considerations. For larger organizations especially, the ability to deliver the same experience to all users with one codebase reduces complexity and development cost, and lets them focus on improving the quality of that experience.
With support for mobile, desktop, and web apps, our mission expands: we want to build the best framework for developing beautiful experiences for any screen.
Flutter for Web
This week, we are releasing the first technical preview of Flutter for the web. While this technology is still in development, we are ready for early adopters to try it out and give us feedback. Our initial vision for Flutter on the web is not as a general purpose replacement for the document experiences that HTML is optimized for; instead we intend it as a great way to build highly interactive, graphically rich content, where the benefits of a sophisticated UI framework are keenly felt.
To showcase Flutter for the web, we worked with the New York Times to build a demo. In addition to world-class news coverage, the New York Times is famous for its crossword and other puzzle games. Since avid puzzlers want to play on whatever device they’re using at the time, their development team was attracted to Flutter as a potential solution for their needs. Discovering that they could reach the web with the same code was a huge boon. At Google I/O this week, you can get a sneak peek of their newly refreshed KENKEN puzzle game, which runs with the same code on Android, iOS, web, Mac, and Chrome OS.
Here’s what Eric von Coelln, Executive Director of Puzzles at the New York Times has to say about their experiences with Flutter:
"The New York Times Crossword has more than 400,000 stand-alone subscriptions and is a daily ritual for puzzle solvers. Along with the Crossword, we’ve grown our portfolio of digital puzzles that reaches more than two million solvers each month.
We were already beginning to explore Flutter as a potential solution to the challenge of quickly developing engaging, high-quality mobile experiences. Now the addition of being able to publish to web makes Flutter an even more appealing option to quickly deploy across all of our user platforms. This update of our old Flash-based KenKen game into a multi-platform playable experience is something we’re excited to bring to our solvers this year.”
There’s lots more to say about Flutter for web than we have space for here, so check out the dedicated article about Flutter for web on the Flutter blog.
At this early stage, we’re eager to get your feedback on how you’d like to use Flutter for web. We expect to rapidly evolve the code, with a particular focus on performance, and harmonizing the codebase with the rest of the Flutter project.
Flutter for Mobile Devices
The core Flutter framework also receives an upgrade this week, with the immediate availability of Flutter 1.5 in our stable channel. Flutter 1.5 includes hundreds of changes in response to developer feedback, including updates for new App Store iOS SDK requirements, updates to the iOS and Material widgets, engine support for new device types, and Dart 2.3 featuring new UI-as-code language features.
As the framework itself matures, we’re investing in building out the supporting ecosystem. The architectural model of Flutter has always prioritized a small core framework, supplemented by a rich package community. In the last few months, Google has contributed production-quality packages for web views, Google Maps, and Firebase ML Vision, and this week, we’re adding initial support for in-app payments. And with over 2,000 open source packages available for Flutter, there are options available for most scenarios.
One particularly exciting project that we’re announcing this week at I/O is the ML Kit Custom Image Classifier. Built using Flutter and Firebase, it offers an easy-to-use app-based workflow for creating custom image classification models. You can collect training data using the phone's camera, invite others to contribute to your datasets, trigger model training, and use trained models, all from the same app.
Flutter continues to grow in popularity and adoption. A growing roster of demanding customers including eBay, Sonos, Square, Capital One, Alibaba and Tencent are developing apps with Flutter. And they’re having fun! Here’s what Larry McKenzie, a senior developer at eBay had to say about Flutter:
“Flutter is fast! Features that once took us multiple days to implement can be finished in a single day. Many problems we used to spend a lot of time on, simply no longer occur. Our team can now focus on creating more polished user experiences and delivering functionality. Flutter is enabling us to exceed expectations!”
Another quickly growing Flutter platform is Chrome OS, with millions of Chromebooks being sold every year, particularly in education. Chrome OS is a perfect environment for Flutter, both for running Flutter apps, and as a developer platform, since it supports execution of both Android and Linux apps. With Chrome OS, you can use Visual Studio Code or Android Studio to develop a Flutter app that you can test and run locally on the same device without an emulator. You can also publish Flutter apps for Chrome OS to the Play Store, where millions of others can benefit from your creation.
Flutter for Embedded Devices
As the final example of Flutter’s portability, we offer Flutter embedded on other devices. We recently published samples that demonstrate Flutter running directly on smaller-scale devices like Raspberry Pi, and we offer an embedding API for Flutter that allows it to be used in scenarios including home, automotive and beyond.
Perhaps one of the most pervasive embedded platforms where Flutter is already running is on the smart display operating system that powers the likes of Google Home Hub.
Within Google, some Google-built features for the Smart Display platform are powered by Flutter today. And the Assistant team is excited to continue to expand the portfolio of features built with Flutter for the Smart Display in the coming months; the goal this year is to use Flutter to drive the overall system UI.
We often get asked by developers how they can get started with Flutter. We are pleased today to announce a comprehensive new training course for Flutter, built by The App Brewery, authors of the highest-rated iOS training course on Udemy. Their new course has over thirty hours of content for Flutter, including videos, demos and labs, and with Google’s sponsorship, they are announcing today a time-limited discount of this course from the retail price of $199 to just $10.
Many developers are creating inspiring apps with Flutter. In the run-up to Google I/O, we ran a contest called Flutter Create to encourage developers to see what they could build with Flutter in 5KB or less of Dart code. We had over 750 unique entries from around the world, with some amazing examples that pushed the limits of what we imagined would be possible in such a small size.
Today, we’re announcing the winners, which can be found on flutter.dev/create. Congratulations to the overall winner, Zebiao Hu, who wins a fully-loaded iMac Pro worth over $10,000!
Flutter is no longer a mobile framework, but a multi-platform framework that can help you reach your users wherever they are. We can’t wait to see what you’ll build with Flutter on the web, desktop, mobile, and beyond!
It's great to be in our backyard again for Google I/O to connect with Android’s developers around the world. The 7,200 attendees at Shoreline Amphitheatre, millions of viewers on the livestream, and thousands of developers at local I/O Extended events across 80+ countries heard about our efforts to make the lives of developers easier. Today at Google I/O, we talked about two big themes: helping our developers become more productive, and strengthening user privacy and security in the platform. Let's take a closer look at the major developer news at I/O so far:
This year, we focused on a simple idea: we want to save you time every day, by making everything you use even better.
Two years ago, we announced Kotlin was a supported language for Android. Our top developers loved it already, and since then, it’s amazing how fast it’s grown. Over 50% of professional Android developers now use Kotlin, it’s been one of the most-loved languages two years running on Stack Overflow, and one of the fastest-growing on GitHub in number of contributors.
Today we’re announcing another big step: Android development will become increasingly Kotlin-first. Many new Jetpack APIs and features will be offered first in Kotlin. If you’re starting a new project, you should write it in Kotlin; code written in Kotlin often means much less code for you: less code to type, test, and maintain. And, in partnership with JetBrains and the Kotlin Foundation, we’re continuing to invest in tooling, docs, training, and events to make Kotlin even easier to learn and use. This includes Kotlin/Everywhere, a new global series of events where you can learn more about the language, new Udacity courses, and more.
Last year, we announced Android Jetpack, Android’s set of APIs to accelerate Android development and make writing high-quality apps easier, with less code. Over 80% of our top 1,000 apps are already using Jetpack, as we continue to simplify everyday developer challenges. Today, we are releasing 6 new Jetpack libraries (in alpha) and bringing 5 libraries to beta quality. Here are 3 highlights:
CameraX - You’ve told us that working effectively across the range of unique Android devices is tough. CameraX is a new open-source Android Jetpack library that makes camera development easier and faster. It provides a consistent camera experience across devices, so you no longer have to maintain device-specific configurations. You’ll find support for leading-edge hardware and software features like optical zoom, bokeh, HDR, and night mode on participating manufacturer devices. It works with almost 90% of devices (backward compatible to Android 5.0, API level 21). There’s also an easy migration path from the legacy Camera API, and it works seamlessly with the camera2 API. 70% of camera usage on Android comes from installed apps (not the device camera app), so we’re really excited to make camera development easier.
Architecture Components - We’ve made a number of additions and enhancements based on your feedback. You’ve told us concurrency on Android was hard. So we’re bringing you LiveData and Lifecycles w/ coroutines to support common one-shot asynchronous operations. With the ViewModel with SavedState module, you can eliminate boilerplate code and gain the benefits of using both ViewModel and SavedState with simple APIs to save and retrieve data right from your ViewModel. And in case you missed it, we announced stable releases of WorkManager (background processing) and Navigation (navigation between app screens) just a couple of months ago.
Jetpack Compose - Many of you have been asking us for a modern, reactive style UI toolkit for Android, which takes advantage of Kotlin and integrates seamlessly with the platform and all of your existing code. Today, we’re sharing the team’s work on Jetpack Compose. Jetpack Compose is designed to simplify UI development by combining a reactive programming model with the conciseness and ease-of-use of Kotlin. It’s compatible with the existing UI toolkit, so you can mix and match views with direct access to all of the Android and Jetpack APIs. It’s also fully declarative for defining UI components. And, it’s designed with Material, animations, and tools in mind from the start. Starting today we’re developing this in the open, and you can find all the code on AOSP.
Today we’re releasing Android Studio 3.5 to beta. For months, the team has been exclusively focused on refining and polishing day-to-day development workflows under Project Marble. Android Studio 3.5 includes better IDE memory management for large projects, lower typing latency, lint improvements, CPU usage optimizations, layout editor improvements, emulator improvements, and build changes, as well as a complete rewrite of Instant Run, now called Apply Changes, which reliably accelerates your ability to see code changes on a device, plus over 400 high-priority bug fixes.
Machine Learning at Android scale
In Android Q, we’ve made significant improvements to Android’s Neural Networks API (NNAPI). First, we have increased the number of Operators supported from 38 to over 90. The vast majority of models can now be accelerated by NNAPI with no alterations. We’ve also introduced an introspection API for advanced users, allowing full control over which hardware components handle acceleration (e.g. DSP vs. NPU). And, we’ve worked closely with hardware vendors to deliver significant improvements in performance, both in latency and power consumption. Working with MediaTek, we were able to accelerate ML Kit’s face detection API by 9X on the Helio P90. Working with Qualcomm, we were able to accelerate Google’s Lens OCR on the Snapdragon 855’s AI Engine, increasing speed by 3X while also reducing power consumption by 3.7X.
Dynamic features and in-app updates
Last year we introduced the Android App Bundle to help you reduce app size and increase installs. Since then, we’ve seen over 80,000 app bundles in production, with average size savings of 20%. And today we have a number of announcements to help you reduce size and deliver updates to your users even faster. First, dynamic feature modules are moving from beta to stable. With dynamic feature modules, you can reduce your app size even more by choosing which parts of your app to deliver based on conditions like device features or country. You can even deliver modules on demand, instead of at install time. We’re also moving in-app updates from beta to stable. The ability to dynamically update apps is something you’ve been requesting for a long time. Let’s say you have a crucial bug in your app, and you need to push a fix out right away; you don’t want to wait until users discover an update in the Play Store. Now you can.
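As a sketch, triggering an immediate in-app update with the Play Core library looks roughly like this (the request code is an arbitrary value your activity checks in onActivityResult):

```kotlin
import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

const val UPDATE_REQUEST_CODE = 1  // arbitrary request code

// Checks Play for a newer version and, if allowed, launches the
// immediate (full-screen, blocking) update flow.
fun checkForUpdate(activity: Activity) {
    val manager = AppUpdateManagerFactory.create(activity)
    manager.appUpdateInfo.addOnSuccessListener { info ->
        if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
            info.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)
        ) {
            manager.startUpdateFlowForResult(
                info, AppUpdateType.IMMEDIATE, activity, UPDATE_REQUEST_CODE
            )
        }
    }
}
```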
User privacy and security in Android Q
As a developer community, we all care about getting this right. It’s about building a platform that offers powerful capabilities for developers, while making sure that user safety and privacy are protected. We introduced Android Q Beta a few months ago with over 50 features and improvements around user privacy and security. These Q changes provide users more transparency and control.
As always, we are working hard to do everything we can for developers adopting the new release. We know you have your own features to build. That’s why, with these Q changes, we’ve worked very hard to minimize the impact for you, as well as to incorporate your feedback. We’ve given as long a notice period as possible, as well as complete and detailed technical information up front, to make it as easy as possible to adopt. We also want to thank the community for your ongoing feedback. It’s been a huge help to the teams working hard to get this right. A great example is the Beta 3 storage changes, where your feedback helped us evolve the feature over the course of the Betas. Android has a longstanding commitment to minimizing all breaking changes. Our commitment is unchanged, and we’ll work hard to keep Android the open, flexible, and developer friendly platform we all love.
Be a part of Google I/O!
We’ve got a lot of great content in store for you over the next three days, including over 45 sessions across Android. We’re excited for you to join us in-person here at Shoreline, at an I/O Extended event, or online through the livestream. We’re constantly investing in our platform that connects developers to billions of users around the world. To the entire Android community, thank you for your continued support and feedback, and for being a part of Android.
Posted by Chris Turkstra, Director, Actions on Google
People are using the Assistant every day to get things done more easily, creating lots of opportunities for developers on this quickly growing platform. And we’ve heard from many of you who want easier ways to connect your content across the Assistant.
At I/O, we’re announcing new solutions for Actions on Google that were built specifically with you in mind. Whether you build for web, mobile, or smart home, these new tools will help make your content and services available to people who want to use their voice to get things done.
Enhance your presence in Search and the Assistant
Help people with their “how to” questions
Every day, people turn to the internet to ask “how to” questions, like how to tie a tie, how to fix a faucet, or how to install a dog door. At I/O, we’re introducing support for How-to markup that lets you power richer and more helpful results in Search and the Assistant.
Adding How-to markup to your pages will enable the page to appear as a rich result on mobile Search and on Google Assistant Smart Displays. This is an incredibly lightweight way for web developers and creators to connect with millions of people, giving them helpful step-by-step instructions with video, images and text. You can start seeing How-to markup results on Search today, and your content will become available on the Smart Displays in the coming months.
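As a rough illustration, How-to markup is schema.org HowTo structured data embedded in the page. The values below are invented; see the structured-data documentation for the full schema:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to tie a tie",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Cross the wide end",
      "text": "Cross the wide end over the narrow end.",
      "image": "https://example.com/step1.jpg"
    },
    {
      "@type": "HowToStep",
      "name": "Loop through",
      "text": "Bring the wide end up through the neck loop."
    }
  ]
}
</script>
```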
Here’s an example where DIY Network added markup to their existing content on the web to provide a more helpful, interactive result on both Google Search and the Assistant:
For content creators that don’t maintain a website, we created a How-to Video Template where video creators can upload a simple spreadsheet with titles, text and timestamps for their YouTube video, and we’ll handle the rest. This is a simple way to transform your existing how-to videos into interactive, step-by-step tutorials across Google Assistant Smart Displays and Android phones.
Check out how REI is getting extra mileage out of their YouTube video:
How-to Video Templates are in developer preview so you can start building today, and your content will become available on Android phones and Smart Displays in the coming months.
Easier engagement with your apps
Help people quickly get things done with App Actions
If you’re an app developer, people are turning to your apps every day to get things done. And we see people turn to the Assistant every day for a natural way to ask for help via voice. This offers an opportunity to use intents to create voice-based entry points from the Assistant to the right spot in your app.
Last year, we previewed App Actions, a simple mechanism for Android developers that uses intents from the Assistant to deep link to exactly the right spot in your app. At I/O, we are announcing the release of built-in intents for four new App Action categories: Health & Fitness, Finance and Banking, Ridesharing, and Food Ordering. Using these intents, you can integrate with the Assistant in no time.
If I wanted to track my run with Nike Run Club, I could just say “Hey Google, start my run in Nike Run Club” and the app automatically starts tracking my run. Or, let’s say I just finished dinner with my friend Chad and we’re splitting the check. I can say “Hey Google, send $15 to Chad on PayPal” and the Assistant takes me right into PayPal, I log in, and all of my information is filled in – all I need to do is hit send.
Each of these integrations was completed in less than a day with the addition of an Actions.xml file that handles the mapping of intents between your app and the Actions platform. You can start building with these new intents today and deploy to Assistant users on Android in the coming months. This is a huge opportunity to offer your fans an effortless way to engage more frequently with your apps.
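For a sense of the shape of that file, here’s a minimal actions.xml sketch; the deep-link URL template and parameter names are illustrative:

```xml
<!-- res/xml/actions.xml — a minimal sketch; the deep-link URL and
     parameter names are invented for illustration. -->
<actions>
  <action intentName="actions.intent.START_EXERCISE">
    <fulfillment urlTemplate="myapp://exercise{?exerciseType}">
      <parameter-mapping
          intentParameter="exercise.name"
          urlParameter="exerciseType" />
    </fulfillment>
  </action>
</actions>
```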
Build for devices in the home
Take advantage of Smart Displays’ interactive screens
Last year, we saw the introduction of the Smart Display as a new device category. The interactive visual surface opens up many new possibilities for developers.
Here’s an example of what you can build when you leverage the full screen of a Smart Display:
Interactive Canvas is available for building games starting today, and we’ll be adding more categories soon. Visit the Actions Console to be one of the first to try it out.
Enable smart home devices to communicate locally
There are now more than 30,000 connected devices that work with the Assistant across 3,500 brands, and today, we’re excited to announce a new suite of local technologies that are specifically designed to create an even better smart home.
We’re introducing a preview of the Local Home SDK, which enables you to run your smart home code locally on Google Home speakers and Nest Displays and use their radios to communicate locally with your smart devices. This reduces cloud hops and brings a new level of speed and reliability to the smart home. We’ve been working with some amazing partners on testing this SDK, including Philips, Wemo, TP-Link, and LIFX, and we’re excited to open it up to all developers next month.
Make setup more seamless
And, through the Local Home SDK, we’re making device setup more seamless, building on the streamlined setup experience we launched in partnership with GE smart lights this past October. So far, people have loved the ability to set up their lights in less than a minute in the Google Home app. We’re now scaling this to more partners, so go here if you’re interested.
Make your devices smart with Assistant Connect
Also, at CES earlier this year we previewed Google Assistant Connect which leverages the Local Home SDK. Assistant Connect enables smart home and appliance developers to easily add Assistant functionality into their devices at low cost. It does this by offloading a lot of work onto the Assistant to complete Actions, display content and respond to commands. We've been hard at work developing the platform along with the first products built on it by Anker, Leviton and Tile. We can't wait to show you more about Assistant Connect later this year.
New device types and traits
For those of you creating Actions for the smart home, we’re also releasing 16 new device types and three new device traits including LockUnlock, ArmDisarm, and Timer. Head over to our developer documentation for the full list of 38 device types and 18 device traits, and check out our sample project on GitHub to start building.
Get started with our new tools for all types of developers
Whether you’re looking to extend the reach of your content, drive more usage in your apps, or build custom Assistant-powered experiences, you now have more tools to do so.
If you want to learn more about how you can start building with these tools, check out our website to get started and our schedule so you can tune in to all of our developer talks that we’ll be hosting throughout the week.
Today Android is celebrating two amazing milestones. Android has reached version 10! And today, Android is running on more than 2.5 billion active devices.
With Android Q, we’ve focused on three themes: innovation, security and privacy, and digital wellbeing. We want to help you take advantage of the latest new technology -- 5G, foldables, edge-to-edge screens, on-device AI, and more -- while making sure users' security, privacy, and wellbeing are always a top priority.
Earlier at Google I/O we highlighted what’s new in Android Q and unveiled the latest update, Android Q Beta 3. Your feedback continues to be extremely valuable in shaping today’s update as well as our final release to the ecosystem in the fall.
This year, Android Q Beta 3 is available on 15 partner devices from 12 OEMs -- that’s twice as many devices as last year! It’s all thanks to Project Treble and especially to our partners who are committed to accelerating updates to Android users globally -- Huawei, Xiaomi, Nokia, Sony, Vivo, OPPO, OnePlus, ASUS, LGE, TECNO, Essential, and realme.
Visit android.com/beta to see the full list of Beta devices and learn how to get today’s update on your device. If you have a Pixel device, you can enroll here to get Beta 3 -- if you’re already enrolled, watch for the update coming soon. To get started developing with Android Q Beta, visit developer.android.com/preview.
Privacy and security
As we talked about at Google I/O, privacy and security are important to our whole company and in Android Q we’ve added many more protections for users.
In Android Q, privacy has been a central focus, from strengthening protections in the platform to designing new features with privacy in mind. It’s more important than ever to give users control -- and transparency -- over how information is collected and used by apps, and by our phones.
Building on our work in previous releases, Android Q includes extensive changes across the platform to improve privacy and give users control -- from improved system UI to stricter permissions to restrictions on what data apps can use.
For example, Android Q gives users more control over when apps can get location. Apps still ask the user for permission, but now in Android Q the user has greater choice over when to allow access to location -- such as only while the app is in use, all the time, or never. Read the developer guide for details on how to adapt your app for the new location controls.
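A minimal sketch of requesting the new location grants (the request code is arbitrary; on Q, background access is a separate permission):

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import android.os.Build
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

fun requestLocation(activity: Activity) {
    val permissions = mutableListOf(Manifest.permission.ACCESS_FINE_LOCATION)
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        // On Q, "all the time" access is a separate, user-controlled grant.
        permissions += Manifest.permission.ACCESS_BACKGROUND_LOCATION
    }
    val missing = permissions.filter {
        ContextCompat.checkSelfPermission(activity, it) != PackageManager.PERMISSION_GRANTED
    }
    if (missing.isNotEmpty()) {
        ActivityCompat.requestPermissions(activity, missing.toTypedArray(), 42)
    }
}
```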
Outside of location, we also introduced the Scoped Storage feature to give users control over files and prevent apps from accessing sensitive user or app data. Your feedback has helped us refine this feature, and we recently announced several changes to make it easier to support. These are now available in Beta 3.
Another important change is restricting app launches from the background, which prevents apps from unexpectedly jumping into the foreground and taking over focus. In Beta 3 we’re transitioning from toast warnings to actually blocking these launches.
To keep users secure, we’ve extended our BiometricPrompt authentication framework to support biometrics at a system level. We’re extending support for passive authentication methods such as face, and we’ve added implicit and explicit authentication flows. In the explicit flow, the user must explicitly confirm the transaction. The new implicit flow is designed as a lighter-weight alternative for transactions with passive authentication, with no need for users to explicitly confirm.
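Here’s a sketch using the AndroidX BiometricPrompt wrapper, where setConfirmationRequired(false) opts into the implicit flow; the strings and callback handling are illustrative:

```kotlin
import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity

fun authenticate(activity: FragmentActivity) {
    val executor = ContextCompat.getMainExecutor(activity)
    val prompt = BiometricPrompt(activity, executor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                // Proceed with the transaction.
            }
        })
    val info = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Confirm it's you")
        .setNegativeButtonText("Cancel")
        .setConfirmationRequired(false)  // implicit flow: no explicit tap needed
        .build()
    prompt.authenticate(info)
}
```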
Android Q also adds support for TLS 1.3, a major revision to the TLS standard that includes performance benefits and enhanced security. Our benchmarks indicate that secure connections can be established as much as 40% faster with TLS 1.3 compared to TLS 1.2. TLS 1.3 is enabled by default for all TLS connections made through Android’s TLS stack, called Conscrypt, regardless of target API level. See the docs for details.
Today we also announced Project Mainline, a new approach to keeping Android users secure and their devices up-to-date with important code changes, direct from Google Play. With Project Mainline, we’re now able to update specific internal components within the OS itself, without requiring a full system update from your device manufacturer. This means we can help keep the OS code on devices fresher, drive a new level of consistency, and bring the latest AOSP code to users faster -- and for a longer period of time.
We plan to update Project Mainline modules in much the same way as app updates are delivered today -- downloading the latest versions from Google Play in the background and loading them the next time the phone starts up. The source code for the modules will continue to live in the Android Open Source Project, and updates will be fully open-sourced as they are released. Also, because they’re open source, they’ll include improvements and bug fixes contributed by our many partners and developer community worldwide.
For users, the benefits are huge, since their devices will always be running the latest versions of the modules, including the latest updates for security, privacy, and consistency. For device makers, carriers, and enterprises, the benefits are also huge, since they can optimize and secure key parts of the OS without the cost of a full system update.
For app and game developers, we expect Project Mainline to help drive consistency of platform implementation in key areas across devices, over time bringing greater uniformity that will reduce development and testing costs and help to make sure your apps work as expected. All devices running Android Q or later will be able to get Project Mainline, and we’re working closely with our partners to make sure their devices are ready.
Innovation and new experiences
Android is shaping the leading edge of innovation. With our ecosystem partners, we’re enabling new experiences through a combination of hardware and software advances.
This year, display technology will take a big leap with foldable devices coming to the Android ecosystem from several top device makers. Folded, these devices work like a phone; unfolded, they give you a beautiful tablet-sized screen.
We’ve optimized Android Q to ensure that screen continuity is seamless in these transitions, and apps and games can pick up right where they left off. For multitasking, we’ve made some changes to onResume and onPause to support multi-resume and notify your app when it has focus. We've also changed how the resizeableActivity manifest attribute works, to help you manage how your app is displayed on large screens.
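A minimal sketch of the new focus callback (the resource handling shown is illustrative):

```kotlin
import android.app.Activity

// On Q, multiple activities can be resumed at once in multi-window; this new
// callback tells your app which activity actually has focus.
class PlayerActivity : Activity() {
    override fun onTopResumedActivityChanged(isTopResumedActivity: Boolean) {
        if (isTopResumedActivity) {
            // Reacquire exclusive resources (e.g. camera, audio focus).
        } else {
            // Another resumed activity is on top; release shared resources.
        }
    }
}
```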
Our partners have already started showing their innovative foldable devices, with more to come. You can get started building and testing today with our foldables emulator in the canary release of Android Studio 3.5.
5G networks are the next evolution of wireless technology -- providing consistently faster speeds and lower latency. For developers, 5G can unlock new kinds of experiences in your apps and supercharge existing ones.
Android Q adds platform support for 5G and extends existing APIs to help you transform your apps for 5G. You can use connectivity APIs to detect if the device has a high bandwidth connection and check whether the connection is metered. With these your apps and games can tailor rich, immersive experiences to users over 5G.
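For example, here’s a sketch of such a capability check; the bandwidth cutoff is an invented threshold, not a platform constant:

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.net.NetworkCapabilities

fun shouldStreamHighQuality(context: Context): Boolean {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
    val caps: NetworkCapabilities =
        cm.getNetworkCapabilities(cm.activeNetwork) ?: return false
    // High bandwidth: illustrative 50 Mbps cutoff for rich media experiences.
    val fastEnough = caps.linkDownstreamBandwidthKbps > 50_000
    // Avoid surprising data charges on metered connections.
    val unmetered = caps.hasCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED)
    return fastEnough && unmetered
}
```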
With Android’s open ecosystem and range of partners, we expect the Android ecosystem to scale to support 5G quickly. This year, over a dozen Android device makers are launching 5G-ready devices, and more than 20 carriers will launch 5G networks around the world, with some already operating at broad scale.
On top of hardware innovation, we’re continuing to see Android’s AI transforming the OS itself to make it smarter and easier to use, for a wider range of people. A great example is Live Caption, a new feature in Android Q that automatically captions media playing on your phone.
Many people watch videos with captions on -- the captions help them keep up, even when on the go or in a crowded place. But for 466 million Deaf and Hard of Hearing people around the world, captions are more than a convenience -- they make content accessible. We worked with the Deaf community to develop Live Caption.
Live Caption brings real-time captions to media on your phone - videos, podcasts, and audio messages, across any app—even stuff you record yourself. Best of all, it doesn’t even require a network connection -- everything happens on the device, thanks to a breakthrough in speech recognition that we made earlier this year. The live speech models run right on the phone, and no audio stream ever leaves your device.
For developers, Live Caption expands the audience for your apps and games by making digital media more accessible with a single tap. Live Caption will be available later this year.
Suggested actions in notifications
In Android Pie we introduced smart replies for notifications, letting users engage with your apps directly from notifications. We provided the APIs to attach replies and actions, but you needed to build those on your own.
Now in Android Q we want to make smart replies available to all apps right now, without you needing to do anything. Starting in Beta 3, we’re enabling system-provided smart replies and actions that are inserted directly into notifications by default.
Android Q suggestions are powered by an on-device ML service built into the platform -- the same service that backs our text classifier entity recognition service. We’ve built it with user privacy in mind, and the ML processing happens completely on the device, not on a backend server.
Because suggested actions are based on the TextClassifier service, they can take advantage of new capabilities we’ve added in Android Q, such as language detection. You can also call the TextClassifier APIs directly to generate suggested replies and actions yourself, and mix those with your own as needed.
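A sketch of calling the TextClassifier directly for conversation actions (the message content and suggestion count are illustrative):

```kotlin
import android.content.Context
import android.view.textclassifier.ConversationActions
import android.view.textclassifier.TextClassificationManager

fun suggestReplies(context: Context, incomingText: CharSequence) {
    val tcm = context.getSystemService(TextClassificationManager::class.java)
    // Wrap the incoming message; PERSON_USER_OTHERS marks it as from the other party.
    val message = ConversationActions.Message.Builder(
        ConversationActions.Message.PERSON_USER_OTHERS
    ).setText(incomingText).build()
    val request = ConversationActions.Request.Builder(listOf(message))
        .setMaxSuggestions(3)
        .build()
    // Runs entirely on-device; no text leaves the phone.
    val actions = tcm.textClassifier.suggestConversationActions(request)
    // Mix actions.conversationActions with your own replies as needed.
}
```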
Many users prefer apps that offer a UI with a dark theme they can switch to when light is low, to reduce eye strain and save battery. Users have also asked for a simple way to enable dark theme everywhere across their devices. Dark theme has been a popular request for a while, and in Android Q, it’s finally here.
Starting in Android Q Beta 3, users can activate a new system-wide dark theme by going to Settings > Display, using the new Quick Settings tile, or turning on Battery Saver. This changes the system UI to dark, and enables the dark theme of apps that support it. Apps can build their own dark themes, or they can opt-in to a new Force Dark feature that lets the OS create a dark version of their existing theme. All you have to do is opt-in by setting android:forceDarkAllowed="true" in your app’s current theme.
You may also want to take complete control over your app’s dark styling, which is why we’ve also been hard at work improving AppCompat’s DayNight feature. By using DayNight, apps can offer a dark theme to all of their users, regardless of what version of Android they’re using on their devices. For more information, see here.
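A minimal sketch, assuming your app theme extends a DayNight variant; MODE_NIGHT_FOLLOW_SYSTEM defers to the new Q system setting:

```kotlin
import androidx.appcompat.app.AppCompatDelegate

// One call switches the whole app between light and dark.
fun applyUserThemeChoice(choice: String) {
    AppCompatDelegate.setDefaultNightMode(
        when (choice) {
            "dark" -> AppCompatDelegate.MODE_NIGHT_YES
            "light" -> AppCompatDelegate.MODE_NIGHT_NO
            else -> AppCompatDelegate.MODE_NIGHT_FOLLOW_SYSTEM
        }
    )
}
```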
Many of the latest Android devices feature beautiful edge-to-edge screens, and users want to take advantage of every bit of them. In Android Q we’re introducing a new fully gestural navigation mode that eliminates the navigation bar area and allows apps and games to use the full screen to deliver their content. It retains the familiar Back, Home, and recents navigation through edge swipes rather than visible buttons.
Users can switch to gestures in Settings > System > Gestures. There are currently two gestures: swiping up from the bottom of the screen takes the user to the Home screen, with a hold bringing up Recents; and swiping from the screen’s left or right edge triggers the Back action.
To blend seamlessly with gestural navigation, apps should go edge-to-edge, drawing behind the navigation bar to create an immersive experience. To implement this, apps should use the setSystemUiVisibility() API to be laid out fullscreen, and then handle WindowInsets as appropriate to ensure that important pieces of UI are not obscured. More information is here.
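A sketch of that recipe follows; the padding strategy is illustrative, and real apps usually apply insets per view:

```kotlin
import android.app.Activity
import android.view.View

fun goEdgeToEdge(activity: Activity, content: View) {
    // Request a fullscreen layout that draws behind the system bars.
    activity.window.decorView.systemUiVisibility =
        View.SYSTEM_UI_FLAG_LAYOUT_STABLE or
        View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION or
        View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN
    // Then use the insets to keep important UI out of the gesture areas.
    content.setOnApplyWindowInsetsListener { view, insets ->
        view.setPadding(0, insets.systemWindowInsetTop, 0, insets.systemWindowInsetBottom)
        insets
    }
}
```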
Digital wellbeing is another theme of our work on Android -- we want to give users the visibility and tools to find balance with the way they use their phones. Last year we launched Digital Wellbeing with Dashboards, App Timers, Flip to Shush, and Wind Down mode. These tools are really helping. App timers helped users stick to their goals over 90% of the time, and users of Wind Down had a 27% drop in nightly usage.
This year we’re continuing to expand our features to help people find balance with digital devices, adding Focus Mode and Family Link.
Focus Mode is designed for all those times you’re working or studying and want to focus to get something done. With Focus Mode, you can pick the apps that you think might distract you and silence them - for example, pausing email and news apps while leaving Maps and text messaging active. You can then use Quick Tiles to turn on Focus Mode any time you want to focus. Under the covers, these apps will be paused - until you come out of Focus Mode! Focus Mode is coming to Android 9 Pie and Android Q devices this Fall.
Family Link is a new set of controls to help parents. Starting in Android Q, Family Link will be built right into Settings on the device. When you set up a new device for your child, Family Link will help you connect it to a parent. You’ll be able to set daily screen time limits, see the apps where your child is spending time, review any new apps your child wants to install, and even set a device bedtime so your child can disconnect and get to sleep. And now in Android Q you can also set time limits on specific apps, as well as give your kids Bonus Time if you want them to have just five more minutes at bedtime. Family Link is coming to Android P and Q devices this Fall. Make sure to check out the other great wellbeing apps in the recent Google Play awards.
Family Link lets parents set a device bedtime and even give bonus minutes.
We’re continuing to extend the foundations of Android with more capabilities to help you build new experiences for your users -- here are just a few.
Improved peer-to-peer and internet connectivity
In Android Q we’ve refactored the Wi-Fi stack to improve privacy and performance, and also to improve common use cases like managing IoT devices and suggesting internet connections -- without requiring the location permission. The network connection APIs make it easier to manage IoT devices over local Wi-Fi for peer-to-peer functions like configuring, downloading, or printing. The network suggestion APIs let apps surface preferred Wi-Fi networks to the user for internet connectivity.
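A minimal sketch of the suggestion API; the SSID and passphrase are placeholders:

```kotlin
import android.content.Context
import android.net.wifi.WifiManager
import android.net.wifi.WifiNetworkSuggestion

fun suggestNetwork(context: Context) {
    val suggestion = WifiNetworkSuggestion.Builder()
        .setSsid("example-network")
        .setWpa2Passphrase("example-passphrase")
        .build()
    val wifiManager = context.getSystemService(Context.WIFI_SERVICE) as WifiManager
    // The platform may surface this network to the user for internet
    // connectivity; no location permission is needed just to suggest it.
    wifiManager.addNetworkSuggestions(listOf(suggestion))
}
```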
Wi-Fi performance modes
In Android Q apps can now request adaptive Wi-Fi by enabling high performance and low latency modes. These will be of great benefit where low latency is important to the user experience, such as real-time gaming, active voice calls, and similar use cases. The platform works with the device firmware to meet the requirement with the lowest power consumption. To use the new performance modes, acquire a WifiLock created with WifiManager.createWifiLock() and the new low-latency lock type.
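A sketch of holding the low-latency lock only for the duration of a latency-sensitive session (the lock tag is arbitrary):

```kotlin
import android.content.Context
import android.net.wifi.WifiManager

fun runLatencySensitiveSession(context: Context, session: () -> Unit) {
    val wifiManager = context.getSystemService(Context.WIFI_SERVICE) as WifiManager
    val lock = wifiManager.createWifiLock(
        WifiManager.WIFI_MODE_FULL_LOW_LATENCY, "myapp:low-latency"
    )
    lock.acquire()
    try {
        session()  // e.g. a real-time game session or voice call
    } finally {
        lock.release()  // always release so the platform can save power
    }
}
```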
Full Wi-Fi RTT support for accurate indoor positioning
In Android 9 Pie we introduced RTT APIs for indoor positioning to accurately measure distance to nearby Wi-Fi Access Points (APs) that support the IEEE 802.11mc protocol, based on measuring the round-trip time of Wi-Fi packets. Now in Android Q, we’ve completed our implementation of the 802.11mc standard, adding an API to obtain location information of each AP being ranged, configured by their owner during installation.
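A hedged sketch of a ranging request; permission checks and the Wi-Fi scan that produces scanResults are omitted:

```kotlin
import android.content.Context
import android.net.wifi.ScanResult
import android.net.wifi.rtt.RangingRequest
import android.net.wifi.rtt.RangingResult
import android.net.wifi.rtt.RangingResultCallback
import android.net.wifi.rtt.WifiRttManager

// Ranges against 802.11mc-capable access points; requires the fine
// location permission.
fun rangeAccessPoints(context: Context, scanResults: List<ScanResult>) {
    val rttManager = context.getSystemService(WifiRttManager::class.java)
    val request = RangingRequest.Builder().addAccessPoints(scanResults).build()
    rttManager.startRanging(request, context.mainExecutor,
        object : RangingResultCallback() {
            override fun onRangingResults(results: List<RangingResult>) {
                // results[i].distanceMm gives the distance to each AP; on Q,
                // unverifiedResponderLocation carries the AP's configured location.
            }
            override fun onRangingFailure(code: Int) { /* handle failure */ }
        })
}
```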
Audio playback capture
You saw how Live Caption can take audio from any app and instantly turn it into on-screen captions. It’s a seamless experience that shows how powerful it can be for one app to share its audio stream with another. In Android Q, any app that plays audio can let other apps capture its audio stream using a new API. In addition to enabling captioning and subtitles, the API lets you support popular use-cases like live-streaming games, all without latency impact on the source app or game.
We’ve designed this new capability with privacy and copyright protection in mind, so the ability for an app to capture another app's audio is constrained, giving apps full control over whether their audio streams can be captured. Read more here.
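A sketch of wiring up the capture; obtaining the MediaProjection through the usual user-consent flow is omitted here:

```kotlin
import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioPlaybackCaptureConfiguration
import android.media.AudioRecord
import android.media.projection.MediaProjection

fun buildPlaybackCapture(projection: MediaProjection): AudioRecord {
    // Capture only streams tagged as media; apps that opt out are never captured.
    val config = AudioPlaybackCaptureConfiguration.Builder(projection)
        .addMatchingUsage(AudioAttributes.USAGE_MEDIA)
        .build()
    val format = AudioFormat.Builder()
        .setSampleRate(44100)
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
        .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
        .build()
    return AudioRecord.Builder()
        .setAudioFormat(format)
        .setAudioPlaybackCaptureConfig(config)
        .build()
}
```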
Dynamic depth for photos
On devices that advertise support, apps can now request a Dynamic Depth image, which consists of a JPEG, XMP metadata describing the depth-related elements, and a depth and confidence map embedded in the same file. Requesting a JPEG + Dynamic Depth image makes it possible for you to offer specialized blurs and bokeh options in your app. You can even use the data to create 3D images or support AR photography use-cases. Dynamic Depth is an open format for the ecosystem -- the latest version of the spec is here. We're working with our device-maker partners to make it available across devices running Android Q and later.
With Dynamic Depth image you can offer specialized blurs and bokeh options in your app
New audio and video codecs
Android Q adds support for the open source video codec AV1, which allows media providers to stream high quality video content to Android devices using less bandwidth. In addition, Android Q supports audio encoding using Opus, a codec optimized for speech and music streaming, and HDR10+ for high dynamic range video on devices that support it. The MediaCodecInfo API introduces an easier way to determine the video rendering capabilities of an Android device: for any given codec, you can obtain a list of supported sizes and frame rates.
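For example, here’s a sketch of querying AV1 decode support; the output formatting is illustrative:

```kotlin
import android.media.MediaCodecList
import android.media.MediaFormat

fun logAv1Support() {
    for (info in MediaCodecList(MediaCodecList.REGULAR_CODECS).codecInfos) {
        if (info.isEncoder) continue
        // MIMETYPE_VIDEO_AV1 ("video/av01") is new in Q.
        val type = info.supportedTypes.firstOrNull {
            it.equals(MediaFormat.MIMETYPE_VIDEO_AV1, ignoreCase = true)
        } ?: continue
        val video = info.getCapabilitiesForType(type).videoCapabilities
        println("${info.name}: sizes ${video.supportedWidths} x ${video.supportedHeights}, " +
                "frame rates ${video.supportedFrameRates}")
    }
}
```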
Vulkan 1.1 and ANGLE
We're continuing to expand the impact of Vulkan on Android, our implementation of the low-overhead, cross-platform API for high-performance 3D graphics. We’re working together with our device manufacturer partners to make Vulkan 1.1 a requirement on all 64-bit devices running Android Q and higher, and a recommendation for all 32-bit devices. For game and graphics developers using OpenGL, we’re also working towards a standard, updateable OpenGL driver for all devices built on Vulkan. In Android Q we're adding experimental support for ANGLE on top of Vulkan on Android devices. See the docs for details.
Neural Networks API 1.2
In NNAPI 1.2 we've added 60 new ops, including ARGMAX, ARGMIN, and quantized LSTM, alongside a range of performance optimizations. This lays the foundation for accelerating a much greater range of models -- such as those for object detection and image segmentation. We are working with hardware vendors and popular machine learning frameworks such as TensorFlow to optimize and roll out support for NNAPI 1.2.
When devices get too warm, they may throttle the CPU and/or GPU, and this can affect apps and games in unexpected ways. Now in Android Q, apps and games can use a thermal API to monitor changes on the device and take action to help restore normal temperature. For example, streaming apps can reduce resolution/bit rate or network traffic, a camera app could disable flash or intensive image enhancement, or a game could reduce frame rate or polygon tessellation. Read more here.
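A minimal sketch of listening for thermal status; the severity threshold chosen here is illustrative, and what you throttle is up to the app:

```kotlin
import android.content.Context
import android.os.PowerManager

fun watchThermalStatus(context: Context, onThrottle: (Boolean) -> Unit) {
    val pm = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    // Status constants increase with severity, from THERMAL_STATUS_NONE
    // up through THERMAL_STATUS_SHUTDOWN.
    pm.addThermalStatusListener { status ->
        onThrottle(status >= PowerManager.THERMAL_STATUS_SEVERE)
    }
}
```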
Android Q introduces several improvements to the ART runtime to help your apps start faster, consume less memory, and run smoother -- without requiring any work from you. To help with initial app startup, Google Play is now delivering cloud-based profiles along with APKs. These are anonymized, aggregate ART profiles that let ART pre-compile parts of your app even before it's run. Cloud-based profiles benefit all apps and they're already available to devices running Android P and higher.
We’re also adding Generational Garbage Collection to ART's Concurrent Copying (CC) Garbage Collector. Generational CC collects young-generation objects separately, incurring much lower cost as compared to full-heap GC. It makes garbage collection more efficient in terms of time and CPU, reduces jank, and helps apps run better on lower-end devices.
More Android Q Beta devices, more Treble momentum than ever
In 2017 we launched Project Treble as part of Android Oreo, with a goal of accelerating OS updates. Treble provides a consistent, testable interface between Android and the underlying device code from device makers and silicon manufacturers, which makes porting a new OS version much simpler and more modular.
In 2018 we worked closely with our partners to bring the first OS updates to their Treble devices. The result: last year at Google I/O we had 8 devices from 7 partners joining our Android P Beta program, together with our Pixel and Pixel 2 devices. Fast forward to today -- we’re seeing updates to Android Pie accelerating strongly, with 2.5 times the footprint compared to Android Oreo's at the same time last year.
This year with Android Q we’re seeing even more momentum, and we have 21 devices from 12 top global partners joining us to release Android Q Beta 3 -- in addition to all Pixel devices. We’re also providing Q Beta 3 Generic System Images (GSI), a testing environment for other supported Treble devices. All of these offer the same behaviors, APIs, and features -- giving you an incredible variety of devices for testing your apps, and more ways for you to get an early look at Android Q.
To build with Android Q, download the Android Q Beta SDK and tools into Android Studio 3.3 or higher, and follow these instructions to configure your environment. If you want the latest fixes for Android Q related changes, we recommend you use Android Studio 3.5 or higher.
How do I get Beta 3?
It's easy! Just enroll any Pixel device here to get the update over-the-air. If you're already enrolled, you'll receive the update soon; no action is needed on your part. Downloadable system images are also available.
You can also get Beta 3 on any of the other devices participating in the Android Q Beta program, from some of our top device maker partners. You can see the full list of supported partner and Pixel devices at android.com/beta. For each device you'll find specs and links to the manufacturer's dedicated site for downloads, support, and to report issues.
For even broader testing on supported devices, you can also get Android GSI images, and if you don’t have a device you can test on the Android Emulator -- just download the latest emulator system images via the SDK Manager in Android Studio.