Giving users more control over their location data

Posted by Jen Chai, Product Manager

Location data can deliver amazing, rich mobile experiences for users on Android, such as finding a restaurant nearby, tracking the distance of a run, and getting turn-by-turn directions as you drive. Location is also one of the most sensitive types of personal information for a user. We want to give users simple, easy-to-understand controls over the data they provide to apps, and yesterday we announced that in Android Q we are giving users more control over location permissions. We are delighted by the innovative location experiences you provide to users through your apps, and we want to make this transition as straightforward for you as possible. This post dives deeper into the location permission changes in Q, what they may mean for your app, and how to get started with any updates needed.

Previously, a user had a single control to allow or deny an app access to device location, which covered location usage by the app both while it was in use and while it wasn't. Starting in Android Q, users have a new option to give an app access to location only when the app is being used; in other words, when the app is in the foreground. This means users will have a choice of three options for providing location to an app:

  • "All the time" - this means an app can access location at any time
  • "While in use" - this means an app can access location only while the app is being used
  • "Deny" - this means an app cannot access location

Some apps or features within an app may only need location while the app is being used. For example, if a feature allows a user to search for a restaurant nearby, the app only needs to understand the user's location when the user opens the app to search for a restaurant.

However, some apps may need location even when the app is not in use. For example, an app might automatically track the mileage you drive for tax filing, without requiring you to interact with the app.

The new location control allows users to decide when device location data is provided to an app and prevents an app from getting location data that it may not need. Users will see this new option in the same permissions dialog that is presented today when an app requests access to location. This permission can also be changed at any time for any app from Settings > Location > App permission.

Here's how to get started

We know these updates may impact your apps. We respect our developer community, and our goal is to approach any change like this very carefully. We want to support you as much as we can by (1) releasing developer-impacting features in the first Q Beta to give you as much time as possible to make any updates needed in your apps and (2) providing detailed information in follow-up posts like this one as well as in the developer guides and privacy checklist. Please let us know if there are ways we can make the guides more helpful!

If your app has a feature requiring "all the time" permission, you'll need to add the new ACCESS_BACKGROUND_LOCATION permission to your manifest file when you target Android Q. If your app targets Android 9 (API level 28) or lower, the ACCESS_BACKGROUND_LOCATION permission will be automatically added for you by the system if you request either ACCESS_FINE_LOCATION or ACCESS_COARSE_LOCATION. A user can decide to provide or remove these location permissions at any time through Settings. To maintain a good user experience, design your app to gracefully handle when your app doesn't have background location permission or when it doesn't have any access to location.
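
To see what the runtime request can look like, here is a minimal Kotlin sketch of checking for and requesting background location (the function name and request code are illustrative; the permission must also be declared in your manifest as described above):

import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Arbitrary request code, echoed back in onRequestPermissionsResult().
private const val REQUEST_BACKGROUND_LOCATION = 1

fun requestBackgroundLocationIfNeeded(activity: AppCompatActivity) {
    val granted = ContextCompat.checkSelfPermission(
        activity, Manifest.permission.ACCESS_BACKGROUND_LOCATION
    ) == PackageManager.PERMISSION_GRANTED
    if (!granted) {
        // Triggers the system dialog with the new location permission options.
        ActivityCompat.requestPermissions(
            activity,
            arrayOf(Manifest.permission.ACCESS_BACKGROUND_LOCATION),
            REQUEST_BACKGROUND_LOCATION
        )
    }
}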

Users will also be more likely to grant the location permission if they clearly understand why your app needs it. Consider asking for the location permission from users in context, when the user is turning on or interacting with a feature that requires it, such as when they are searching for something nearby. In addition, only ask for the level of access required for that feature. In other words, don't ask for "all the time" permission if the feature only requires "while in use" permission.

To learn more, read the developer guide on how to handle the new location controls.

Android Jetpack Navigation Stable Release

Posted by Ian Lake, Software Engineering Lead & Jisha Abubaker, Product Manager

Cohesive tooling and guidance for implementing predictable in-app navigation

Today we're happy to announce the stable release of the Android Jetpack Navigation component.

The Jetpack Navigation component's suite of libraries, tooling and guidance provides a robust, complete navigation framework, freeing you from the challenges of implementing navigation yourself and giving you certainty that all edge cases are handled correctly.

With the Jetpack Navigation component you can:

  • Handle basic user actions like Up & Back buttons so that they work consistently across devices and screens.
  • Allow users to land on any part of your app via deep links and build consistent and predictable navigation within your app.
  • Improve type safety of arguments passed from one screen to another, decreasing the chances of runtime crashes as users navigate in your app.
  • Add navigation experiences like navigation drawers and bottom navigation consistent with the Material Design guidelines.
  • Visualize and manipulate your navigation flows easily with the Navigation Editor in Android Studio 3.3.

The Jetpack Navigation component adheres to the Principles of Navigation, providing consistent and predictable navigation no matter how simple or complex your app may be.

Simplify navigation code with Jetpack Navigation Libraries

The Jetpack Navigation component provides a framework for in-app navigation that makes it possible to abstract away the implementation details, keeping your app code free of navigation boilerplate.

To get started with the Jetpack Navigation component in your project, add the Navigation artifacts, available on Google's Maven repository in Java or Kotlin, to your app's build.gradle file:

dependencies {
    def nav_version = "2.0.0"

    // Java
    implementation "androidx.navigation:navigation-fragment:$nav_version"
    implementation "androidx.navigation:navigation-ui:$nav_version"

    // Kotlin KTX
    implementation "androidx.navigation:navigation-fragment-ktx:$nav_version"
    implementation "androidx.navigation:navigation-ui-ktx:$nav_version"
}

Note: If you have not yet migrated to androidx.*, the Jetpack Navigation stable component libraries are also available as android.arch.* artifacts in version 1.0.0.

navigation-runtime: This core library powers the navigation graph, which provides the structure of your in-app navigation: the screens or destinations that make up your app and the actions that link them. You can control how you navigate to destinations with a simple navigate() call. These destinations may be fragments, activities or custom destinations.
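
For instance, here is a small Kotlin sketch of a navigate() call from a fragment (the action ID, argument name, and fragment class are hypothetical; findNavController() comes from the navigation-fragment-ktx artifact described below):

import androidx.core.os.bundleOf
import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController

class HomeFragment : Fragment() {
    fun openDetails(itemId: String) {
        // R.id.action_home_to_details is an action defined in the navigation graph.
        findNavController().navigate(
            R.id.action_home_to_details,
            bundleOf("itemId" to itemId)
        )
    }
}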

navigation-fragment: This library builds upon navigation-runtime and provides out-of-the-box support for fragments as destinations. With this library, fragment transactions are now handled for you automatically.

navigation-ui: This library allows you to easily add navigation drawers, menus and bottom navigation to your app consistent with the Material Design guidelines.

Each of these libraries provides an Android KTX artifact with the -ktx suffix that builds upon the Java API, taking advantage of Kotlin-specific language features.

Tools to help you build predictable navigation workflows

Available in Android Studio 3.3 and above, the Navigation Editor lets you visually create your navigation graph, allowing you to manage user journeys within your app.

With integration into the manifest merger tool, Android Studio can automatically generate the intent filters necessary to enable deep linking to a specific screen in your app. With this feature, you can associate URLs with any screen of your app by simply setting an attribute on the navigation destination.

Navigation often requires passing data from one screen to another. For example, your list screen may pass an item ID to a details screen. Many of the runtime exceptions during navigation have been attributed to a lack of type safety guarantees as you pass arguments. These exceptions are hard to replicate and debug. Learn how you can provide compile time type safety with the Safe Args Gradle Plugin.
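
As a sketch of what that type safety looks like in Kotlin (the Directions class and argument below are generated by the plugin from a hypothetical navigation graph, not from a real project):

// Generated by Safe Args from an action "action_list_to_details" whose
// destination declares an Int argument named "itemId".
val directions = ListFragmentDirections.actionListToDetails(itemId = 42)
findNavController().navigate(directions)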

Guidance to get it right on the first try

Check out our brand new set of developer guides that encompass best practices to help you implement navigation correctly.

What developers say

Here's what Emery Coxe, Android Lead @ HomeAway, has to say about the Jetpack Navigation component:

"The Navigation library is well-designed and fully configurable, allowing us to integrate the library according to our specific needs.

With the Navigation Library, we refactored our legacy navigation drawer to support a dynamic, runtime-based configuration using custom views. It allowed us to add or remove new screens in the top-level experience of our app without creating any interdependencies between discretely packaged modules.

We were also able to get rid of all anti-patterns in our app around top-level navigation, removing explicit casts and hardcoded assumptions to instead rely directly on Navigation. This library is a fundamental component of modern Android development, and we intend to adopt it more broadly across our app moving forward."

Get started

Check out the migration guide and the developer guide to learn how you can get started using the Jetpack Navigation component in your app. We also offer a hands-on codelab and a sample app.

Also check out Google's Digital Wellbeing to see another real-world example of in-app navigation using the Android Jetpack Navigation component.

Feedback

Please continue to tell us about your experience with the Navigation component. If you have specific feedback on features or if you run into any issues, please file a bug.

Call Screen beta comes to Pixel phones in Canada



You may have been in a situation where you see an incoming call but you don’t recognize the number. If you’re like me, you probably don’t answer these anymore, worried they will be spam. However, that also means you can miss legitimate callers like your kid’s daycare, your realtor, or your bank.

Starting today, Canadians with a Pixel can now opt in to the Call Screen beta, a new feature that gives you help from the Google Assistant to find out who's calling and why.

To use Call Screen, when you get an incoming call, just hit the “Screen call” button and the Google Assistant will help you get answers to specific questions like who's calling, why and more. You'll see a transcript of the caller's responses in real-time, and then you can decide whether to pick up, respond by tapping quick replies like “I’ll call you back later,” hang up, or mark the call as spam.

Call Screen is a feature on Pixel devices, powered by the Google Assistant, that makes life easier and simpler. Like many AI-powered features on Pixel, including camera features and our music feature Now Playing, which helps you discover new music playing around you, Call Screen processes call details on-device, which means these experiences are fast, private to you, and use up less battery.

Select Canadian Pixel 2 and Pixel 3 owners will receive an email today with instructions on how to opt in to the Call Screen beta. All Pixel users can opt in to the beta here. Call Screen is currently available in English only.

A recipe for beating the record of most-calculated digits of pi

Editor’s note: Today, March 14, is Pi Day (3.14). Here at Google, we’re celebrating the day with a new milestone: A team at Google has broken the Guinness World Records™ title for most accurate value of pi.

Whether or not you realize it, pi is everywhere you look. It’s the ratio of the circumference of a circle to its diameter, so the next time you check your watch or see the turning wheels of a vehicle go by, you’re looking at pi. And since pi is an irrational number, there’s no end to how many of its digits can be calculated. You might know it as 3.14, but math and science pros are constantly working to calculate more and more digits of pi, so they can test supercomputers (and have a bit of healthy competition, too).

While I’ve been busy thinking about which flavor of pie I’m going to enjoy later today, Googler Emma Haruka Iwao has been busy using Google Compute Engine, powered by Google Cloud, to calculate the most accurate value of pi—ever. That’s 31,415,926,535,897 digits, to be exact. Emma used the power of the cloud for the task, making this the first time the cloud has been used for a pi calculation of this magnitude.

Here’s Emma’s recipe for what started out as a pie-in-the-sky idea to break a Guinness World Records title:

Step 1: Find inspiration for your calculation.

When Emma was 12 years old, she became fascinated with pi. “Pi seems simple—it starts with 3.14. When I was a kid, I downloaded a program to calculate pi on my computer,” she says. “At the time, the world record holders were Yasumasa Kanada and Daisuke Takahashi, who are Japanese, so it was really relatable for me growing up in Japan.”

Later on, when Emma was in college, one of her professors was Dr. Daisuke Takahashi, then the record holder for calculating the most accurate value of pi using a supercomputer. “When I told him I was going to start this project, he shared his advice and some technical strategies with me.”

Step 2: Combine your ingredients.

To calculate pi, Emma used an application called y-cruncher on 25 Google Cloud virtual machines. “The biggest challenge with pi is that it requires a lot of storage and memory to calculate,” Emma says. Her calculation required 170 terabytes of data to complete—that's roughly equivalent to the amount of data in the entire Library of Congress print collections.
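
For context (the post doesn’t spell out the underlying math): y-cruncher’s pi computations are based on the rapidly converging Chudnovsky series, which yields roughly 14 new digits of pi per term. In LaTeX:

\frac{1}{\pi} = 12 \sum_{k=0}^{\infty} \frac{(-1)^k \,(6k)!\,(545140134k + 13591409)}{(3k)!\,(k!)^3\,640320^{3k + 3/2}}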


Step 3: Bake for four months.

Emma’s calculation took the virtual machines about 121 days to complete. During that whole time, the Google Cloud infrastructure kept the servers going. If there’d been any failures or interruptions, it would’ve disrupted the calculation. When Emma checked to see if her end result was correct, she felt relieved when the number checked out. “I started to realize it was an exciting accomplishment for my team,” she says.

Step 4: Share a slice of your achievement.

Emma thinks there are a lot of mathematical problems out there to solve, and we’re just at the beginning of exploring how cloud computing can play a role. “When I was a kid, I didn’t have access to supercomputers. But even if you don’t work for Google, you can apply for various scholarships and programs to access computing resources,” she says. “I was very fortunate that there were Japanese world record holders that I could relate to. I’m really happy to be one of the few women in computer science holding the record, and I hope I can show more people who want to work in the industry what’s possible.”

At Google, Emma is a Cloud Developer Advocate, focused on high-performance computing and programming language communities. Her job is to work directly with developers, helping them to do more with the cloud and share information about how products work. And now, she’s also sharing her calculations: Google Cloud has published the computed digits entirely as disk snapshots, so they’re available to anyone who wants to access them. This means anyone can copy the snapshots, work on the results and use the computation resources in less than an hour. Without the cloud, the only way someone could access such a large dataset would be to ship physical hard drives. 

Today, though, Emma and her team are taking a moment to celebrate the new world record. And maybe a piece of pie, too. Emma’s favorite flavor? “I like apple pie—not too sweet.”

For the technical details on how Emma used Google Compute Engine to calculate pi, head over to the Google Cloud Platform blog.

Enabling a Safe Digital Advertising Ecosystem

Google has a crucial stake in a healthy and sustainable digital advertising ecosystem—something we've worked to enable for nearly 20 years. Every day, we invest significant team hours and technological resources in protecting the users, advertisers and publishers that make the internet so useful. And every year, we share key actions and data about our efforts to keep the ecosystem safe by enforcing our policies across platforms.

Dozens of new ads policies to take down billions of bad ads

In 2018, we faced new challenges in areas where online advertising could be used to scam or defraud users offline. For example, we created a new policy banning ads from for-profit bail bond providers because we saw evidence that this sector was taking advantage of vulnerable communities. Similarly, when we saw a rise in ads promoting deceptive experiences to users seeking addiction treatment services, we consulted with experts and restricted advertising to certified organizations. In all, we introduced 31 new ads policies in 2018 to address abuses in areas including third-party tech support, ticket resellers, cryptocurrency and local services such as garage door repairmen, bail bonds and addiction treatment facilities.

We took down 2.3 billion bad ads in 2018 for violations of both new and existing policies, including nearly 207,000 ads for ticket resellers, over 531,000 ads for bail bonds and approximately 58.8 million phishing ads. Overall, that’s more than six million bad ads, every day.

As we continue to protect users from bad ads, we’re also working to make it easier for advertisers to ensure their creatives are policy compliant. Similar to our AdSense Policy Center, next month we’ll launch a new Policy manager in Google Ads that will give tips on common policy mistakes to help well-meaning advertisers and make it easier to create and launch compliant ads.

Taking on bad actors with improved technology

Last year, we also made a concerted effort to go after the bad actors behind numerous bad ads, not just the ads themselves. Using improved machine learning technology, we were able to identify and terminate almost one million bad advertiser accounts, nearly double the amount we terminated in 2017. When we take action at the account level, it helps to address the root cause of bad ads and better protect our users.

In 2017, we launched new technology that allows for more granular removal of ads from websites when only a small number of pages on a site are violating our policies. In 2018, we launched 330 detection classifiers to help us better detect "badness" at the page level—that's nearly three times the number of classifiers we launched in 2017. So while we terminated nearly 734,000 publishers and app developers from our ad network, and removed ads completely from nearly 1.5 million apps, we were also able to take more granular action by taking ads off of nearly 28 million pages that violated our publisher policies. We use a combination of manual reviews and machine learning to catch these kinds of violations.

Addressing key challenges within the digital ads ecosystem

From reports of “fake news” sites, to questions about who is purchasing political ads, to massive ad fraud operations, there are fundamental concerns about the role of online advertising in society. Last year, we launched a new policy for election ads in the U.S. ahead of the 2018 midterm elections. We verified nearly 143,000 election ads in the U.S. and launched a new political ads transparency report that gives more information about who bought election ads. And in 2019, we’re launching similar tools ahead of elections in the EU and India.

We also continued to tackle the challenge of misinformation and low-quality sites, using several different policies to ensure our ads are supporting legitimate, high-quality publishers. In 2018, we removed ads from approximately 1.2 million pages, more than 22,000 apps, and nearly 15,000 sites across our ad network for violations of policies directed at misrepresentative, hateful or other low-quality content. More specifically, we removed ads from almost 74,000 pages for violating our “dangerous or derogatory” content policy, and took down approximately 190,000 ads for violating this policy. This policy includes a prohibition on hate speech and protects our users, advertisers and publishers from hateful content across platforms.  


How we took down one of the biggest ad fraud operations ever in 2018

In 2018, we worked closely with cybersecurity firm White Ops, the FBI, and others in the industry to take down one of the largest and most complex international ad fraud operations we’ve ever seen. Codenamed "3ve", the operation used sophisticated tactics aimed at exploiting data centers, computers infected with malware, spoofed fraudulent domains and fake websites. In aggregate, 3ve produced more than 10,000 counterfeit domains, and generated over 3 billion daily bid requests at its peak.

3ve tried to evade our enforcement, but we conducted a coordinated takedown of its infrastructure. We referred the case to the FBI, and late last year charges were announced against eight individuals for crimes including aggravated identity theft and money laundering. Learn more about 3ve and our work to take it down on our Security Blog, as well as through this white paper that we co-authored with White Ops.


We will continue to tackle these issues because as new trends and online experiences emerge, so do new scams and bad actors. In 2019, our work to protect users and enable a safe advertising ecosystem that works well for legitimate advertisers and publishers continues to be a top priority.



Dev Channel Update for Desktop

The dev channel has been updated to 74.0.3729.6 for Windows, Mac & Linux.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Krishna Govind
Google Chrome

Introducing Android Q Beta

Posted by Dave Burke, VP of Engineering

In 2019, mobile innovation is stronger than ever, with new technologies from 5G to edge-to-edge displays and even foldable screens. Android is right at the center of this innovation cycle, and thanks to the broad ecosystem of partners across billions of devices, Android is helping push the boundaries of hardware and software, bringing new experiences and capabilities to users.

As the mobile ecosystem evolves, Android is focused on helping users take advantage of the latest innovations, while making sure users' security and privacy are always a top priority. Building on top of efforts like Google Play Protect and runtime permissions, Android Q brings a number of additional privacy and security features for users, as well as enhancements for foldables, new APIs for connectivity, new media codecs and camera capabilities, NNAPI extensions, Vulkan 1.1 support, faster app startup, and more.

Today we're releasing Beta 1 of Android Q for early adopters and a preview SDK for developers. You can get started with Beta 1 today by enrolling any Pixel device (including the original Pixel and Pixel XL, which we've extended support for by popular demand!). Please let us know what you think! Read on for a taste of what's in Android Q, and we'll see you at Google I/O in May when we'll have even more to share.

Building on top of privacy protections in Android

Android was designed with security and privacy at the center. As Android has matured, we've added a wide range of features to protect users, like file-based encryption, OS controls requiring apps to request permission before accessing sensitive resources, locking down camera/mic background access, lockdown mode, encrypted backups, Google Play Protect (which scans over 50 billion apps a day to identify potentially harmful apps and remove them), and much more. In Android Q, we've made even more enhancements to protect our users. Many of these enhancements are part of our work in Project Strobe.

Giving users more control over location

With Android Q, the OS helps users have more control over when apps can get location. As in prior versions of the OS, apps can only get location once the app has asked you for permission, and you have granted it.

One thing that's particularly sensitive is apps' access to location while the app is not in use (in the background). Android Q enables users to give apps permission to see their location never, only when the app is in use (running), or all the time (when in the background).

For example, an app asking for a user's location for food delivery makes sense, and the user may want to grant it the ability to do that. But since the app may not need location outside of when it's currently in use, the user may not want to grant that broader access. Android Q now offers this greater level of control. Read the developer guide for details on how to adapt your app for this new control. Look for more user-centric improvements in upcoming Betas. At the same time, we aim to give developers as much notice and support as possible with these changes.

More privacy protections in Android Q

Beyond changes to location, we're making further updates to ensure transparency, give users control, and secure personal data.

In Android Q, the OS gives users even more control over apps' access to shared files. Users will be able to control apps' access to the Photos and Videos or the Audio collections via new runtime permissions. For Downloads, apps must use the system file picker, which allows the user to decide which Download files the app can access. For developers, there are changes to how your apps can use shared areas on external storage. Make sure to read the Scoped Storage changes for details.
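
As a minimal Kotlin sketch of the system file picker flow (the request code and MIME type are illustrative):

import android.app.Activity
import android.content.Intent

// Arbitrary request code, echoed back in onActivityResult().
private const val PICK_FILE_REQUEST = 42

fun pickSharedFile(activity: Activity) {
    val intent = Intent(Intent.ACTION_OPEN_DOCUMENT).apply {
        addCategory(Intent.CATEGORY_OPENABLE)
        type = "*/*" // any type; the user picks the specific file to grant access to
    }
    activity.startActivityForResult(intent, PICK_FILE_REQUEST)
}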

We've also seen that users (and developers!) get upset when an app unexpectedly jumps into the foreground and takes over focus. To reduce these interruptions, Android Q will prevent apps from launching an Activity while in the background. If your app is in the background and needs to get the user's attention quickly -- such as for incoming calls or alarms -- you can use a high-priority notification and provide a full-screen intent. See the documentation for more information.
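
Here's a hedged Kotlin sketch of that pattern (IncomingCallActivity, the "calls" channel, and the icon are illustrative; the notification channel must already exist on Android O and above):

import android.app.NotificationManager
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import androidx.core.app.NotificationCompat

fun notifyIncomingCall(context: Context) {
    val fullScreenPendingIntent = PendingIntent.getActivity(
        context, 0,
        Intent(context, IncomingCallActivity::class.java),
        PendingIntent.FLAG_UPDATE_CURRENT
    )
    val notification = NotificationCompat.Builder(context, "calls")
        .setSmallIcon(R.drawable.ic_call)
        .setContentTitle("Incoming call")
        .setCategory(NotificationCompat.CATEGORY_CALL)
        .setPriority(NotificationCompat.PRIORITY_HIGH)
        // Shown immediately as full-screen UI when the device is locked or idle.
        .setFullScreenIntent(fullScreenPendingIntent, true)
        .build()
    context.getSystemService(NotificationManager::class.java)
        .notify(1, notification)
}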

We're limiting access to non-resettable device identifiers, including device IMEI, serial number, and similar identifiers. Read the best practices to help you choose the right identifiers for your use case, and see the details here. We're also randomizing the device's MAC address when connected to different Wi-Fi networks by default -- a setting that was optional in Android 9 Pie.

We are bringing these changes to you early, so you can have as much time as possible to prepare. We've also worked hard to provide developers detailed information up front; we recommend reviewing the detailed docs on the privacy changes and getting started with testing right away.

New ways to engage users

In Android Q, we're enabling new ways to bring users into your apps and streamlining the experience as they transition from other apps.

Foldables and innovative new screens

Foldable devices have opened up some innovative experiences and use-cases. To help your apps take advantage of these and other large-screen devices, we've made a number of improvements in Android Q, including changes to onResume and onPause to support multi-resume and notify your app when it has focus. We've also changed how the resizeableActivity manifest attribute works, to help you manage how your app is displayed on foldable and large screens. To help you get started building and testing on these new devices, we've been hard at work updating the Android Emulator to support multiple-display type switching -- more details coming soon!

Sharing shortcuts

When a user wants to share content like a photo with someone in another app, the process should be fast. In Android Q we're making this quicker and easier with Sharing Shortcuts, which let users jump directly into another app to share content. Developers can publish share targets that launch a specific activity in their apps with content attached, and these are shown to users in the share UI. Because they're published in advance, the share UI can load instantly when launched.

The Sharing Shortcuts mechanism is similar to how App Shortcuts works, so we've expanded the ShortcutInfo API to make the integration of both features easier. This new API is also supported in the new ShareTarget AndroidX library. This allows apps to use the new functionality, while allowing pre-Q devices to work using Direct Share. You can find an early sample app with source code here.
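
To give a flavor of the API, here is a hedged Kotlin sketch of publishing a share target as a dynamic shortcut with the androidx.core ShortcutInfoCompat (the activity, shortcut ID, and category string are illustrative; the category must match a share target declared in your shortcuts.xml):

import android.content.Context
import android.content.Intent
import androidx.core.content.pm.ShortcutInfoCompat
import androidx.core.content.pm.ShortcutManagerCompat

fun publishShareShortcut(context: Context) {
    val shortcut = ShortcutInfoCompat.Builder(context, "contact_alice")
        .setShortLabel("Alice")
        .setLongLived(true)
        .setCategories(setOf("com.example.category.TEXT_SHARE_TARGET"))
        .setIntent(
            Intent(context, ShareReceiverActivity::class.java)
                .setAction(Intent.ACTION_DEFAULT)
        )
        .build()
    // Makes the target available in the system share UI ahead of time.
    ShortcutManagerCompat.addDynamicShortcuts(context, listOf(shortcut))
}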

Settings Panels

You can now also show key system settings directly in the context of your app, through a new Settings Panel API, which takes advantage of the Slices feature that we introduced in Android 9 Pie.

A settings panel is a floating UI that you invoke from your app to show system settings that users might need, such as internet connectivity, NFC, and audio volume. For example, a browser could display a panel with connectivity settings like Airplane Mode, Wi-Fi (including nearby networks), and Mobile Data. There's no need to leave the app; users can manage settings as needed from the panel. To display a settings panel, just fire an intent with one of the new Settings.Panel actions.
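
Displaying a panel is essentially a one-liner; for example, in Kotlin:

import android.app.Activity
import android.content.Intent
import android.provider.Settings

// Shows the floating internet connectivity panel on top of the calling app.
fun showConnectivityPanel(activity: Activity) {
    activity.startActivity(Intent(Settings.Panel.ACTION_INTERNET_CONNECTIVITY))
}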

Connectivity

In Android Q, we've extended what your apps can do with Android's connectivity stack and added new connectivity APIs.

Connectivity permissions, privacy, and security

Most of our APIs for scanning networks already require COARSE location permission, but in Android Q, for Bluetooth, Cellular and Wi-Fi, we're increasing the protection around those APIs by requiring the FINE location permission instead. If your app only needs to make peer-to-peer connections or suggest networks, check out the improved Wi-Fi APIs below -- they simplify connections and do not require location permission.

In addition to the randomized MAC addresses that Android Q provides when connected to different Wi-Fi networks, we're adding support for the new Wi-Fi standards WPA3 and OWE, to improve security for home and work networks as well as open/public networks.

Improved peer-to-peer and internet connectivity

In Android Q we refactored the Wi-Fi stack to improve privacy and performance, but also to improve common use-cases like managing IoT devices and suggesting internet connections -- without requiring the location permission.

The network connection APIs make it easier to manage IoT devices over local Wi-Fi, for peer-to-peer functions like configuring, downloading, or printing. Apps initiate connection requests indirectly by specifying preferred SSIDs & BSSIDs as WifiNetworkSpecifiers. The platform handles the Wi-Fi scanning itself and displays matching networks in a Wi-Fi Picker. When the user chooses, the platform sets up the connection automatically.
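
A minimal Kotlin sketch of such a peer-to-peer request (the SSID and callback body are illustrative):

import android.net.ConnectivityManager
import android.net.Network
import android.net.NetworkCapabilities
import android.net.NetworkRequest
import android.net.wifi.WifiNetworkSpecifier

fun connectToIotDevice(cm: ConnectivityManager) {
    val specifier = WifiNetworkSpecifier.Builder()
        .setSsid("MyIoTDevice")
        .build()
    val request = NetworkRequest.Builder()
        .addTransportType(NetworkCapabilities.TRANSPORT_WIFI)
        .setNetworkSpecifier(specifier)
        .build()
    // The platform shows a picker; once the user accepts, onAvailable fires.
    cm.requestNetwork(request, object : ConnectivityManager.NetworkCallback() {
        override fun onAvailable(network: Network) {
            // Bind this app's sockets to the device's local-only network as needed.
        }
    })
}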

The network suggestion APIs let apps surface preferred Wi-Fi networks to the user for internet connectivity. Apps initiate connections indirectly by providing a ranked list of networks and credentials as WifiNetworkSuggestions. The platform will seamlessly connect based on past performance when in range of those networks.
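
And a corresponding Kotlin sketch of a network suggestion (the SSID and passphrase are illustrative):

import android.net.wifi.WifiManager
import android.net.wifi.WifiNetworkSuggestion

fun suggestHomeNetwork(wifiManager: WifiManager) {
    val suggestion = WifiNetworkSuggestion.Builder()
        .setSsid("HomeNetwork")
        .setWpa2Passphrase("correct-horse-battery-staple")
        .build()
    // The platform connects in the background when the network is in range.
    val status = wifiManager.addNetworkSuggestions(listOf(suggestion))
    check(status == WifiManager.STATUS_NETWORK_SUGGESTIONS_SUCCESS)
}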

Wi-Fi performance mode

You can now request adaptive Wi-Fi in Android Q by enabling high performance and low latency modes. These will be of great benefit where low latency is important to the user experience, such as real-time gaming, active voice calls, and similar use-cases.

To use the new performance modes, call WifiManager.createWifiLock() with WIFI_MODE_FULL_LOW_LATENCY or WIFI_MODE_FULL_HIGH_PERF. In these modes, the platform works with the device firmware to meet the requirement with lowest power consumption.
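
For example, a low-latency lock might be acquired like this in Kotlin (the tag is illustrative; release the lock as soon as the latency-sensitive work ends):

import android.net.wifi.WifiManager

fun acquireLowLatencyLock(wifiManager: WifiManager): WifiManager.WifiLock =
    wifiManager.createWifiLock(
        WifiManager.WIFI_MODE_FULL_LOW_LATENCY, "realtime-session"
    ).apply { acquire() } // pair with release() when done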

Camera, media, graphics

Dynamic depth format for photos

Many cameras on mobile devices can simulate narrow depth of field by blurring the foreground or background relative to the subject. They capture depth metadata for various points in the image and apply a static blur to the image, after which they discard the depth metadata.

Starting in Android Q, apps can request a Dynamic Depth image, which consists of a JPEG, XMP metadata related to depth elements, and a depth and confidence map embedded in the same file, on devices that advertise support.

Requesting a JPEG + Dynamic Depth image makes it possible for you to offer specialized blurs and bokeh options in your app. You can even use the data to create 3D images or support AR photography use-cases in the future. We're making Dynamic Depth an open format for the ecosystem, and we're working with our device-maker partners to make it available across devices running Android Q and later.


New audio and video codecs

Android Q introduces support for the open source video codec AV1. This allows media providers to stream high quality video content to Android devices using less bandwidth. In addition, Android Q supports audio encoding using Opus, a codec optimized for speech and music streaming, and HDR10+ for high dynamic range video on devices that support it.

The MediaCodecInfo API introduces an easier way to determine the video rendering capabilities of an Android device. For any given codec, you can obtain a list of supported sizes and frame rates using MediaCodecInfo.VideoCapabilities.getSupportedPerformancePoints(). This allows you to pick the best quality video content to render on any given device.
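
As a Kotlin sketch, enumerating the performance points reported by a device's decoders might look like this (available on Android Q and above):

import android.media.MediaCodecList

fun logSupportedPerformancePoints() {
    for (info in MediaCodecList(MediaCodecList.REGULAR_CODECS).codecInfos) {
        if (info.isEncoder) continue
        for (type in info.supportedTypes) {
            // videoCapabilities is null for non-video codecs.
            val video = info.getCapabilitiesForType(type).videoCapabilities ?: continue
            video.supportedPerformancePoints?.forEach { point ->
                println("$type: $point") // e.g. size and frame-rate combinations
            }
        }
    }
}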

Native MIDI API

For apps that perform their audio processing in C++, Android Q introduces a native MIDI API to communicate with MIDI devices through the NDK. This API allows MIDI data to be retrieved inside an audio callback using a non-blocking read, enabling low latency processing of MIDI messages. Give it a try with the sample app and source code here.

ANGLE on Vulkan

To enable more consistency for game and graphics developers, we are working towards a standard, updateable OpenGL driver for all devices built on Vulkan. In Android Q we're adding experimental support for ANGLE on top of Vulkan on Android devices. ANGLE is a graphics abstraction layer designed for high-performance OpenGL compatibility across implementations. Through ANGLE, the many apps and games using OpenGL ES can take advantage of the performance and stability of Vulkan and benefit from a consistent, vendor-independent implementation of ES on Android devices. In Android Q, we're planning to support OpenGL ES 2.0, with ES 3.0 next on our roadmap.

We'll expand the implementation with more OpenGL functionality, bug fixes, and performance optimizations. See the docs for details on the current ANGLE support in Android, how to use it, and our plans moving forward. You can start testing with our initial support by opting-in through developer options in Settings. Give it a try today!

Vulkan everywhere

We're continuing to expand the impact of Vulkan on Android, our implementation of the low-overhead, cross-platform API for high-performance 3D graphics. Our goal is to make Vulkan on Android a broadly supported and consistent developer API for graphics. We're working together with our device manufacturer partners to make Vulkan 1.1 a requirement on all 64-bit devices running Android Q and higher, and a recommendation for all 32-bit devices. Going forward, this will help provide a uniform high-performance graphics API for apps and games to use.

Neural Networks API 1.2

Since introducing the Neural Networks API (NNAPI) in 2017, we've continued to expand the number of operations supported and improve existing functionality. In Android Q, we've added 60 new ops, including ARGMAX, ARGMIN, and quantized LSTM, alongside a range of performance optimizations. This lays the foundation for accelerating a much greater range of models -- such as those for object detection and image segmentation. We are working with hardware vendors and popular machine learning frameworks such as TensorFlow to optimize and roll out support for NNAPI 1.2.

Strengthening Android's Foundations

ART performance

Android Q introduces several new improvements to the ART runtime which help apps start faster and consume less memory, without requiring any work from developers.

Since Android Nougat, ART has offered Profile Guided Optimization (PGO), which speeds app startup over time by identifying and precompiling frequently executed parts of your code. To help with initial app startup, Google Play is now delivering cloud-based profiles along with APKs. These are anonymized, aggregate ART profiles that let ART pre-compile parts of your app even before it's run, giving a significant jump-start to the overall optimization process. Cloud-based profiles benefit all apps and they're already available to devices running Android P and higher.

We're also continuing to make improvements in ART itself. For example, in Android Q we've optimized the Zygote process by starting your app's process earlier and moving it to a security container, so it's ready to launch immediately. We're storing more information in the app's heap image, such as classes, and using threading to load the image faster. We're also adding Generational Garbage Collection to ART's Concurrent Copying (CC) Garbage Collector. Generational CC is more efficient as it collects young-generation objects separately, incurring much lower cost as compared to full-heap GC, while still reclaiming a good amount of space. This makes garbage collection overall more efficient in terms of time and CPU, reducing jank and helping apps run better on lower-end devices.

Security for apps

BiometricPrompt is our unified authentication framework to support biometrics at a system level. In Android Q we're extending support for passive authentication methods such as face, and adding implicit and explicit authentication flows. In the explicit flow, the user must explicitly confirm the transaction in the TEE during the authentication. The implicit flow is designed for a lighter-weight alternative for transactions with passive authentication. We've also improved the fallback for device credentials when needed.
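
Here is a hedged Kotlin sketch of an explicit-confirmation flow using the AndroidX BiometricPrompt compatibility wrapper (the titles and callback body are illustrative):

import androidx.biometric.BiometricPrompt
import androidx.core.content.ContextCompat
import androidx.fragment.app.FragmentActivity

fun confirmWithBiometrics(activity: FragmentActivity) {
    val prompt = BiometricPrompt(
        activity,
        ContextCompat.getMainExecutor(activity),
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(
                result: BiometricPrompt.AuthenticationResult
            ) {
                // Proceed with the sensitive transaction.
            }
        }
    )
    val info = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Confirm payment")
        .setNegativeButtonText("Cancel")
        // Explicit flow: require a confirmation tap even after passive
        // authentication such as face.
        .setConfirmationRequired(true)
        .build()
    prompt.authenticate(info)
}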

Android Q adds support for TLS 1.3, a major revision to the TLS standard that includes performance benefits and enhanced security. Our benchmarks indicate that secure connections can be established as much as 40% faster with TLS 1.3 compared to TLS 1.2. TLS 1.3 is enabled by default for all TLS connections. See the docs for details.
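
A quick way to see this in action is to check the protocol negotiated by a default socket; a small Kotlin sketch (the host is illustrative):

import javax.net.ssl.SSLSocket
import javax.net.ssl.SSLSocketFactory

// Opens a default TLS connection and reports the negotiated protocol version;
// on Android Q this is typically "TLSv1.3".
fun negotiatedTlsProtocol(host: String = "www.google.com"): String {
    val socket = SSLSocketFactory.getDefault().createSocket(host, 443) as SSLSocket
    return try {
        socket.startHandshake()
        socket.session.protocol
    } finally {
        socket.close()
    }
}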

Compatibility through public APIs

Another thing we all care about is ensuring that apps run smoothly as the OS changes and evolves. Apps using non-SDK APIs risk crashes for users and emergency rollouts for developers. In Android Q we're continuing our long-term effort begun in Android P to move apps toward only using public APIs. We know that moving your app away from non-SDK APIs will take time, so we're giving you advance notice.

In Android Q we're restricting access to more non-SDK interfaces and asking you to use the public equivalents instead. To help you make the transition and prevent your apps from breaking, we're enabling the restrictions only when your app is targeting Android Q. We'll continue adding public alternative APIs based on your requests; in cases where there is no public API that meets your use case, please let us know.

It's important to test your apps for uses of non-SDK interfaces. We recommend using the StrictMode method detectNonSdkApiUsage() to warn when your app accesses non-SDK APIs via reflection or JNI. Even if the APIs are exempted (grey-listed) at this time, it's best to plan for the future and eliminate their use to reduce compatibility issues. For more details on the restrictions in Android Q, see the developer guide.
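
For example, enabling this check in a debug build might look like the following Kotlin sketch, typically called from Application.onCreate():

import android.os.Build
import android.os.StrictMode

fun enableNonSdkApiDetection() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
        StrictMode.setVmPolicy(
            StrictMode.VmPolicy.Builder()
                // Logs each access to a non-SDK API via reflection or JNI.
                .detectNonSdkApiUsage()
                .penaltyLog()
                .build()
        )
    }
}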

Modern Android

We're expanding our efforts to have all apps take full advantage of the security and performance features in the latest version of Android. Later this year, Google Play will require you to set your app's targetSdkVersion to 28 (Android 9 Pie) in new apps and updates. In line with these changes, Android Q will warn users with a dialog when they first run an app that targets a platform earlier than API level 23 (Android Marshmallow). Here's a checklist of resources to help you migrate your app.

We're also moving the ecosystem toward readiness for 64-bit devices. Later this year, Google Play will require 64-bit support in all apps. If your app uses native SDKs or libraries, keep in mind that you'll need to provide 64-bit compliant versions of those SDKs or libraries. See the developer guide for details on how to get ready.

Get started with Android Q Beta

With important privacy features that are likely to affect your apps, we recommend getting started with testing right away. In particular, you'll want to enable and test with Android Q storage changes, new location permission states, restrictions on background app launch, and restrictions on device identifiers. See the privacy documentation for details.

To get started, just install your current app from Google Play onto a device or Android Virtual Device running Android Q Beta and work through the user flows. The app should run and look great, and handle the Android Q behavior changes for all apps properly. If you find issues, we recommend fixing them in the current app, without changing your targeting level. Take a look at the migration guide for steps and a recommended timeline.

Next, update your app's targetSdkVersion to 'Q' as soon as possible. This lets you test your app with all of the privacy and security features in Android Q, as well as any other behavior changes for apps targeting Q.

Explore the new features and APIs

When you're ready, dive into Android Q and learn about the new features and APIs you can use in your apps. Take a look at the API diff report, the Android Q Beta API reference, and developer guides as a starting point. Also, on the Android Q Beta developer site, you'll find release notes and support resources for reporting issues.

To build with Android Q, download the Android Q Beta SDK and tools into Android Studio 3.3 or higher, and follow these instructions to configure your environment. If you want the latest fixes for Android Q related changes, we recommend you use Android Studio 3.5 or higher.

How do I get Android Q Beta?

It's easy - you can enroll here to get Android Q Beta updates over-the-air, on any Pixel device (and this year we're supporting all three generations of Pixel -- Pixel 3, Pixel 2, and even the original Pixel!). Downloadable system images for those devices are also available. If you don't have a Pixel device, you can use the Android Emulator, and download the latest emulator system images via the SDK Manager in Android Studio.

We plan to update the preview system images and SDK regularly throughout the preview. We'll have more features to share as the Beta program moves forward.

As always, your feedback is critical, so please let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here. We have separate hotlists for filing platform issues, app compatibility issues, and third-party SDK issues.

Supporting people with disabilities: Be My Eyes and phone support now available

15 percent of the world’s population has some form of disability—that’s over 1 billion people. Last January, we introduced a dedicated Disability Support team available to help answer questions about assistive features and functionalities within Google products. Access to a Disability Support team—and specifically, video and phone support—was a popular request we heard from the community.

Now, people with questions on assistive technology and/or accessibility features within Google’s products can utilize the Specialized Help section on the Be My Eyes app or connect directly through phone support with a Google Disability Support specialist, Monday through Friday 8:00 a.m. until 5:00 p.m. PT, in English only.

Be My Eyes is a free app available for both iOS and Android that connects people who are blind and low-vision to nearly two million sighted volunteers in the Be My Eyes community. Through a live connection, a volunteer can assist someone with a task that requires visual assistance, such as checking expiry dates, distinguishing colors, reading instructions or navigating new surroundings. This new partnership comes from a common goal between Be My Eyes and Google to help people with disabilities live more independent lives.


Image showing two phones, one in front of the other, displaying the Google profile in the Specialized Help section of the Be My Eyes app.

Google’s Disability Support team is composed of strong advocates for inclusion who are eager to work with Googlers to continuously improve and shape Google’s products with user feedback. The team has been working on implementing Be My Eyes and phone support to the community and looks forward to rolling out this support starting today.


The Disability Support team at work providing phone support

Visit the Google Accessibility Help Center to learn more about Google Accessibility and head to g.co/disabilitysupport for steps to use Be My Eyes and more ways to connect with a Disability Support specialist.

Dev Channel Update for Chrome OS

The Dev channel has been updated to 74.0.3729.0 (Platform version: 11895.4.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. A list of changes can be found here.


If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Daniel Gagnon
Google Chrome