
Shut the HAL Up

Posted by Jeff Vander Stoep, Senior Software Engineer, Android Security

Updates are essential for security, but they can be difficult and expensive for device manufacturers. Project Treble is making updates easier by separating the underlying vendor implementation from the core Android framework. This modularization allows platform and vendor-provided components to be updated independently of each other. While easier and faster updates are awesome, Treble's increased modularity is also designed to improve security.

Isolating HALs

A Hardware Abstraction Layer (HAL) provides an interface between device-agnostic code and device-specific hardware implementations. HALs are commonly packaged as shared libraries loaded directly into the process that requires hardware interaction. Security boundaries are enforced at the process level. Therefore, loading the HAL into a process means that the HAL is running in the same security context as the process it's loaded into.

The traditional method of running HALs in-process means that the process needs all the permissions required by each in-process HAL, including direct access to kernel drivers. Likewise, all HALs in a process have access to the same set of permissions as the rest of the process, including permissions required by other in-process HALs. This results in over-privileged processes and HALs that have access to permissions and hardware that they shouldn't.

Figure 1. Traditional method of multiple HALs in one process.

Moving HALs into their own processes better adheres to the principle of least privilege. This provides two distinct advantages:

  1. Each HAL runs in its own sandbox and is permitted access only to the hardware driver it controls, and the permissions granted to the process are limited to those it needs to do its job.
  2. Similarly, the process that used to load the HAL loses access to the hardware drivers and the other permissions and capabilities that the HALs need.

Figure 2. Each HAL runs in its own process.

Moving HALs into their own processes is great for security, but it comes at the cost of increased IPC overhead between the client process and the HAL. Improvements to the binder driver made IPC between HALs and clients practical. Introducing scatter-gather into binder improves the performance of each transaction by removing the need for the serialization/deserialization steps and reducing the number of copy operations performed on data from three down to one. Android O also introduces binder domains to provide separate communication streams for vendor and platform components. Apps and the Android frameworks continue to use /dev/binder, but vendor-provided components now use /dev/vndbinder. Communication between the platform and vendor components must use /dev/hwbinder. Other means of IPC between platform and vendor are disallowed.

Case study: System Server

Many of the services offered to apps by the core Android OS are provided by the system server. As Android has grown, so have the system server's responsibilities and permissions, making it an attractive target for an attacker. As part of Project Treble, approximately 20 HALs were moved out of system server, including the HALs for sensors, GPS, fingerprint, Wi-Fi, and more. Previously, a compromise in any of those HALs would gain privileged system permissions, but in Android O, permissions are restricted to the subset needed by the specific HAL.

Case study: media frameworks

Efforts to harden the media stack in Android Nougat continued in Android O. In Nougat, mediaserver was split into multiple components to better adhere to the principle of least privilege, with audio hardware access restricted to audioserver, camera hardware access restricted to cameraserver, and so on. In Android O, most direct hardware access has been entirely removed from the media frameworks. For example, the HALs for audio, camera, and DRM have been moved out of audioserver, cameraserver, and drmserver, respectively.

Reducing and isolating the attack surface of the kernel

The Linux kernel is the primary enforcer of the security model on Android. Attempts to escape sandboxing mechanisms often involve attacking the kernel. An analysis of kernel vulnerabilities on Android showed that they overwhelmingly occurred in and were reached through hardware drivers.

De-privileging system server and the media frameworks is important because they interact directly with installed apps. Removing direct access to hardware drivers makes bugs difficult to reach and adds another layer of defense to Android's security model.

Identifying Intrusive Mobile Apps using Peer Group Analysis

Posted by Martin Pelikan, Giles Hogben, and Ulfar Erlingsson of Google's Security and Privacy team

Mobile apps entertain and assist us, make it easy to communicate with friends and family, and provide tools ranging from maps to electronic wallets. But these apps can also seek more device information than they need to do their job, such as personal data and sensor data from components like the camera and GPS.

To protect our users and help developers navigate this complex environment, Google analyzes privacy and security signals for each app in Google Play. We then compare that app to other apps with similar features, known as functional peers. Creating peer groups allows us to calibrate our estimates of users' expectations and set adequate boundaries for behaviors that may be considered unsafe or intrusive. This process helps detect apps that collect or send sensitive data without a clear need, and makes it easier for users to find apps that provide the right functionality and respect their privacy. For example, most coloring book apps don't need to know a user's precise location to function, and this can be established by analyzing other coloring book apps. By contrast, mapping and navigation apps need to know a user's location, and often require GPS sensor access.

One way to create app peer groups is to create a fixed set of categories and then assign each app into one or more categories, such as tools, productivity, and games. However, fixed categories are too coarse and inflexible to capture and track the many distinctions in the rapidly changing set of mobile apps. Manual curation and maintenance of such categories is also a tedious and error-prone task.

To address this, Google developed a machine-learning algorithm for clustering mobile apps with similar capabilities. Our approach uses deep learning of vector embeddings to identify peer groups of apps with similar functionality, using app metadata, such as text descriptions, and user metrics, such as installs. Peer groups are then used to identify anomalous, potentially harmful signals related to privacy and security from each app's requested permissions and its observed behaviors. The correlation between different peer groups and their security signals helps different teams at Google decide which apps to promote and determine which apps deserve a more careful look by our security and privacy experts. We also use the results to help app developers improve the privacy and security of their apps.

Apps are split into groups of similar functionality, and in each cluster of similar apps the established baseline is used to find anomalous privacy and security signals.
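
To make the idea concrete, here is a deliberately simplified sketch in Java of the kind of check a peer-group baseline enables. It is a toy frequency comparison, not Google's actual deep-learning pipeline, and all names in it are hypothetical:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy illustration: flag permissions that are unusual within an app's peer group.
public class PeerGroupAnomaly {

    // Returns the permissions requested by an app that fewer than rarityThreshold
    // (e.g. 0.05 = 5%) of the apps in its peer group request.
    public static List<String> unusualPermissions(Set<String> appPermissions,
                                                  List<Set<String>> peerPermissions,
                                                  double rarityThreshold) {
        // Count how many peers request each permission.
        Map<String, Integer> counts = new HashMap<>();
        for (Set<String> peer : peerPermissions) {
            for (String permission : peer) {
                counts.merge(permission, 1, Integer::sum);
            }
        }

        // Flag permissions that are rare among functional peers.
        List<String> unusual = new ArrayList<>();
        for (String permission : appPermissions) {
            double fraction =
                    counts.getOrDefault(permission, 0) / (double) peerPermissions.size();
            if (fraction < rarityThreshold) {
                unusual.add(permission);
            }
        }
        return unusual;
    }
}

For example, a coloring book app requesting precise location would stand out if almost none of its functional peers request that permission.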

These techniques build upon earlier ideas, such as using peer groups to analyze privacy-related signals, deep learning for language models to make those peer groups better, and automated data analysis to draw conclusions.

Many teams across Google collaborated to create this algorithm and the surrounding process. Thanks to several essential team members, including Andrew Ahn, Vikas Arora, Hongji Bao, Jun Hong, Nwokedi Idika, Iulia Ion, Suman Jana, Daehwan Kim, Kenny Lim, Jiahui Liu, Sai Teja Peddinti, Sebastian Porst, Gowdy Rajappan, Aaron Rothman, Monir Sharif, Sooel Son, Michael Vrable, and Qiang Yan.

For more information on Google's efforts to detect and fight potentially harmful apps (PHAs) on Android, see Google Android Security Team's Classifications for Potentially Harmful Applications.

References

S. Jana, Ú. Erlingsson, I. Ion (2015). Apples and Oranges: Detecting Least-Privilege Violators with Peer Group Analysis. arXiv:1510.07308 [cs.CR].

T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, J. Dean (2013). Distributed Representations of Words and Phrases and their Compositionality. Advances in Neural Information Processing Systems 26 (NIPS 2013).

Ú. Erlingsson (2016). Data-driven software security: Models and methods. Proceedings of the 29th IEEE Computer Security Foundations Symposium (CSF'16), Lisboa, Portugal.

Calling all indie developers in the US & Canada: sign up for the Google Play Indie Games Festival in San Francisco

Posted by Jamil Moledina, Games Strategic Lead, Google Play

Calling all indie developers with fun and creative mobile games: we want to see your latest work! We'll be back with the second Google Play Indie Games Festival taking place in San Francisco on September 23rd.

If you're an indie developer based in the US or Canada and want to submit your game, visit the submission form and enter now through August 6th at 11:59PM PST.

If you're chosen as one of the 20 finalists, you'll have the chance to demo your game at the event and compete for prizes and bragging rights, and maybe go home as one of the three festival winners!


Android vitals: Increase engagement and installs through improved app performance

Posted by Fergus Hurley, Product Manager, Google Play

Poor app performance is something that many users have experienced. Think about the last time you experienced an app crashing, failing to respond, or rendering slowly. Consider your reaction when checking the battery usage on your own device and seeing an app using excessive battery. When an app performs badly, users notice. In fact, in an internal analysis of app reviews on Google Play, we noticed that half of the 1-star reviews mentioned app stability.

Conversely, people consistently reward the best performing apps with better ratings and reviews. This leads to better rankings on Google Play, which helps increase installs. Not only that, but users stay more engaged, and are willing to spend more time and money.

At Google I/O 2017, we announced the new Android vitals dashboard in the Google Play Console. Android vitals is designed to help you understand and analyze bad app behaviors, so you can improve your app's performance and reap the benefits of better performance.

Android vitals in the Google Play Console

Android vitals helps identify opportunities to improve your app's performance. The dashboards are useful for engineers and business owners alike, offering quick reference performance metrics to monitor your app so you can analyze the data and dedicate the right resources to make improvements.

You'll see the following data collected from Android devices whose users have opted in to automatically share usage and diagnostics data:

  • Stability: ANR rate & crash rate
  • Render time: slow rendering (16ms) and frozen UI frames (700ms)
  • Battery usage: stuck wake locks and excessive wakeups (see the sketch below)
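
As one concrete example on the battery front, a wake lock that is held for a long time without being released is the kind of issue the stuck wake lock metric surfaces. The sketch below shows a defensive pattern: acquire with a timeout and release in a finally block. The class name, tag, and timeout are illustrative, not taken from this post.

import android.content.Context;
import android.os.PowerManager;

public class WakeLockHelper {

    // Illustrative bound; pick a timeout that matches the work being protected.
    private static final long WAKE_LOCK_TIMEOUT_MS = 10 * 60 * 1000; // 10 minutes

    // Runs a task while holding a partial wake lock that cannot be held forever,
    // even if release() is somehow skipped.
    public static void runWithWakeLock(Context context, Runnable task) {
        PowerManager powerManager =
                (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        PowerManager.WakeLock wakeLock =
                powerManager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "myapp:background-work");
        wakeLock.acquire(WAKE_LOCK_TIMEOUT_MS); // the timeout guards against a stuck wake lock
        try {
            task.run();
        } finally {
            if (wakeLock.isHeld()) {
                wakeLock.release();
            }
        }
    }
}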

See how Busuu increased their rating from 4.1☆ to 4.5☆ by focusing on app performance

Busuu is one of the world's largest language learning apps. Hear from Antoine Sakho, Head of Product, about how Busuu increased user ratings.

Learn more about engineering for high performance with tools from Android and Google Play

Read our best practice article on Android vitals to understand the data shown in the dashboards, and how you can improve your app's performance and stability. Watch the I/O session to learn about more tools from Android and Google Play that you can use to identify and fix bad behaviors.

Learn more about other Play Console features, and stay up to date with news and tips to succeed on Google Play, with the Playbook app. Join the beta and install it today.


Android Things Hackster Community

Posted by Dave Smith, Developer Advocate for IoT

Android Things makes building connected embedded devices easy by providing the same Android development tools, best-in-class Android framework, and Google APIs that make developers successful on mobile. Since the initial preview launch back in December, the community has turned some amazing ideas into exciting prototypes using the platform.

To empower these makers and developers using Android Things to share and learn from each other, we have partnered with Hackster.io to create a community where aspiring IoT developers can go to showcase their projects and get inspired by the work of others. Hackster.io is a community of 200,000 engineers and developers dedicated to building internet-connected hardware projects. They also seek to educate and challenge members through live workshops and design contests.

We are eager to see the projects that you come up with. More importantly, we're excited to see how your work can inspire other developers to create something great with Android Things. Visit our Hackster.io community to see the amazing projects others have already built and join the community today!

Android Things Webinar

We will be hosting a webinar in cooperation with Hackster.io on July 7th, 2017 at 10AM PST, titled Bootstrapping IoT Products with Android Things. In it, you will learn how we have designed Android Things to address many of the pain points experienced by developers attempting to build IoT products. You will also have the opportunity to send in any questions you have about the platform and ecosystem. Register today to join us for this exciting event!

Android Things Console developer preview

Posted by Wayne Piekarski, Developer Advocate for IoT

Today we are launching a preview of the Android Things Console. This console allows developers to manage the software running on their fleet of Android Things IoT devices, including creating factory images, as well as updating the operating system and developer-provided APKs. Devices need to run a system image downloaded via the Android Things Console in order to receive future updates, such as the upcoming Developer Preview 5. Google provides all of the infrastructure for over-the-air (OTA) updates, so developers can focus on their specific application and not have to build their own implementation – getting their IoT devices to enter the market faster and more securely than before.

Let's take a tour of the console, and see the features it offers.

Product Creation and Product Settings

The developer first defines a product, which includes selecting a name and the type of System-on-Module (SoM) that the device is based on. Many developers want to use Google Play Services when building IoT devices, and this is configured here as an optional feature. The size of the OEM partition is also configured, and must be large enough to accommodate any future APK growth.

Factory Images

A device needs an initial base firmware to receive future updates for the correct product from your console. For starters, you can simply use "Create Build Configuration" to build a default factory image with an empty bundle that is configured for your product. This factory image can then be downloaded and flashed to your device, and you can start developing on it by sideloading an APK.

Later on, once you have prepared an application that you would like to deploy to all the devices in your product, you can upload a bundle to the console. This bundle is a ZIP file that contains a main APK file, user space drivers as a service in an APK, and any additional APKs launched by the main APK. A bootanimation.zip file is also supported, which will be displayed during boot up. The uploaded bundle ZIP file is then used to produce a complete system image that can be deployed to devices. More information about the bundle ZIP file contents is available in the documentation.

OTA Updates

This tab allows the developer to select which system image should be pushed to the fleet of product devices. The developer selects one, and then "Push to Devices" starts the process. The update will then be securely pushed to all of the devices, installed to one of the A/B partitions, and made active when the device is rebooted. If any failures are detected, the device automatically rolls back to the previous known working version, so future updates are still possible. Developers will be able to test new releases of Android Things in advance and decide whether devices should be updated automatically.

Feedback

The Android Things Console is currently a preview, and we are working on many more features and customizations. We encourage all Android Things developers to check out the Android Things Console and provide feedback. You can do this by filing bug reports and feature requests, and asking any questions on Stack Overflow. To learn more about the Android Things Console, read the detailed documentation. We also encourage everyone to join Google's IoT Developers Community on Google+, a great resource to get updates and discuss ideas.

What’s new in WebView security

Posted by Xiaowen Xin and Renu Chaudhary, Android Security Team

The processing of external and untrusted content is often one of the most important functions of an app. A newsreader shows the top news articles and a shopping app displays the catalog of items for sale. This comes with associated risks, as processing untrusted content is also one of the main ways that an attacker can compromise your app, i.e. by passing you malformed content.

Many apps handle untrusted content using WebView, and we've made many improvements in Android over the years to protect it and your app against compromise. With Android Lollipop, we started delivering WebView as an independent APK, updated every six weeks from the Play store, so that we can get important fixes to users quickly. With the newest WebView, we've added a couple more important security enhancements.

Isolating the renderer process in Android O

Starting with Android O, WebView will have the renderer running in an isolated process separate from the host app, taking advantage of the isolation between processes provided by Android that has been available for other applications.

Similar to Chrome, WebView now provides two levels of isolation:

  1. The rendering engine has been split into a separate process. This insulates the host app from bugs or crashes in the renderer process and makes it harder for a malicious website that can exploit the renderer to then exploit the host app.
  2. To further contain it, the renderer process is run within an isolated process sandbox that restricts it to a limited set of resources. For example, the rendering engine cannot write to disk or talk to the network on its own.
    It is also bound to the same seccomp filter (blogpost on seccomp is coming soon) as used by Chrome on Android. The seccomp filter reduces the number of system calls the renderer process can access and also restricts the allowed arguments to the system calls.

Incorporating Safe Browsing

The newest version of WebView incorporates Google's Safe Browsing protections to detect and warn users about potentially dangerous sites. When correctly configured, WebView checks URLs against Safe Browsing's malware and phishing database and displays a warning message before users visit a dangerous site. On Chrome, this helpful information is displayed more than 250 million times a month, and now it's available in WebView on Android.

Enabling Safe Browsing

To enable Safe Browsing for all WebViews in your app, add the following meta-data tag inside the <application> element of your manifest:

<manifest>
     <application>
          <meta-data android:name="android.webkit.WebView.EnableSafeBrowsing"
                     android:value="true" />
          . . .
     </application>
</manifest>

Because WebView is distributed as a separate APK, Safe Browsing for WebView is available today for devices running Android 5.0 and above. With just one added line in your manifest, you can update your app and improve security for most of your users immediately.
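
No further code changes are required once the manifest entry is in place, but for context, here is a minimal sketch of the kind of WebView usage it protects (the Activity class name and URL are placeholders):

import android.app.Activity;
import android.os.Bundle;
import android.webkit.WebView;
import android.webkit.WebViewClient;

public class BrowserActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        WebView webView = new WebView(this);
        // Keep navigation inside the WebView rather than handing URLs to the browser.
        webView.setWebViewClient(new WebViewClient());
        setContentView(webView);

        // With the EnableSafeBrowsing meta-data above, navigations are checked against
        // Safe Browsing and a warning is shown before a flagged site is visited.
        webView.loadUrl("https://example.com");
    }
}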

Ending support for Android Market on Android 2.1 and lower

Posted by Maximilian Ruppaner, Software Engineer on Google Play

On June 30, 2017, Google will be ending support for the Android Market app on Android 2.1 Eclair and older devices. When this change happens, users on these devices will no longer be able to access the Android Market or install other apps from it. The change will happen without a notification on the device, due to technical restrictions in the original Android Market app.

It has been 7 years since Android 2.1 Eclair launched. Most app developers are no longer supporting these Android versions in their apps, given that these devices now account for only a small number of installs.

We will still be supporting later versions of Android Market for as long as feasible. Google Play, the replacement for Android Market, is available on Android 2.2 and above.

Semantic Time support now available on the Awareness APIs

Posted by Ritesh Nayak M, Product Manager

Last year at I/O we launched the Awareness API, a simple yet powerful API that lets developers use signals such as Location, Weather, Time, and User Activity to build contextually relevant app experiences.

Available via Google Play services, the Awareness API offers two ways to take advantage of context signals within your app. The Snapshot API lets your app request information about the user's current context, while the Fence API lets your app react when the user's context changes to match a certain set of conditions. For example, "tell me whenever the user is walking and their headphones are plugged in".
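
As a rough illustration (not code from this post), a fence for that walking-with-headphones example could be composed from the activity and headphone fences along these lines; the wrapper class name is hypothetical:

import com.google.android.gms.awareness.fence.AwarenessFence;
import com.google.android.gms.awareness.fence.DetectedActivityFence;
import com.google.android.gms.awareness.fence.HeadphoneFence;
import com.google.android.gms.awareness.state.HeadphoneState;

public class WalkingWithHeadphonesFence {

    // A fence that is TRUE while the user is walking and headphones are plugged in.
    public static AwarenessFence build() {
        return AwarenessFence.and(
                DetectedActivityFence.during(DetectedActivityFence.WALKING),
                HeadphoneFence.during(HeadphoneState.PLUGGED_IN));
    }
}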

Until now, you could specify a time fence on the Awareness APIs but were restricted to using absolute/canonical representations of time. Based on developer feedback, we realized that this did not support the higher-level abstractions people use when they think and talk about time. "This weekend", "on the next holiday", "after sunset" are all very common and colloquial ways of expressing time. That's why we're adding semantic time support to these APIs starting today.

For example, if you were building a fitness app that wants to prompt users every morning to start their routine, or a reading app that wants to turn on night mode after dusk, you would previously have had to query a third-party API for sunrise/sunset information at the user's location and then write an Awareness fence with those canonical time values. With our latest update, you can use our TIME_INSTANT_SUNRISE and TIME_INSTANT_SUNSET constants and let the platform manage all of the complexity for you.

Let's look at an example. Suppose you're building a fitness app that prompts users on Tuesdays and Thursdays around sunrise to begin their morning workout. You can set up this trigger using the following lines of code.

// A sun-state-based fence that is TRUE only on Tuesdays and Thursdays around sunrise
AwarenessFence.and(
    TimeFence.aroundTimeInstant(TimeFence.TIME_INSTANT_SUNRISE,
            -10 * ONE_MINUTE_MILLIS, 5 * ONE_MINUTE_MILLIS),
    AwarenessFence.or(
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_TUESDAY,
                0, ONE_DAY_MILLIS),
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_THURSDAY,
                0, ONE_DAY_MILLIS)));

One of our favorite semantic time features is public holidays. Every country, and often regions within a country, has its own holidays. Assume you're building a local hiking & adventure app that wants to show users activities they can indulge in on a holiday that falls on a Friday or a Monday. You can use a combination of the day-of-week and holiday flags to identify this state for all of your users around the world, and the few lines of code below work in any part of the world.

// A local-time fence that is TRUE from 9 to 11 AM local time on public holidays
// in the device locale that fall on Fridays or Mondays.
AwarenessFence.and(
    TimeFence.inTimeInterval(TimeFence.TIME_INTERVAL_HOLIDAY),
    AwarenessFence.or(
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_FRIDAY,
                9 * ONE_HOUR_MILLIS, 11 * ONE_HOUR_MILLIS),
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_MONDAY,
                9 * ONE_HOUR_MILLIS, 11 * ONE_HOUR_MILLIS)));

In both example cases, Awareness does the heavy lifting of localizing time and holidays based on the device locale settings.
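
A fence by itself only describes a condition; to be notified when it becomes true, you register it with the Fence API together with a PendingIntent. The sketch below shows one way to do that registration using the GoogleApiClient-based Fence API; the fence key, broadcast action, and class name are illustrative rather than taken from this post, and the client is assumed to have been built with addApi(Awareness.API) and connected.

import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;

import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.awareness.fence.AwarenessFence;
import com.google.android.gms.awareness.fence.FenceUpdateRequest;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.common.api.Status;

public class FenceRegistration {

    // Hypothetical identifiers; use your own fence key and broadcast action.
    private static final String FENCE_KEY = "morning_workout_fence";
    private static final String FENCE_ACTION = "com.example.app.FENCE_TRIGGERED";

    // Registers an already-built fence (such as the sunrise fence above) so that a
    // BroadcastReceiver listening for FENCE_ACTION is notified when its state changes.
    public static void registerFence(Context context, GoogleApiClient client,
                                     AwarenessFence fence) {
        PendingIntent pendingIntent = PendingIntent.getBroadcast(
                context, 0, new Intent(FENCE_ACTION), PendingIntent.FLAG_UPDATE_CURRENT);

        FenceUpdateRequest request = new FenceUpdateRequest.Builder()
                .addFence(FENCE_KEY, fence, pendingIntent)
                .build();

        Awareness.FenceApi.updateFences(client, request)
                .setResultCallback(new ResultCallback<Status>() {
                    @Override
                    public void onResult(Status status) {
                        if (!status.isSuccess()) {
                            // Registration failed; log and retry as appropriate.
                        }
                    }
                });
    }
}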

We're excited to see what problems you'll solve using this powerful API. Please join our mailing list to get updates about this and other Context APIs at Google.

Android Things Developer Preview 4.1

Posted by Wayne Piekarski, Developer Advocate for IoT

Today, we're releasing a new Developer Preview 4.1 (DP4.1) of Android Things, with updates for new supported hardware and bug fixes to the platform. Android Things is Google's platform to enable Android Developers to create Internet of Things (IoT) devices, and seamlessly scale from prototype to production.

New hardware

A new Pico i.MX6UL revision B board has been released, which supports many common external peripherals from partners such as Adafruit and Pimoroni. Some prototype Pico i.MX6UL boards were made available to early beta testers, and these are not compatible with DP4.1.

Improvements

DP4.1 also includes some performance improvements since DP4, such as boot time optimizations that improve the startup time of i.MX7D based hardware. This Developer Preview also includes a version of Google Play Services specifically optimized for IoT devices. This new IoT variant is a lot smaller, is optimized for use with Android Things, and requires play-services 11.0.0 or later in your build.gradle. For more information about the supported features in the IoT variant of Google Play Services, see the information page.

Google I/O

Android Things had a large presence at Google I/O this year, with 6 talks covering different aspects of Android Things for developers, and these are available as videos in a playlist for those who could not attend:

What’s New In Google’s IoT Platform? Ubiquitous Computing at Google
Bringing Device Production to Everyone With Android Things
From Prototype to Production Devices with Android Things
Developing for Android Things Using Android Studio
Using Google Cloud and TensorFlow on Android Things
Building for Enterprise IoT Using Android Things and Google Cloud Platform

Google I/O also had a codelab area, where attendees could sit down and test out Android Things development with some simple guided training guides. These codelabs are available for anyone to try at https://codelabs.developers.google.com/?cat=IoT

Feedback

Thank you to all the developers who submitted feedback for the previous developer previews. Please continue sending us your feedback by filing bug reports and feature requests, and asking any questions on Stack Overflow. To download images for DP4.1, visit the Android Things download page and find the changes in the release notes. You can also join Google's IoT Developers Community on Google+, a great resource to get updates and discuss ideas, with over 5,600 members.