Seccomp filter in Android O

Posted by Paul Lawrence, Android Security Engineer
In Android-powered devices, the kernel does the heavy lifting to enforce the Android security model. As the security team has worked to harden Android's userspace and isolate and deprivilege processes, the kernel has become the focus of more security attacks. System calls are a common way for attackers to target the kernel.
All Android software communicates with the Linux kernel using system calls, or syscalls for short. The kernel provides many device- and SOC-specific syscalls that allow userspace processes, including apps, to directly interact with the kernel. All apps rely on this mechanism to access collections of behavior indexed by unique system calls, such as opening a file or sending a Binder message. However, many of these syscalls are not used or officially supported by Android.
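To make the mechanism concrete, here's a small sketch (not from the post; it assumes an x86-64 Linux host with Python and ctypes) showing that even a high-level call like getting the process ID bottoms out in a numbered syscall, and that the numbering is architecture-specific, which is why seccomp filters are built per architecture:

```python
import ctypes
import os

# Syscalls are indexed by number, and the tables differ per architecture:
# getpid is 39 on x86-64, while arm and arm64 use different tables. That
# per-architecture numbering is why the post quotes separate syscall counts
# for arm (364) and arm64 (271).
SYS_getpid_x86_64 = 39

libc = ctypes.CDLL(None, use_errno=True)

# os.getpid() is a friendly wrapper around the same raw kernel entry point.
pid_via_raw_syscall = libc.syscall(SYS_getpid_x86_64)
print(pid_via_raw_syscall == os.getpid())  # True on x86-64 Linux
```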
Android O takes advantage of a Linux feature called seccomp that makes unused system calls inaccessible to application software. Because these syscalls cannot be accessed by apps, they can't be exploited by potentially harmful apps.

seccomp filter

Android O includes a single seccomp filter installed into zygote, the process from which all the Android applications are derived. Because the filter is installed into zygote, and therefore into all apps, the Android security team took extra care not to break existing apps. The seccomp filter allows:
  • all the syscalls exposed via bionic (the C runtime for Android). These are defined in bionic/libc/SYSCALLS.TXT.
  • syscalls to allow Android to boot
  • syscalls used by popular Android applications, as determined by running Google's full app compatibility suite
Android O's seccomp filter blocks certain syscalls, such as swapon/swapoff, which have been implicated in some security attacks, and the key control syscalls, which are not useful to apps. In total, the filter blocks 17 of 271 syscalls in arm64 and 70 of 364 in arm.


Test your app for illegal syscalls on a device running Android O.

Detecting an illegal syscall

In Android O, the system crashes an app that uses an illegal syscall. The log printout shows the illegal syscall, for example:
03-09 16:39:32.122 15107 15107 I crash_dump32: performing dump of process 14942 (target tid = 14971)
03-09 16:39:32.127 15107 15107 F DEBUG   : *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
03-09 16:39:32.127 15107 15107 F DEBUG   : Build fingerprint: 'google/sailfish/sailfish:O/OPP1.170223.013/3795621:userdebug/dev-keys'
03-09 16:39:32.127 15107 15107 F DEBUG   : Revision: '0'
03-09 16:39:32.127 15107 15107 F DEBUG   : ABI: 'arm'
03-09 16:39:32.127 15107 15107 F DEBUG   : pid: 14942, tid: 14971, name: WorkHandler  >>> com.redacted <<<
03-09 16:39:32.127 15107 15107 F DEBUG   : signal 31 (SIGSYS), code 1 (SYS_SECCOMP), fault addr --------
03-09 16:39:32.127 15107 15107 F DEBUG   : Cause: seccomp prevented call to disallowed system call 55
03-09 16:39:32.127 15107 15107 F DEBUG   :     r0 00000091  r1 00000007  r2 ccd8c008  r3 00000001
03-09 16:39:32.127 15107 15107 F DEBUG   :     r4 00000000  r5 00000000  r6 00000000  r7 00000037
Affected developers should rework their apps to not call the illegal syscall.
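As an illustration (a hypothetical helper, not an official tool), the blocked syscall number can be read straight out of a tombstone like the one above. On 32-bit arm the syscall number travels in register r7, so it should agree with the number in the "Cause" line (0x37 == 55):

```python
import re

# An excerpt of the tombstone shown above.
TOMBSTONE = """\
F DEBUG   : signal 31 (SIGSYS), code 1 (SYS_SECCOMP), fault addr --------
F DEBUG   : Cause: seccomp prevented call to disallowed system call 55
F DEBUG   :     r4 00000000  r5 00000000  r6 00000000  r7 00000037
"""

cause = re.search(r"disallowed system call (\d+)", TOMBSTONE)
r7 = re.search(r"r7 ([0-9a-f]{8})", TOMBSTONE)

blocked_nr = int(cause.group(1))   # 55, straight from the Cause line
r7_value = int(r7.group(1), 16)    # 0x37 == 55, from the register dump
print(blocked_nr, r7_value)        # 55 55 -- the two agree
```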

Toggling seccomp filters during testing

In addition to logging errors, the seccomp installer respects setenforce on devices running userdebug and eng builds, which allows you to test whether seccomp is responsible for an issue. If you type:
adb shell setenforce 0 && adb shell stop && adb shell start
then no seccomp policy will be installed into zygote. Because you cannot remove a seccomp policy from a running process, you have to restart the shell for this option to take effect.

Device manufacturers

Because Android O includes the relevant seccomp filters at //bionic/libc/seccomp, device manufacturers don't need to do any additional implementation. However, there is a CTS test that checks for seccomp at //cts/tests/tests/security/jni/android_security_cts_SeccompTest.cpp. The test checks that add_key and keyctl syscalls are blocked and openat is allowed, along with some app-specific syscalls that must be present for compatibility.

An Update to Open Images – Now with Bounding-Boxes

Last year we introduced Open Images, a collaborative release of ~9 million images annotated with labels spanning over 6000 object categories, designed to be a useful dataset for machine learning research. The initial release featured image-level labels automatically produced by a computer vision model similar to Google Cloud Vision API, for all 9M images in the training set, and a validation set of 167K images with 1.2M human-verified image-level labels.

Today, we introduce an update to Open Images, which adds a total of ~2M bounding-boxes to the existing dataset, along with several million additional image-level labels. Details include:
  • 1.2M bounding-boxes around objects for 600 categories on the training set. These have been produced semi-automatically by an enhanced version of the technique outlined in [1], and are all human-verified.
  • Complete bounding-box annotation for all object instances of the 600 categories on the validation set, all manually drawn (830K boxes). The bounding-box annotations in the training and validation sets will enable research on object detection on this dataset. The 600 categories offer a broader range than those in the ILSVRC and COCO detection challenges, and include new objects such as fedora hat and snowman.
  • 4.3M human-verified image-level labels on the training set (over all categories). This will enable large-scale experiments on object classification, based on a clean training set with reliable labels.
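For readers planning detection experiments on these boxes: evaluation conventionally matches predicted and ground-truth boxes by intersection-over-union. A minimal sketch of the standard formula (not code from the Open Images release), with boxes given as (xmin, ymin, xmax, ymax):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes,
    each given as (xmin, ymin, xmax, ymax)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # Overlap width/height, clamped at zero when the boxes don't intersect.
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 0.14285714285714285 (i.e., 1/7)
```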
Annotated images from the Open Images dataset. Left: FAMILY MAKING A SNOWMAN by mwvchamber. Right: STANZA STUDENTI.S.S. ANNUNZIATA by ersupalermo. Both images used under CC BY 2.0 license. See more examples here.
We hope that this update to Open Images will stimulate the broader research community to experiment with object classification and detection models, and facilitate the development and evaluation of new techniques.

[1] We don't need no bounding-boxes: Training object class detectors using only human verification, Papadopoulos, Uijlings, Keller, and Ferrari, CVPR 2016

Around the world with #teampixel

With vacation mode in full swing, #teampixel members are steadily trekking into the far corners of the globe. This week’s picks range from a peaceful afternoon in a Beijing temple to the windy roads of the Great St. Bernard Pass. Take a look at our summer faves in this week’s #pixelperfect slideshow, and don’t forget to pack the sunscreen.

Have a Pixel? Tag your photos with #teampixel, and you might get featured on the ‘gram.

Dev Channel Update for Chrome OS

The Dev channel has been updated to 61.0.3159.8 (Platform version: 9756.1.0, 9756.2.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements.  A list of changes can be found here.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser). 

Ketaki Deshpande
Google Chrome

Final removal of trust in WoSign and StartCom Certificates

As previously announced, Chrome has been in the process of removing trust from certificates issued by the CA WoSign and its subsidiary StartCom, as a result of several incidents not in keeping with the high standards expected of CAs.

We started the phase-out in Chrome 56 by only trusting certificates issued prior to October 21st 2016, and subsequently restricted trust to a set of whitelisted hostnames based on the Alexa Top 1M. We have been reducing the size of the whitelist over the course of several Chrome releases.

Beginning with Chrome 61, the whitelist will be removed, resulting in full distrust of the existing WoSign and StartCom root certificates and all certificates they have issued.

Based on the Chromium Development Calendar, this change is visible in the Chrome Dev channel now, the Chrome Beta channel around late July 2017, and will be released to Stable around mid September 2017.

Sites still using StartCom or WoSign-issued certificates should consider replacing these certificates as a matter of urgency to minimize disruption for Chrome users.

Making two-step verification (2SV) deployment easier

Security Key Enforcement was launched in January 2017 and allows a G Suite Enterprise domain admin to enforce the use of security keys as a two-factor authentication option to protect users against phishing. In addition to security key enforcement, G Suite domain admins can also offer other 2SV methods, such as the Google Authenticator app, text message, or phone call. To make 2SV deployment easier at your domain, we've added two new options in the Admin console:

Admin-led security key enrollment for end users: Admins can now enroll security keys on behalf of their users. After navigating to the User page from the Admin console, click ADD NEW KEY, and you can add a new security key using the standard security key enrollment process.

2SV enrollment periods: Currently, whenever a new user is created in an organizational unit where two-step verification (2SV) is enforced, that user has to use 2SV from their first login. From administrator feedback, we found that enrollment periods would help onboard users to 2SV more efficiently.

Going forward, administrators can now specify an enrollment period from the Admin console, during which newly created users can sign in with just their passwords and complete their 2SV setup.
To learn more, check out our updated Set up 2-Step Verification guide in the Help Center.

Launch Details
Release track:
Launching to both Rapid release and Scheduled release

2SV enrollment periods are available to all G Suite edition domains
Security key enrollment improvements are available to all G Suite Enterprise edition domains

Rollout pace:
Full rollout (1-3 days for feature visibility)

Admins only

Admin action suggested/FYI

More Information
Help Center

Launch release calendar
Launch detail categories
Get these product update alerts by email
Subscribe to the RSS feed of these updates

TCP BBR congestion control comes to GCP – your Internet just got faster

We're excited to announce that Google Cloud Platform (GCP) now features a cutting-edge new congestion control algorithm, TCP BBR, which achieves higher bandwidths and lower latencies for internet traffic. This is the same BBR that powers TCP traffic from Google's own services, and that improved YouTube network throughput by 4 percent on average globally, and by more than 14 percent in some countries.

"BBR allows the 500,000 WordPress sites on our digital experience platform to load at lightning speed. According to Google’s tests, BBR's throughput can reach as much as 2,700x higher than today's best loss-based congestion control; queueing delays can be 25x lower. Network innovations like BBR are just one of the many reasons we partner with GCP." - Jason Cohen, Founder and CTO, WP Engine

GCP customers, like WP Engine, automatically benefit from BBR in two ways:

  • From GCP services to cloud users: First, when GCP customers talk to GCP services like Cloud Bigtable, Cloud Spanner or Cloud Storage, the traffic from the GCP service to the application is sent using BBR. This means speedier access to your data.
  • From Google Cloud to internet users: When a GCP customer uses Google Cloud Load Balancing or Google Cloud CDN to serve and load balance traffic for their website, the content is sent to users' browsers using BBR. This means faster webpage downloads for users of your site.

At Google, our long-term goal is to make the internet faster. Over the years, we’ve made changes to make TCP faster, and developed the Chrome web browser and the QUIC protocol. BBR is the next step. Here's the paper describing the BBR algorithm at a high level, the Internet Drafts describing BBR in detail and the BBR code for Linux TCP and QUIC.

What is BBR?

BBR ("Bottleneck Bandwidth and Round-trip propagation time") is a new congestion control algorithm developed at Google. Congestion control algorithms, which run inside every computer, phone, or tablet connected to a network, decide how fast to send data.

How does a congestion control algorithm make this decision? The internet has largely used loss-based congestion control since the late 1980s, relying only on indications of lost packets as the signal to slow down. This worked well for many years, because internet switches’ and routers’ small buffers were well-matched to the low bandwidth of internet links. As a result, buffers tended to fill up and drop excess packets right at the moment when senders had really begun sending data too fast.

But loss-based congestion control is problematic in today's diverse networks:

  • In shallow buffers, packet loss happens before congestion. With today's high-speed, long-haul links that use commodity switches with shallow buffers, loss-based congestion control can result in abysmal throughput because it overreacts, halving the sending rate upon packet loss, even if the packet loss comes from transient traffic bursts (this kind of packet loss can be quite frequent even when the link is mostly idle).
  • In deep buffers, congestion happens before packet loss. At the edge of today's internet, loss-based congestion control causes the infamous “bufferbloat” problem, by repeatedly filling the deep buffers in many last-mile links and causing seconds of needless queuing delay.

We need an algorithm that responds to actual congestion, rather than packet loss. BBR tackles this with a ground-up rewrite of congestion control. We started from scratch, using a completely new paradigm: to decide how fast to send data over the network, BBR considers how fast the network is delivering data. For a given network connection, it uses recent measurements of the network's delivery rate and round-trip time to build an explicit model that includes both the maximum recent bandwidth available to that connection, and its minimum recent round-trip delay. BBR then uses this model to control both how fast it sends data and the maximum amount of data it's willing to allow in the network at any time.
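As a toy sketch of that model (illustrative only, not the actual Linux TCP implementation): from the two measurements, BBR-style logic derives both a pacing rate and an in-flight cap via the bandwidth-delay product. The 2x gain used here is a simplification of BBR's cwnd gain:

```python
def bbr_targets(btl_bw_bps, min_rtt_s, cwnd_gain=2.0):
    """Derive sending targets from BBR's two path estimates:
    btl_bw_bps -- max recent delivery rate (bottleneck bandwidth), bits/s
    min_rtt_s  -- min recent round-trip time (propagation delay), seconds
    """
    bdp_bytes = btl_bw_bps / 8 * min_rtt_s  # bandwidth-delay product
    pacing_rate_bps = btl_bw_bps            # send at the estimated bottleneck rate
    cwnd_bytes = cwnd_gain * bdp_bytes      # allow ~2x BDP in flight at once
    return pacing_rate_bps, cwnd_bytes

# A 10 Mbit/s bottleneck with a 40 ms propagation delay:
pacing, cwnd = bbr_targets(10e6, 0.040)
print(pacing, cwnd)  # paces at 10 Mbit/s, with a ~100 KB in-flight cap
```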

Benefits for Google Cloud customers

Deploying BBR has resulted in higher throughput, lower latency and better quality of experience across Google services, relative to the previous congestion control algorithm, CUBIC. Take, for example, YouTube’s experience with BBR. Here, BBR yielded 4 percent higher network throughput, because it more effectively discovers and utilizes the bandwidth offered by the network. BBR also keeps network queues shorter, reducing round-trip time by 33 percent; this means faster responses and lower delays for latency-sensitive applications like web browsing, chat and gaming. Moreover, by not overreacting to packet loss, BBR provides 11 percent higher mean-time-between-rebuffers. These represent substantial improvements for all large user populations around the world, across both desktop and mobile users.

These results are particularly impressive because YouTube is already highly optimized; improving the experience for users watching video has long been an obsession here at Google. Ongoing experiments provide evidence that even better results are possible with continued iteration and tuning.

The benefits of BBR translate beyond Google and YouTube, because they're fundamental. A few synthetic microbenchmarks illustrate the nature (though not necessarily the typical magnitude) of the advantages:

  • Higher throughput: BBR enables big throughput improvements on high-speed, long-haul links. Consider a typical server-class computer with a 10 Gigabit Ethernet link, sending over a path with a 100 ms round-trip time (say, Chicago to Berlin) with a packet loss rate of 1%. In such a case, BBR's throughput is 2700x higher than today's best loss-based congestion control, CUBIC (CUBIC gets about 3.3 Mbps, while BBR gets over 9,100 Mbps). Because of this loss resiliency, a single BBR connection can fully utilize a path with packet loss. This makes it a great match for HTTP/2, which uses a single connection, and means users no longer need to resort to workarounds like opening several TCP connections to reach full utilization. The end result is faster traffic on today's high-speed backbones, and significantly increased bandwidth and reduced download times for webpages, videos or other data.
  • Lower latency: BBR enables significant reductions in latency in last-mile networks that connect users to the internet. Consider a typical last-mile link with 10 Megabits of bandwidth, a 40 ms round-trip time, and a typical 1000-packet bottleneck buffer. In a scenario like this, BBR keeps queuing delay 25x lower than CUBIC (CUBIC has a median round-trip time of 1090 ms, versus just 43 ms for BBR). BBR reduces queues and thus queuing delays on last-mile links while watching videos or downloading software, for faster web surfing and more responsive video conferencing and gaming. Because of this ability to curb bufferbloat, one might say that BBR could also stand for BufferBloat Resilience, in addition to Bottleneck Bandwidth and Round-trip propagation time.
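The throughput numbers in the first bullet can be sanity-checked against the classic loss-based throughput bound (the Mathis et al. formula for Reno-style TCP; CUBIC's constant differs, which is why the post quotes ~3.3 Mbps, but it shares the same 1/sqrt(p) dependence on loss):

```python
import math

def loss_bound_bps(mss_bytes, rtt_s, loss_rate, C=1.22):
    """Mathis et al. bound for loss-based TCP:
    rate <= (MSS / RTT) * (C / sqrt(p)), with C ~= 1.22 for Reno."""
    return (mss_bytes * 8 / rtt_s) * (C / math.sqrt(loss_rate))

# The scenario above: 100 ms RTT, 1% loss, a standard 1460-byte MSS.
rate = loss_bound_bps(1460, 0.100, 0.01)
print(f"{rate / 1e6:.1f} Mbit/s")  # ~1.4 Mbit/s -- nowhere near the 10 Gbit/s link
```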

GCP is continually evolving, leveraging Google technologies like Espresso, Jupiter, Andromeda, gRPC, Maglev, Cloud Bigtable and Spanner. Open source TCP BBR is just the latest example of how Google innovations provide industry-leading performance.

If you're interested in participating in discussions, keeping up with the latest BBR-related news, watching videos of talks about BBR or pitching in on open source BBR development or testing, join the public bbr-dev e-mail group.

How we’re collaborating with Citrix to deliver cloud-based desktop apps

Businesses of all types are accelerating their transition to the cloud, and for many, desktop infrastructure and applications are part of this journey. Customers often tell us they want to be able to use their current desktop applications from any device and any place just as easily and securely as they can use G Suite.

That’s why today, we’re announcing a collaboration with Citrix to help deliver desktop applications running in a cloud-hosted environment.

Managing and delivering hosted desktop applications requires several pieces of technology: Google brings highly scalable and reliable infrastructure, a global network to reach customers and employees wherever they may be, and a team of security engineers who work to keep Google Cloud customers secure. Citrix brings the application management, backup, and redundancy of XenApp, its desktop virtualization suite, and application delivery with Netscaler. Finally, Google Chromebooks and Android devices together with Citrix XenApp offer a highly secure, managed endpoint that provides users with a safe and user-friendly experience on which to use applications.

All this requires close partnership and excellence in engineering. Google and Citrix have collaborated for years and we're expanding that relationship today in a few key ways:

  • Simplifying the path for customers to more securely transition to the cloud by bringing Citrix Cloud to Google Cloud Platform (GCP)

  • Bringing the application load balancing expertise of Netscaler to the world of containers via Netscaler CPX on GCP

  • Integrating ShareFile with G Suite to work with Gmail and to edit and store Google Docs natively

  • Expanding use of secure devices with Citrix Receiver for Chrome and Android

This collaboration helps address key challenges faced by enterprises moving to the cloud quickly and securely. Both Google and Citrix look forward to making our products work together and to delivering a great combined experience for our customers.

Source: Google Cloud

Daydream Labs: Teaching Skills in VR

You can read every recipe, but to really learn how to cook, you need time in the kitchen. Wouldn't it be great if you could slip on a VR headset and have a famous chef walk you through the basics step by step? In the future, you might be able to learn how to cook a delicious five-course meal—all in VR. In fact, virtual reality could help people learn all kinds of skills.

At Daydream Labs, we tried to better understand how interactive learning might work in VR. So we set up an experiment aimed at teaching coffee-making. We built a training prototype featuring a 3D model of an espresso machine that reacts like a real one would when you press the buttons, turn the knobs, or pour the milk. We also added a detailed tutorial. Then, we tasked one group of people with learning how to pull espresso shots by doing it in VR. (At the end, we gave people a detailed report on how they’d done, including an analysis of the quality of their coffee.) For comparison, another group learned by watching YouTube videos. Both groups were able to train for as long as they liked before trying to make a coffee in the real world; people assigned to watch the YouTube tutorial typically did so three times, and people who took the VR training typically went through it twice.

A scene from our coffee training prototype

We were excited to find that people learned faster and better in VR. Both the number of mistakes made and the time to complete an espresso were significantly lower for those trained in VR (although, in fairness, our tasting panel wasn't terribly impressed with the espressos made by either group!). It's impossible to tell from one experiment, of course, but these early results are promising. We also learned a lot about how to design future experiments. Here's a glimpse at some of those insights.

Another scene from our coffee training prototype

First, making coffee was a bad choice. The physical sensation of tamping simply can't be replicated with a haptic buzz. And no matter what warning we flashed when someone virtually touched a hot steam nozzle, they frequently got too close to it in the real world, and we needed a chaperone at the ready to grab their hand away. This suggests that VR technology isn’t quite there when it comes to learning some skills. Until gloves with much better tracking and haptics are mainstream, VR training will be limited to inputs like moving things around or pressing buttons. And if the digital analog is too far removed from the thing it's simulating, it probably won’t help all that much with actually learning the skill.

We also learned that people don’t follow instructions. We see this in all of the prototypes made in Daydream Labs, but it was especially problematic in the trainer. Instructions on controllers? People left their hands by their sides. Written on a backboard? They were too busy with what was right in front of them. Delivered as a voiceover? They rushed ahead without waiting. We even added a “hint” button, but people thought that it was cheating, and forgot about it after a step or two anyway. We ended up needing to combine all of these methods and add in-scene markers, too. Large green arrows pointing at whatever the user was supposed to interact with next worked well enough to allow us to run the test. But we’ve by no means solved this problem, and we learned that lots more work needs to be done on incorporating instructions effectively.

Finally, we discovered that it was too difficult to track all the steps a person took. Every choice we gave a user led to exponential growth in the number of paths through the tutorial. Worse, people didn't always follow our linear “railroad-style” path, so we had to model all kinds of situations; for example, letting the user steam the milk before grinding the coffee. In the end, it was much easier to model the trainer like a video game, where every object has its own state. So instead of having the trainer keep track of all the steps the user took in order ("user has added milk to cup"), we had it track whether a key step had been achieved ("cup contains milk").
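In code, the shift is from replaying an ordered script of steps to giving each object its own state and checking the resulting world state. A simplified sketch with hypothetical names (not the actual trainer code):

```python
from dataclasses import dataclass, field

@dataclass
class Cup:
    # Each object tracks its own state, video-game style.
    contains_espresso: bool = False
    contains_milk: bool = False

@dataclass
class Trainer:
    cup: Cup = field(default_factory=Cup)

    def pull_shot(self):
        self.cup.contains_espresso = True

    def steam_and_pour_milk(self):
        self.cup.contains_milk = True

    def latte_done(self):
        # Check the resulting world state, not the order of steps taken.
        return self.cup.contains_espresso and self.cup.contains_milk

t = Trainer()
t.steam_and_pour_milk()  # the user steamed milk first...
t.pull_shot()            # ...then pulled the shot; the order doesn't matter
print(t.latte_done())    # True
```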

Despite these challenges, we consider this prototype a success: people learned something new in VR and enjoyed the process. Some of them even came back to use the espresso trainer again after they’d tried to make a real coffee. In fact, once they had the real-world experience, the virtual training had more context and was more meaningful to them. It may be that VR is a useful way to introduce people to a new skill, and then can help them practice and consolidate once they’ve tried it in the real world. One thing’s for sure—there’s a lot more to learn about learning in VR!

Bringing new Redirect Method features to YouTube

A month ago, we told you about four new steps we’re taking to combat terrorist content on YouTube. One of our core areas of focus is more work to counter online violent extremism. As a first step we’re now rolling out features from Jigsaw’s Redirect Method on YouTube.

Over the past few years, Jigsaw partnered with Moonshot CVE to conduct extensive research to understand how extremist groups leverage technology to spread their message and recruit new members. From there, they created the Redirect Method, which uses curated video content to redirect people away from violent extremist propaganda and steer them toward video content that confronts extremist messages and debunks their mythology. Today, YouTube is rolling out a feature using the model proven by the Redirect Method: when people search for certain keywords on YouTube, we will display a playlist of videos debunking violent extremist recruiting narratives.

This early product integration of the Redirect Method on YouTube is our latest effort to provide more resources and more content that can help change the minds of people at risk of being radicalized. Over the coming weeks, we hope to build on this by:

  • Expanding the new YouTube product functionality to a wider set of search queries in other languages beyond English.
  • Using machine learning to dynamically update the search query terms.
  • Working with expert NGOs on developing new video content designed to counter violent extremist messaging at different parts of the radicalization funnel.
  • Collaborating with Jigsaw to expand the “Redirect Method” in Europe.

This work is made possible by our partnerships with NGOs that are experts in this field, and we will continue to collaborate closely with them to help support their research through our technological tools. We hope our work together will also help open and broaden a dialogue about other work that can be done to counter radicalization of potential recruits.

As we develop this model of the Redirect Method on YouTube, we’ll measure success by how much people engage with this content. Stay tuned for more.

The YouTube Team

Source: YouTube Blog