Monthly Archives: June 2020

Filter out disruptive noise in Google Meet

Quick launch summary 

To help limit interruptions to your meeting, Google Meet can now intelligently filter out background noise like keyboard typing, doors opening and closing, and construction outside your window. Cloud-based AI is used to remove noise from your audio input while still letting your voice through. 

We had previously announced this top-requested feature and are now beginning to roll it out to G Suite Enterprise and G Suite Enterprise for Education customers using Meet on the web. We will bring the feature to mobile users soon, and will announce on the G Suite Updates blog when it’s available. 



Rollout pace 

  • Now available to all web users in most countries. 
  • For users in Australia, Brazil, India, Japan, and New Zealand, extended rollout (potentially longer than 15 days for feature visibility) starting on June 30, 2020. 
  • Not currently available in some countries (including South Africa, the UAE, and surrounding locales). See our Help Center for more availability details.

Availability 

  • Available to G Suite Enterprise and G Suite Enterprise for Education customers*
  • Not available to G Suite Basic, G Suite Business, G Suite for Education, and G Suite for Nonprofits customers 

*Availability in alternative packages is variable and based on your services.

Four years later, Google’s first Code Next class is graduating

My weekday routine is a balancing act. When I walk to the subway station at six in the morning, it's typically still dark outside. If I'm lucky, I'll snag a seat on the 5 train for the hour-long ride from the Bronx to Manhattan, but most days I'm standing—balancing with one hand on a pole and the other gripping my phone (usually working on something on Google Docs for class at the last minute).

Cindy Hernandez

I'm headed to Yale in the fall!

I’m one of the 54 students who make up Google Code Next’s first graduating class. Code Next, which started in 2015, is a free computer science education program that supports the next generation of Black and Latinx tech leaders. 

For four years after school and on the weekends, my classmates and I participated in a rigorous curriculum focused on computer science, problem solving and leadership—balancing that on top of our schoolwork. Our coaches from Google, who have lots of different backgrounds (from software engineering to youth development), provided hands-on coding instruction, inspiration, and guidance as we navigated our way through the Code Next program. Along the way, we developed websites, applications, and hardware models.

I had never coded before participating in Code Next. I didn’t think it was for me, but my mother pushed me to sign up, so I gave it a try. Looking back on the past four years, I admit I’m lucky that I listened. During my freshman and sophomore years, I was at Code Next every day, working on projects even before they were due, and often just for fun. 

I work really hard on what I’m passionate about and coding became my passion. One time, we were asked to make a digital ping pong game from scratch—we had to write all of the code ourselves. There were awards for certain categories (like display and ease of use), and I won most of them, if not all. I always remember that moment because I was really proud of myself, bringing the awards to the coaches to show them what we had done.

There was another time when I participated in a coding competition hosted on Google’s campus. It wasn’t affiliated with Code Next, but my coaches still showed up to watch and support me from the sidelines. I ended up winning first place by designing a website from scratch. It was a huge accomplishment for me. I had never coded before Code Next, so to win a competition where everyone was really smart, I thought, “Wow, maybe this is something I’m good at and maybe I can turn this into a career for myself in the future.”

I hope to be a software engineer one day. I dream of going to Japan, learning Japanese and maybe even working there. Until then, I’ll be attending Yale University in the fall—I’m the first person in my family to go to college. 

If it weren’t for all my coaches at Code Next, I definitely would not be where I am today. It was because of Code Next and the way it was taught that I truly found my passion. Here are a few other proud graduates of Google’s first Code Next class. They’ve shared a bit about themselves, their aspirations and dreams for the future. 

SpineNet: A Novel Architecture for Object Detection Discovered with Neural Architecture Search



Convolutional neural networks created for image tasks typically encode an input image into a sequence of intermediate features that capture the semantics of an image (from local to global), where each subsequent layer has a lower spatial dimension. However, this scale-decreased model may not be able to deliver strong features for multi-scale visual recognition tasks where recognition and localization are both important (e.g., object detection and segmentation). Several works, including FPN and DeepLabv3+, propose multi-scale encoder-decoder architectures to address this issue, where a scale-decreased network (e.g., a ResNet) is taken as the encoder (commonly referred to as a backbone model). A decoder network is then applied to the backbone to recover the spatial information.

While this architecture has yielded improved success for image recognition and localization tasks, it still relies on a scale-decreased backbone that throws away spatial information by down-sampling, which the decoder then must attempt to recover. What if one were to design an alternate backbone model that avoids this loss of spatial information, and is thus inherently well-suited for simultaneous image recognition and localization?

In our recent CVPR 2020 paper “SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization”, we propose a meta architecture called a scale-permuted model that enables two major improvements on backbone architecture design. First, the spatial resolution of intermediate feature maps should be able to increase or decrease anytime so that the model can retain spatial information as it grows deeper. Second, the connections between feature maps should be able to go across feature scales to facilitate multi-scale feature fusion. We then use neural architecture search (NAS) with a novel search space design that includes these features to discover an effective scale-permuted model. We demonstrate that this model is successful in multi-scale visual recognition tasks, outperforming networks with standard, scale-decreased backbones. To facilitate continued work in this space, we have open-sourced the SpineNet code to the TensorFlow TPU GitHub repository (TensorFlow 1) and the TensorFlow Model Garden GitHub repository (TensorFlow 2).

A scale-decreased backbone is shown on the left and a scale-permuted backbone is shown on the right. Each rectangle represents a building block. Colors and shapes represent different spatial resolutions and feature dimensions. Arrows represent connections among building blocks.

Design of SpineNet Architecture

To design the SpineNet architecture efficiently, and to avoid a time-intensive manual search, we leverage NAS to determine an optimal architecture. The backbone model is learned on the object detection task using the COCO dataset, which requires simultaneous recognition and localization. During architecture search, we learn three things:
  • Scale permutations: The orderings of network building blocks are important because each block can only be built from those that already exist (i.e., with a “lower ordering”). We define the search space of scale permutations by rearranging intermediate and output blocks separately.
  • Cross-scale connections: We define two input connections for each block in the search space. The parent blocks can be any block with a lower ordering or a block from the stem network.
  • Block adjustments (optional): We allow the block to adjust its scale level and type.

The architecture search process from a scale-decreased backbone to a scale-permuted backbone.

Taking the ResNet-50 backbone as the seed for the NAS search, we first learn scale permutations and cross-scale connections. All candidate models in the search space have roughly the same computation as ResNet-50, since we just permute the ordering of feature blocks to obtain candidate models. The learned scale-permuted model outperforms ResNet-50-FPN by +2.9% average precision (AP) on the object detection task. The efficiency can be further improved (-10% FLOPs) by adding search options to adjust the scale and type (e.g., residual block or bottleneck block, used in the ResNet model family) of each candidate feature block.

We name the learned 49-layer scale-permuted backbone architecture SpineNet-49. SpineNet-49 can be further scaled up to SpineNet-96/143/190 by repeating blocks two, three, or four times and increasing the feature dimension. An architecture comparison between ResNet-50-FPN and the final SpineNet-49 is shown below.

The architecture comparison between a ResNet backbone (left) and the SpineNet backbone (right) derived from it using NAS.

Performance

We demonstrate the performance of SpineNet models through comparison with ResNet-FPN. Using similar building blocks, SpineNet models outperform their ResNet-FPN counterparts by ~3% AP at various scales while using 10-20% fewer FLOPs. In particular, our largest model, SpineNet-190, achieves 52.1% AP on COCO for a single model without multi-scale testing during inference, significantly outperforming prior detectors. SpineNet also transfers to classification tasks, achieving a 5% top-1 accuracy improvement on the challenging iNaturalist fine-grained dataset.

Performance comparisons of SpineNet models and ResNet-FPN models adopting the RetinaNet detection framework on COCO bounding box detection.

Performance comparisons of SpineNet models and ResNet models on ImageNet classification and iNaturalist fine-grained image classification.

Conclusion

In this work, we identify that the conventional scale-decreased model, even with a decoder network, is not effective for simultaneous recognition and localization. We propose the scale-permuted model, a new meta-architecture, to address the issue. To demonstrate the effectiveness of scale-permuted models, we learn SpineNet with neural architecture search on object detection and show that it can be used directly in image classification. In the future, we hope the scale-permuted model will become the meta-architecture design for backbones across many visual tasks beyond detection and classification.

Acknowledgements
Special thanks to the co-authors of the paper: Tsung-Yi Lin, Pengchong Jin, Golnaz Ghiasi, Mingxing Tan, Yin Cui, Quoc V. Le, and Xiaodan Song. We also would like to acknowledge Yeqing Li, Youlong Cheng, Jing Li, Jianwei Xie, Russell Power, Hongkun Yu, Chad Richards, Liang-Chieh Chen, Anelia Angelova, and the larger Google Brain Team for their help.

Source: Google AI Blog


System hardening in Android 11

Posted by Platform Hardening Team

In Android 11 we continue to increase the security of the Android platform. We have moved to safer default settings, migrated to a hardened memory allocator, and expanded the use of compiler mitigations that defend against classes of vulnerabilities and frustrate exploitation techniques.

Initializing memory

We’ve enabled forms of automatic memory initialization in both Android 11’s userspace and the Linux kernel. Uninitialized memory bugs occur in C/C++ when memory is used without having first been initialized to a known safe value. These bugs can be confusing, and even the term “uninitialized” is misleading: it may seem to imply that a variable has a random value, but in reality it has whatever value was previously placed there. That value may be predictable or even attacker controlled. Unfortunately, this behavior can result in serious vulnerabilities, such as information disclosure bugs (including ASLR bypasses) or control flow hijacking via a stack or heap spray. Another possible side effect of using uninitialized values is that advanced compiler optimizations may transform the code unpredictably, since this is undefined behavior under the relevant C standards.
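
As a minimal sketch of the bug class (a hypothetical example, not code from the Android platform), consider a C function that reads a local variable on a path where it was never written:

```c
#include <stdio.h>

/* 'flags' is only assigned on one path; on the other path the function
 * returns whatever stale value happened to be on the stack, which may be
 * predictable or even attacker controlled. */
int check_access(int uid) {
    int flags;              /* uninitialized */
    if (uid == 0) {
        flags = 1;
    }
    return flags;           /* undefined behavior when uid != 0 */
}

int main(void) {
    printf("%d\n", check_access(42));  /* may leak stale stack contents */
    return 0;
}
```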

In practice, uses of uninitialized memory are difficult to detect. Such errors may sit in the codebase unnoticed for years if the memory happens to be initialized with some "safe" value most of the time. When uninitialized memory results in a bug, it is often challenging to identify the source of the error, particularly if it is rarely triggered.

Eliminating an entire class of such bugs is a lot more effective than hunting them down individually. Automatic stack variable initialization relies on a feature in the Clang compiler that allows local variables to be initialized with either zeros or a pattern.

Initializing to zero provides safer defaults for strings, pointers, indexes, and sizes. The downsides of zero init are less-safe defaults for return values, and the fact that it exposes fewer bugs in code that implicitly relies on zero initialization. Pattern initialization tends to expose more bugs and is generally safer for return values, but less safe for strings, pointers, indexes, and sizes.
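
Both behaviors map to Clang’s -ftrivial-auto-var-init flag (spelling from upstream Clang; some versions gate the zero mode behind an extra opt-in flag). Applied to the hypothetical example above:

```c
/* Build the earlier sketch with compiler-enforced initialization:
 *
 *   clang -ftrivial-auto-var-init=pattern leak.c   (fill with a repeated byte pattern)
 *   clang -ftrivial-auto-var-init=zero leak.c      (fill with zeros)
 *
 * Pattern init makes check_access(42) return a recognizable poison value
 * (e.g. 0xAAAAAAAA) that tends to fail loudly and expose the bug; zero init
 * makes it return 0, a safer default that can nonetheless hide bugs in code
 * that silently relies on zero-filled memory. */
#include <stdio.h>

int check_access(int uid) {
    int flags;               /* now auto-initialized by the compiler */
    if (uid == 0) {
        flags = 1;
    }
    return flags;
}

int main(void) {
    printf("0x%x\n", (unsigned)check_access(42));
    return 0;
}
```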

Initializing Userspace:

Automatic stack variable initialization is enabled throughout the entire Android userspace. During the development of Android 11, we initially selected pattern init in order to uncover bugs that relied on zero init, and then moved to zero init after a few months for increased safety. Platform OS developers can build with `AUTO_PATTERN_INITIALIZE=true m` if they want help uncovering bugs that rely on zero init.

Initializing the Kernel:

Automatic stack and heap initialization were recently merged in the upstream Linux kernel. We have made these features available on earlier versions of Android’s kernel, including 4.14, 4.19, and 5.4. These features enforce initialization of local variables and heap allocations with known values that cannot be controlled by attackers and are useless when leaked. Both features incur a performance overhead, but they also prevent undefined behavior, improving both stability and security.

For kernel stack initialization we adopted the CONFIG_INIT_STACK_ALL configuration option from upstream Linux. It currently relies on Clang pattern initialization for stack variables, although this is subject to change in the future.

Heap initialization is controlled by two boot-time flags, init_on_alloc and init_on_free, with the former wiping freshly allocated heap objects with zeroes (think s/kmalloc/kzalloc in the whole kernel) and the latter doing the same before objects are freed (this helps to reduce the lifetime of security-sensitive data). init_on_alloc is a lot more cache-friendly and has a smaller performance impact (within 2%), so it has been chosen to protect Android kernels.
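
As a rough userspace analogy (the real logic lives in the kernel’s page and slab allocators, not in code like this), the two flags behave as if every allocation and free path were wrapped as follows:

```c
#include <stdlib.h>
#include <string.h>

/* Userspace analogy of the kernel's boot-time heap-initialization flags. */
static int init_on_alloc = 1;   /* wipe objects as they are handed out */
static int init_on_free  = 0;   /* wipe objects just before they are freed */

void *hardened_alloc(size_t size) {
    void *p = malloc(size);
    if (p && init_on_alloc)
        memset(p, 0, size);     /* think s/kmalloc/kzalloc/ */
    return p;
}

void hardened_free(void *p, size_t size) {
    if (p && init_on_free)
        memset(p, 0, size);     /* shortens the lifetime of sensitive data */
    free(p);
}

int main(void) {
    char *buf = hardened_alloc(64);   /* arrives zeroed under init_on_alloc */
    hardened_free(buf, 64);
    return 0;
}
```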

Scudo is now Android's default native allocator

In Android 11, Scudo replaces jemalloc as the default native allocator for Android. Scudo is a hardened memory allocator designed to help detect and mitigate memory corruption bugs in the heap, such as heap-based buffer overflows, use-after-free, and double free.

Scudo does not fully prevent exploitation but it does add a number of sanity checks which are effective at strengthening the heap against some memory corruption bugs.

It also proactively organizes the heap in a way that makes exploitation of memory corruption more difficult, by reducing the predictability of allocation patterns and separating allocations by size.

In our internal testing, Scudo has already proven its worth by surfacing security and stability bugs that were previously undetected.
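
For example, a double free is one of the invalid states Scudo’s per-chunk headers are designed to catch. A deliberately buggy sketch (on a device using Scudo, the second free aborts the process with a diagnostic instead of corrupting the heap):

```c
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *p = malloc(32);
    if (!p) return 1;
    strcpy(p, "hello");
    free(p);
    free(p);   /* double free: the allocator detects the invalid chunk state */
    return 0;
}
```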

Finding Heap Memory Safety Bugs in the Wild (GWP-ASan)

Android 11 introduces GWP-ASan, an in-production heap memory safety bug detection tool that's integrated directly into the native allocator Scudo. GWP-ASan probabilistically detects and provides actionable reports for heap memory safety bugs when they occur, works on 32-bit and 64-bit processes, and is enabled by default for system processes and system apps.

GWP-ASan is also available for developer applications via a one line opt-in in an app's AndroidManifest.xml, with no complicated build support or recompilation of prebuilt libraries necessary.

Software Tag-Based KASAN

Continuing work on adopting the Arm Memory Tagging Extension (MTE) in Android, Android 11 includes support for kernel HWASAN, also known as Software Tag-Based KASAN. Userspace HWASAN has been supported since Android 10.

KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to find out-of-bounds and use-after-free bugs in the Linux kernel. Its Software Tag-Based mode is a software implementation of the memory tagging concept for the kernel. Software Tag-Based KASAN is available in the 4.14, 4.19, and 5.4 Android kernels, and can be enabled with the CONFIG_KASAN_SW_TAGS kernel configuration option. Currently Tag-Based KASAN only supports tagging of slab memory; support for other types of memory (such as stack and globals) will be added in the future.
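
The bug classes in question look like the following, written here as a plain userspace C analogue (the real detector instruments kernel slab allocations; building this with clang -fsanitize=address flags the same patterns in userspace):

```c
#include <stdlib.h>

/* Deliberately buggy: both reads below are the kinds of errors KASAN reports. */
int main(void) {
    int *buf = malloc(8 * sizeof(int));
    if (!buf) return 1;

    int oob = buf[8];    /* out-of-bounds read, one element past the end */

    free(buf);
    int uaf = buf[0];    /* use-after-free read */

    return oob + uaf;
}
```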

Compared to Generic KASAN, Tag-Based KASAN has significantly lower memory requirements (see this kernel commit for details), which makes it usable on dogfood testing devices. Another use case for Software Tag-Based KASAN is checking existing kernel code for compatibility with memory tagging. Because Tag-Based KASAN is built on concepts similar to the future in-kernel MTE support, making sure that kernel code works with Tag-Based KASAN will ease in-kernel MTE integration later on.

Expanding existing compiler mitigations

We’ve also continued to expand the compiler mitigations rolled out in prior releases. This includes adding both integer and bounds sanitizers to some core libraries that were lacking them. For example, the libminikin font library and the libui rendering library are now bounds sanitized. We’ve hardened the NFC stack by enabling both the integer overflow sanitizer and the bounds sanitizer in those components.
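
The same sanitizers exist in stock Clang, so their effect is easy to reproduce outside the platform build (flag spellings from upstream Clang):

```c
/* Build:  clang -fsanitize=signed-integer-overflow,bounds demo.c
 * Each marked line produces a runtime diagnostic instead of silently
 * wrapping or reading out of bounds. */
#include <stdio.h>
#include <limits.h>

int main(int argc, char **argv) {
    (void)argv;

    int len = INT_MAX;
    len += argc;                      /* signed integer overflow: trapped */

    int table[4] = {0, 1, 2, 3};
    printf("%d\n", table[argc + 3]);  /* out-of-bounds index when argc >= 1: trapped */

    return len;
}
```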

In addition to hard mitigations like sanitizers, we also continue to expand our use of CFI as an exploit mitigation. CFI has been enabled in Android’s networking daemon, DNS resolver, and more of our core JavaScript libraries like libv8 and the PacProcessor.
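
CFI verifies that an indirect call’s target matches the type of the function pointer used to make it. A minimal reproduction with upstream Clang (CFI requires LTO and hidden visibility):

```c
/* Build:  clang -flto -fvisibility=hidden -fsanitize=cfi cfi.c */
#include <stdio.h>

static int add_one(int x) { return x + 1; }

int main(void) {
    /* Cast the function to an incompatible pointer type. With CFI enabled,
     * the indirect call below aborts instead of jumping to a target whose
     * type does not match the call site. */
    void (*fp)(void) = (void (*)(void))add_one;
    fp();
    printf("not reached under CFI\n");
    return 0;
}
```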

The effectiveness of our software codec sandbox

Prior to the release of Android 10 we announced a new constrained sandbox for software codecs. We’re really pleased with the results: thus far, Android 10 is the first Android release since the infamous Stagefright vulnerabilities in Android 5.0 with zero critical-severity vulnerabilities in the media frameworks.

Thank you to Jeff Vander Stoep, Alexander Potapenko, Stephen Hines, Andrey Konovalov, Mitch Phillips, Ivan Lozano, Kostya Kortchinsky, Christopher Ferris, Cindy Zhou, Evgenii Stepanov, Kevin Deus, Peter Collingbourne, Elliott Hughes, Kees Cook and Ken Chen for their contributions to this post.

Stadia Savepoint: June updates

With June coming to an end, it's time for another update in our Stadia Savepoint series. Here are the updates we’ve made this month to the Stadia platform:

Touch controls on mobile

Access touch controls within any game on your mobile device when a controller is not already connected.

Expanded OnePlus compatibility

Stadia is now compatible with OnePlus 5, OnePlus 6, and OnePlus 7 series mobile devices.

Per-device resolution settings

Added the ability to set your preferred resolution on each device that you play Stadia on. 

Experiments tab supports additional mobile devices

Any Android phone that can install the Stadia app can play games using the Experiments tab in the settings menu. 

Wireless Stadia Controller functionality on mobile

We’re rolling out support for wireless play using the Stadia Controller on your mobile device. Just link your Stadia Controller to your phone by entering the linking code shown on your screen.

This month, players adventured across the lands of Tamriel in The Elder Scrolls Online and learned how to pull off trick combos on boats in Wave Break, in addition to many other games now available for purchase on the Stadia store. We also announced new games coming to Stadia, including the survival adventure Windbound on August 28 and a chance to enter a world inspired by classic JRPGs with Cris Tales on November 17.

If you sign up for Stadia, you’ll get one free month of Stadia Pro and instant access to eighteen games, including PLAYERUNKNOWN’S BATTLEGROUNDS, Zombie Army 4: Dead War, Destiny 2: The Collection, and The Elder Scrolls Online. In addition, if you’ve ever signed up for Stadia Pro, you’ll receive $10 off your next purchase of any game from the Stadia store. 

Start playing Stadia on your TV for $99.99 with the new Stadia Premiere Edition, complete with a Stadia Controller and Chromecast Ultra. 

That’s it for June—we’ll be back soon to share more updates. As always, stay tuned to the Stadia Community Blog, Facebook, and Twitter for the latest news.

Announcing Enhanced Smart Home Analytics

Posted by Toni Klopfenstein, Developer Advocate

When creating scalable applications, consistent and reliable monitoring of resources is a valuable tool for any developer. Today we are releasing enhanced analytics and logging for Smart Home Actions. This feature enables you to more quickly identify and respond to errors or quality issues that may arise.

Request Latency Dashboard

You can now access the smart home dashboard with pre-populated metrics charts for your Actions on the Analytics tab in the Actions Console, or through Cloud Monitoring. These metrics help you quantify the health and usage of your Action, and gain insight into how users engage with your Action. You can view:

  • Execution types and device traits used
  • Daily users and request counts
  • User query response latency
  • Success rate for Smart Home engagements
  • Comparison of cloud and local fulfilment interactions

Successful Requests Dashboard

Cloud Logging provides detailed logs based on the events observed in Cloud Monitoring.

We've added additional features to the error logs to help you quickly debug why intents fail, which particular device commands malfunction, or if your local fulfilment falls back to cloud fulfilment.

New details added to the event logs include:

  • Cloud vs. local fulfilment
  • EXECUTE vs. QUERY intents
  • Locale of request
  • Device Type

You can additionally export these logs through Cloud Pub/Sub, and build log-based metrics and alerts for your development teams to gain insights into common issues.

For more guidance on accessing your Smart Home Action analytics and logs, check out the developer guide or watch the video.

We want to hear from you! Continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!

To our YouTube TV members: an update to our content and price

In 2017, we introduced YouTube TV, live TV designed for the YouTube generation: those who want to stream TV when and how they want, without commitments. We’ve just passed the three-year mark, so I wanted to take this opportunity to update you on how we’re thinking about YouTube TV.

Since launch, we’ve listened to your feedback and worked to build an experience that fits the needs of everyone in your family, by adding highly requested content like PBS and Discovery Network brands, including HGTV and Food Network, and launching new features to reinvent how you watch live TV.

As we continue to build a best-in-class experience for you, we have a few updates to share: new content launching today, new features we’ve recently introduced, and an updated price.

More content to enjoy, starting today


Earlier this year, we let you know that we’d soon be adding more of ViacomCBS’s family of brands to YouTube TV, which includes 8 of your favorite channels launching today: BET, CMT, Comedy Central, MTV, Nickelodeon, Paramount Network, TV Land and VH1.

That means you can follow the biggest stories in news, politics and pop culture with “The Daily Show with Trevor Noah;” catch up with Catelynn, Cheyenne, Maci, Mackenzie and Amber on “Teen Mom OG;” join the search for America’s next drag superstar with “RuPaul’s Drag Race;” go on an adventure with “SpongeBob SquarePants;” and follow the fictional lives of the Dutton family on the new season of “Yellowstone,” airing now.

BET Her, MTV2, MTV Classic, Nick Jr., NickToons, and TeenNick are also set to come to YouTube TV at a later date.

In addition to our base plan, now with more than 85 channels, we also recently introduced Cinemax and HBO Max, which includes all of HBO plus a robust library of content and original series, to our growing list of add-on channels, making YouTube TV your one-stop shop for entertainment.

The latest features to try while you sit back with your favorite shows


We’re always listening to our members’ feedback on the channels they want to see on YouTube TV, but we’re also continuously building new features and making improvements that reinvent how you watch TV and interact with your favorite content on YouTube TV. Here are just a few of the features and updates we’ve launched recently:


  • Jump to the news that matters most to you: We’ve been testing a new feature that allows you to jump to various segments within select news programs on YouTube TV, and have just brought this to all users. Similar to our key plays view for sports, on some programs you’ll be able to jump to specific news clips within the complete recording. This feature is available on TV screens now and will come to mobile devices in the coming weeks.
  • Control over your recorded content: In addition to your unlimited DVR space, YouTube TV members can pause, rewind, and fast forward through all their recorded shows, regardless of network.
  • Go easy on the eyes with Dark Mode: We recently introduced a dark theme to both desktop and mobile devices to help tone down your screen’s glare and experience YouTube TV with a dark background.
  • Mark a show as watched: You now have an option to select “Mark as Watched” on desktop and mobile devices for any TV show you’ve already seen, a top requested feature from our members.
  • A fresh new look for the Live Guide: Based on your feedback, we’ve updated the Live Guide on desktop so you can see what’s on now and scroll ahead up to 7 days into the future.


An update to your price


As we continue to evaluate how to provide the best possible service and content for you, our membership price will be $64.99 per month. This new price takes effect today, June 30, for new members. Existing subscribers will see these changes reflected in their subsequent billing cycle on or after July 30.

We don’t take these decisions lightly, and we realize how hard this is for our members. That said, this new price reflects the rising cost of content, and we also believe it reflects the complete value of YouTube TV, from our breadth of content to the features that are changing how we watch live TV. YouTube TV is the only streaming service that includes a DVR with unlimited storage space, plus 6 accounts per household, each with its own unique recommendations, and 3 concurrent streams. It’s all included in the base cost of YouTube TV, with no contract and no hidden fees.

While we would love every member to continue to stay with our service, we understand that some of you may choose to pause or cancel your membership. We want to make YouTube TV flexible for you, so members can pause or cancel at any time.

As the streaming industry continues to evolve, we are working to build new flexible models for YouTube TV users, so we can continue to provide a robust and innovative experience for everyone in your household without the commitments of traditional TV.

Thank you for being a part of the YouTube TV family. We’ll continue to work to make it the best place to watch live TV, how you want it.

Christian Oestlien, Vice President of Product Management, YouTube TV

Source: YouTube Blog


Connected Sheets now generally available, replacing Sheets data connector

What’s changing

We’re making Connected Sheets generally available to G Suite Enterprise and G Suite Enterprise for Education customers. Connected Sheets, previously available in beta, helps you analyze BigQuery data in Google Sheets. It replaces the Sheets data connector, a more limited way to connect Sheets and BigQuery.

Read more about how you can use it to analyze petabytes of data with Google Sheets in our Cloud Blog post.

Who’s impacted

End users

Why you’d use it

Connected Sheets links Google Sheets to BigQuery, so you can analyze large BigQuery datasets using familiar spreadsheet tools and operations. This means users don’t need to know SQL and can generate insights with basic spreadsheet operations like formulas, charts, and pivot tables.

This makes it easier for more members of your organization to understand, collaborate on, and generate insights from data. Specifically, it can help subject matter experts work with data without relying on analysts, who may be less familiar with the context of the data or be overloaded with a wide range of data requests.

Connected Sheets includes all the capabilities of the legacy Sheets data connector with additional enhancements. Enhancements include the ability to analyze and visualize data in Sheets without needing to first extract the data, being able to see a preview of data through a Sheet, and scheduling data refreshes to avoid analyzing stale data.

Learn more about how you can analyze petabytes of data with Google Sheets on the Cloud Blog.

Getting started


  • Admins: No action required; Connected Sheets will be ON by default. To use it, you must have set up BigQuery for your organization, and users must have access to tables or views in BigQuery. Use our Help Center to learn more about how to set up Connected Sheets.
  • End users: This feature will be ON by default. To use it, you must have access to tables or views in BigQuery. Use our Help Center to learn more about Connected Sheets.

Rollout pace


  • Rapid and Scheduled Release domains: Extended rollout (potentially more than 15 days for feature visibility) starting on June 30, 2020. We expect rollout to complete within a month. 

Availability 


  • Available to G Suite Enterprise and G Suite Enterprise for Education customers* 
  • Not available to G Suite Basic, G Suite Business, G Suite for Education, and G Suite for Nonprofits customers 

*Availability in alternative packages is variable and based on your services.