Tag Archives: c#

An update on Memory Safety in Chrome

Security is a cat-and-mouse game. As attackers innovate, browsers always have to mount new defenses to stay ahead, and Chrome has invested in an ever-stronger multi-process architecture built on sandboxing and site isolation. Combined with fuzzing, these are still our primary lines of defense, but they are reaching their limits, and we can no longer rely solely on this strategy to defeat in-the-wild attacks.

Last year, we showed that more than 70% of our severe security bugs are memory safety problems. That is, mistakes with pointers in the C or C++ languages which cause memory to be misinterpreted.

This sounds like a problem! And, certainly, memory safety is an issue which needs to be taken seriously by the global software engineering community. Yet it’s also an opportunity because many bugs have the same sorts of root-causes, meaning we may be able to squash a high proportion of our bugs in one step.

Chrome has been exploring three broad avenues to seize this opportunity:

  1. Make C++ safer through compile-time checks that pointers are correct.
  2. Make C++ safer through runtime checks that pointers are correct.
  3. Investigate use of a memory safe language for parts of our codebase.

“Compile-time checks” mean that safety is guaranteed during the Chrome build process, before Chrome even gets to your device. “Runtime” means we do checks whilst Chrome is running on your device.

Runtime checks have a performance cost. Checking the correctness of a pointer is an infinitesimal cost in memory and CPU time. But with millions of pointers, it adds up. And since Chrome performance is important to billions of users, many of whom are using low-power mobile devices without much memory, an increase in these checks would result in a slower web.

Ideally we’d choose option 1 - make C++ safer, at compile time. Unfortunately, the language just isn’t designed that way. You can learn more about the investigation we've done in this area in Borrowing Trouble: The Difficulties Of A C++ Borrow-Checker that we're also publishing today.

So, we’re mostly left with options 2 and 3 - make C++ safer (but slower!) or start to use a different language. Chrome Security is experimenting with both of these approaches.

You’ll see major investments in C++ safety solutions - such as MiraclePtr and ABSL/STL hardened modes. In each case, we hope to eliminate a sizable fraction of our exploitable security bugs, but we also expect some performance penalty. For example, MiraclePtr prevents use-after-free bugs by quarantining memory that may still be referenced. On many mobile devices, memory is very precious and it’s hard to spare some for a quarantine. Nevertheless, MiraclePtr stands a chance of eliminating over 50% of the use-after-free bugs in the browser process - an enormous win for Chrome security, right now.
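
To make the quarantine idea concrete, here is a minimal conceptual sketch, assuming a hypothetical smart-pointer wrapper and hypothetical allocator hooks; it is not Chrome's actual MiraclePtr/BackupRefPtr code, which lives in the allocator rather than in a standalone class.

// Conceptual sketch only. acquire_ref/release_ref stand in for real
// allocator support that tracks a per-allocation reference count; while the
// count is non-zero, a free()d allocation stays quarantined (poisoned and
// not reused) instead of being handed out again.
void acquire_ref(void* allocation) {
  // Real support would increment a count kept in allocator metadata.
}
void release_ref(void* allocation) {
  // Real support would decrement the count and release the quarantined
  // memory only once it reaches zero.
}

template <typename T>
class GuardedPtr {
 public:
  explicit GuardedPtr(T* p) : ptr_(p) { if (ptr_) acquire_ref(ptr_); }
  ~GuardedPtr() { if (ptr_) release_ref(ptr_); }
  T* get() const { return ptr_; }
  T& operator*() const { return *ptr_; }
  T* operator->() const { return ptr_; }

 private:
  T* ptr_;
};

Dereferencing a dangling GuardedPtr is still a bug, but because the memory it points at has not been reused, the bug is far harder to exploit, which is the property MiraclePtr is after.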

In parallel, we’ll be exploring whether we can use a memory safe language for parts of Chrome in the future. The leading contender is Rust, invented by our friends at Mozilla. This is (largely) compile-time safe; that is, the Rust compiler spots mistakes with pointers before the code even gets to your device, and thus there’s no performance penalty. Yet there are open questions about whether we can make C++ and Rust work well enough together. Even if we started writing new large components in Rust tomorrow, we’d be unlikely to eliminate a significant proportion of security vulnerabilities for many years. And can we make the language boundary clean enough that we can write parts of existing components in Rust? We don’t know yet. We’ve started to land limited, non-user-facing Rust experiments in the Chromium source code tree, but we’re not yet using it in production versions of Chrome - we remain in an experimental phase.

That’s why we’re pursuing both strategies in parallel. Watch this space for updates on our adventures in making C++ safer, and efforts to experiment with a new language in Chrome.

Rust/C++ interop in the Android Platform

One of the main challenges of evaluating Rust for use within the Android platform was ensuring we could provide sufficient interoperability with our existing codebase. If Rust is to meet its goals of improving security, stability, and quality Android-wide, we need to be able to use Rust anywhere in the codebase that native code is required. To accomplish this, we need to provide the majority of functionality platform developers use. As we discussed previously, we have too much C++ to consider ignoring it, rewriting all of it is infeasible, and rewriting older code would likely be counterproductive as the bugs in that code have largely been fixed. This means interoperability is the most practical way forward.

Before introducing Rust into the Android Open Source Project (AOSP), we needed to demonstrate that Rust interoperability with C and C++ is sufficient for practical, convenient, and safe use within Android. Adding a new language has costs; we needed to demonstrate that Rust would be able to scale across the codebase and meet its potential in order to justify those costs. This post will cover the analysis we did more than a year ago while we evaluated Rust for use in Android. We also present a follow-up analysis with some insights into how the original analysis has held up as Android projects have adopted Rust.

Language interoperability in Android

Existing language interoperability in Android focuses on well-defined foreign-function interface (FFI) boundaries, which is where code written in one programming language calls into code written in a different language. Rust support will likewise focus on the FFI boundary, as this is consistent with how AOSP projects are developed, how code is shared, and how dependencies are managed. For Rust interoperability with C, the C application binary interface (ABI) is already sufficient.

Interoperability with C++ is more challenging and is the focus of this post. While both Rust and C++ support using the C ABI, it is not sufficient for idiomatic usage of either language. Simply enumerating the features of each language results in an unsurprising conclusion: many concepts are not easily translatable, nor do we necessarily want them to be. After all, we’re introducing Rust because many features and characteristics of C++ make it difficult to write safe and correct code. Therefore, our goal is not to consider all language features, but rather to analyze how Android uses C++ and ensure that interop is convenient for the vast majority of our use cases.

We analyzed code and interfaces in the Android platform specifically, not codebases in general. While this means our specific conclusions may not be accurate for other codebases, we hope the methodology can help others to make a more informed decision about introducing Rust into their large codebase. Our colleagues on the Chrome browser team have done a similar analysis, which you can find here.

This analysis was not originally intended to be published outside of Google: our goal was to make a data-driven decision on whether or not Rust was a good choice for systems development in Android. While the analysis is intended to be accurate and actionable, it was never intended to be comprehensive, and we’ve pointed out a couple of areas where it could be more complete. However, we also note that initial investigations into these areas showed that they would not significantly impact the results, which is why we decided to not invest the additional effort.

Methodology

Exported functions from Rust and C++ libraries are where we consider interop to be essential. Our goals are simple:

  • Rust must be able to call functions from C++ libraries and vice versa.
  • FFI should require a minimum of boilerplate.
  • FFI should not require deep expertise.

While making Rust functions callable from C++ is a goal, this analysis focuses on making C++ functions available to Rust so that new Rust code can be added while taking advantage of existing implementations in C++. To that end, we look at exported C++ functions and consider existing and planned compatibility with Rust via the C ABI and compatibility libraries. Types are extracted by running objdump on shared libraries to find external C++ functions they use[1] and running c++filt to parse the C++ types. This gives functions and their arguments. It does not consider return values, but a preliminary analysis[2] of those revealed that they would not significantly affect the results.

We then classify each of these types into one of the following buckets:

Supported by bindgen

These are generally simple types involving primitives (including pointers and references to them). For these types, Rust’s existing FFI will handle them correctly, and Android’s build system will auto-generate the bindings.

Supported by cxx compat crate

These are handled by the cxx crate. This currently includes std::string, std::vector, and C++ methods (including pointers/references to these types). Users simply have to define the types and functions they want to share across languages and cxx will generate the code to do that safely.

Native support

These types are not directly supported, but the interfaces that use them have been manually reworked to add Rust support. Specifically, this includes types used by AIDL and protobufs.

We have also implemented a native interface for StatsD as the existing C++ interface relies on method overloading, which is not well supported by bindgen and cxx[3]. Usage of this system does not show up in the analysis because the C++ API does not use any unique types.

Potential addition to cxx

This category currently consists of common data structures such as std::optional and std::chrono::duration, as well as custom string and vector implementations.

These can either be supported natively by a future contribution to cxx, or by using its ExternType facilities. We have only included types in this category that we believe are relatively straightforward to implement and have a reasonable chance of being accepted into the cxx project.

We don't need/intend to support

Some types are exposed in today’s C++ APIs that are either an implicit part of the API, not an API we expect to want to use from Rust, or are language specific. Examples of types we do not intend to support include:

  • Mutexes - we expect that locking will take place in one language or the other, rather than needing to pass mutexes between languages, as per our coarse-grained philosophy.
  • native_handle - this is a JNI interface type, so it is inappropriate for use in Rust/C++ communication.
  • std::locale& - Android uses a separate locale system from C++ locales. This type primarily appears in output due to e.g., cout usage, which would be inappropriate to use in Rust.

Overall, this category represents types that we do not believe a Rust developer should be using.

HIDL

Android is in the process of deprecating HIDL and migrating to AIDL for HALs for new services. We’re also migrating some existing implementations to stable AIDL. Our current plan is to not support HIDL, preferring to migrate to stable AIDL instead. These types thus currently fall into the “We don't need/intend to support” bucket above, but we break them out to be more specific. If there is sufficient demand for HIDL support, we may revisit this decision later.

Other

This contains all types that do not fit into any of the above buckets. It is currently mostly std::string being passed by value, which is not supported by cxx.

Top C++ libraries

One of the primary reasons for supporting interop is to allow reuse of existing code. With this in mind, we determined the most commonly used C++ libraries in Android: liblog, libbase, libutils, libcutils, libhidlbase, libbinder, libhardware, libz, libcrypto, and libui. We then analyzed all of the external C++ functions used by these libraries and their arguments to determine how well they would interoperate with Rust.

Overall, 81% of types are in the first three categories (which we currently fully support) and 87% are in the first four categories (which includes those we believe we can easily support). Almost all of the remaining types are those we believe we do not need to support.

Mainline modules

In addition to analyzing popular C++ libraries, we also examined Mainline modules. Supporting this context is critical as Android is migrating some of its core functionality to Mainline, including much of the native code we hope to augment with Rust. Additionally, their modularity presents an opportunity for interop support.

We analyzed 64 binaries and libraries in 21 modules. For each analyzed library we examined the C++ functions it used and the types of their arguments to determine how well they would interoperate with Rust, in the same way we did above for the top 10 libraries.

Here 88% of types are in the first three categories and 90% in the first four, with almost all of the remaining being types we do not need to handle.

Analysis of Rust/C++ Interop in AOSP

With almost a year of Rust development in AOSP behind us, and more than a hundred thousand lines of code written in Rust, we can now examine how our original analysis has held up based on how C/C++ code is currently called from Rust in AOSP.[4]

The results largely match what we expected from our analysis with bindgen handling the majority of interop needs. Extensive use of AIDL by the new Keystore2 service results in the primary difference between our original analysis and actual Rust usage in the “Native Support” category.

A few current examples of interop are:

  • Cxx in Bluetooth - While Rust is intended to be the primary language for Bluetooth, migrating from the existing C/C++ implementation will happen in stages. Using cxx lets the Bluetooth team lean on the existing C++ support to incrementally migrate their service, continuing to serve legacy protocols like HIDL until they are phased out.
  • AIDL in keystore - Keystore implements AIDL services and interacts with apps and other services over AIDL. Providing this functionality would be difficult to support with tools like cxx or bindgen, but the native AIDL support is simple and ergonomic to use.
  • Manually-written wrappers in profcollectd - While our goal is to provide seamless interop for most use cases, we also want to demonstrate that, even when auto-generated interop solutions are not an option, manually creating them can be simple and straightforward. Profcollectd is a small daemon that only exists on non-production engineering builds. Instead of using cxx it uses some small manually-written C wrappers around C++ libraries that it then passes to bindgen.

Conclusion

Bindgen and cxx provide the vast majority of Rust/C++ interoperability needed by Android. For some of the exceptions, such as AIDL, the native version provides convenient interop between Rust and other languages. Manually written wrappers can be used to handle the few remaining types and functions not supported by other options as well as to create ergonomic Rust APIs. Overall, we believe interoperability between Rust and C++ is already largely sufficient for convenient use of Rust within Android.

If you are considering how Rust could integrate into your C++ project, we recommend doing a similar analysis of your codebase. When addressing interop gaps, we recommend that you consider upstreaming support to existing compat libraries like cxx.

Acknowledgements

Our first attempt at quantifying Rust/C++ interop involved analyzing the potential mismatches between the languages. This led to a lot of interesting information, but was difficult to draw actionable conclusions from. Rather than enumerating all the potential places where interop could occur, Stephen Hines suggested that we instead consider how code is currently shared between C/C++ projects as a reasonable proxy for where we’ll also likely want interop for Rust. This provided us with actionable information that was straightforward to prioritize and implement. Looking back, the data from our real-world Rust usage has reinforced that the initial methodology was sound. Thanks Stephen!

Also, thanks to:

  • Andrei Homescu and Stephen Crane for contributing AIDL support to AOSP.
  • Ivan Lozano for contributing protobuf support to AOSP.
  • David Tolnay for publishing cxx and accepting our contributions.
  • The many authors and contributors to bindgen.
  • Jeff Vander Stoep and Adrian Taylor for contributions to this post.


  1. We used undefined symbols of function type as reported by objdump to perform this analysis. This means that any header-only functions will be absent from our analysis, and internal (non-API) functions which are called by header-only functions may appear in it. 

  2. We extracted return values by parsing DWARF symbols, which give the return types of functions. 

  3. Even without automated binding generation, manually implementing the bindings is straightforward. 

  4. In the case of handwritten C/C++ wrappers, we analyzed the functions they call, not the wrappers themselves. For all uses of our native AIDL library, we analyzed the types used in the C++ version of the library. 

Detecting Memory Corruption Bugs With HWASan

Posted by Evgenii Stepanov, Staff Software Engineer, Dynamic Tools

Native code in memory-unsafe languages like C and C++ is often vulnerable to memory corruption bugs. Our data shows that issues like use-after-free, double-free, and heap buffer overflows generally constitute more than 65% of High & Critical security bugs in Chrome and Android.

In previous years our memory bug detection efforts were focused on Address Sanitizer (ASan). ASan catches these errors but causes your app to use 2x-3x extra memory and to run slower.

To better tackle these problems we’ve developed Hardware-Assisted Address Sanitizer (HWASan). HWASan typically only requires 15% more memory. It’s also a lot faster than ASan. HWASan’s performance makes it usable not only for unit testing, but also for interactive human-driven testing. We use this to find memory issues in the Android OS itself, and now we've made it easy for app developers to use it too. HWASan is fast enough that some Android developers use it on their development devices for everyday tasks.

Under the hood

HWASan is based on memory tagging and depends on the Top Byte Ignore feature present in all 64-bit ARM CPUs and the associated kernel support. Every memory allocation is assigned a random 8-bit tag that is stored in the most significant byte (MSB) of the address, but ignored by the CPU. As a result, this tagged pointer can be used in place of a regular pointer without any code changes.

Under the hood, HWASan uses shadow memory - a sparse map that assigns a tag value to each 16 byte block of program memory. Compile time code instrumentation is used to insert checks that compare pointer and memory tags for every memory access, and raise an error if they do not match.

This approach allows us to detect both use-after-free and buffer-overflow types of bugs. The memory tag in the shadow is changed to a random value during allocation and deallocation. As a result, attempting to access deallocated memory with a dangling pointer will almost certainly fail due to a tag mismatch. The same is true for an attempt to access memory outside of the allocated region, which is very likely to have a different tag. Stack and global variables are similarly protected.
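
To make the check concrete, here is a conceptual sketch in C++, assuming a hypothetical memory_tag_for() lookup in place of the real shadow-memory walk; it is not the actual HWASan runtime, which is emitted by the compiler instrumentation described above.

#include <cstdint>
#include <cstdlib>

constexpr int kTagShift = 56;  // the tag occupies the top byte of a 64-bit address

std::uint8_t pointer_tag(const void* p) {
  return static_cast<std::uint8_t>(reinterpret_cast<std::uintptr_t>(p) >> kTagShift);
}

// Hypothetical stand-in for the shadow-memory lookup that maps each 16-byte
// granule of program memory to the tag assigned at (de)allocation time.
std::uint8_t memory_tag_for(const void* p);

void check_access(const void* p) {
  if (pointer_tag(p) != memory_tag_for(p)) {
    std::abort();  // tag mismatch: HWASan reports the access and terminates
  }
}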

Use-after-free bug detection with memory tagging.

This approach is non-deterministic: because of the limited number of possible tags, an invalid memory access has a 1-in-256 chance (approximately 0.4%) of passing undetected. We have not observed this to be a problem in practice, but, because tags are chosen randomly, running the program a second time is very likely to find any bugs that the first run missed (the chance of the same invalid access escaping detection twice is roughly 1 in 65,536).

An advantage of HWASan over ASan is its ability to find bugs that happen far from their origination point - for example, a use-after-free where the memory is accessed long after it has been deallocated, or a buffer overflow with a large offset. This is not the case with ASan, which uses redzones around memory allocations and a quarantine for the temporary storage of recently deallocated memory blocks. Both redzones and the quarantine are of limited size, and error detection is unlikely beyond that. HWASan uses a different approach that does not have these limitations.

Usage

When a bug is discovered the process is terminated and a crash dump is printed to logcat. The “Abort message” field contains a HWASan report, which shows the access type (read or write), access address, thread id and the stack trace of the bad memory access. This is followed by a stack trace for the original allocation, and, for use-after-free bugs, a stack trace showing where the deallocation took place. Advanced users can find extra debugging information below this, including a map of memory tags for nearby locations.

signal 6 (SIGABRT), code -1 (SI_QUEUE), fault addr --------
Abort message: '==21586==ERROR: HWAddressSanitizer: tag-mismatch on address 0x0042a0807af0 at pc 0x007b23b8786c
WRITE of size 1 at 0x0042a0807af0 tags: db/19 (ptr/mem) in thread T0
    #0 0x7b23b87868 (/data/app/com.example.myapp/lib/arm64/native.so+0x2868)
    #1 0x7b8f1e4ccc (/apex/com.android.art/lib64/libart.so+0x198ccc)
[...]

0x0042a0807af0 is located 0 bytes to the right of 16-byte region [0x0042a0807ae0,0x0042a0807af0)
allocated here:
    #0 0x7b92a322bc (/path/to/libclang_rt.hwasan-aarch64-android.so+0x212bc)
    #1 0x7b23b87840 (/data/app/com.example.myapp/lib/arm64/native.so+0x2840)
[...]

An example snippet from a HWASan crash report.

Google uses HWASan extensively in Android development, and now you can too. Find out more -- including the details of how to rebuild your app for use with HWASan -- at https://developer.android.com/ndk/guides/hwasan. Prebuilt HWASan system images are available on the AOSP build server (or you can build your own). They can be easily flashed onto a compatible device using the recently announced web flash tool.

Introducing Oboe: A C++ library for low latency audio

Posted by Don Turner, Developer Advocate, Android Audio Framework

This week we released the first production-ready version of Oboe - a C++ library for building real-time audio apps. Oboe provides the lowest possible audio latency across the widest range of Android devices, as well as several other benefits.

Single API

Oboe takes advantage of the improved performance and features of AAudio on Oreo MR1 (API 27+) whilst maintaining backward compatibility (using OpenSL ES) on API 16+. It's kind of like AndroidX for native audio.

Diagram showing the underlying audio API which Oboe will use

Less code to write and maintain

Using Oboe you can create an audio stream in just 3 lines of code (vs 50+ lines in OpenSL ES):

AudioStreamBuilder builder;
AudioStream *stream = nullptr;
Result result = builder.openStream(&stream);
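
For a slightly fuller picture, here's a hedged sketch of requesting a low-latency output stream and writing a short buffer of silence to it; the property setters and stream calls follow the public Oboe headers as we understand them, so treat it as illustrative rather than canonical:

#include <oboe/Oboe.h>

oboe::Result playTenMsOfSilence() {
  oboe::AudioStreamBuilder builder;
  builder.setDirection(oboe::Direction::Output)
      ->setPerformanceMode(oboe::PerformanceMode::LowLatency)
      ->setSharingMode(oboe::SharingMode::Exclusive)
      ->setFormat(oboe::AudioFormat::Float)
      ->setChannelCount(1);

  oboe::AudioStream *stream = nullptr;
  oboe::Result result = builder.openStream(&stream);
  if (result != oboe::Result::OK) return result;

  float silence[480] = {0.0f};  // roughly 10 ms of mono audio at 48 kHz
  stream->requestStart();
  stream->write(silence, 480, 0 /* timeout in nanoseconds */);
  return stream->close();
}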

Other benefits

  • Convenient C++ API (uses the C++11 standard)
  • Fast release process: because Oboe is supplied as a source library, bug fixes can be rolled out in days, quite a bit faster than the Android platform release cycle
  • Less guesswork: Provides workarounds for known audio bugs and has sensible default behaviour for stream properties, such as sample rate and audio data formats
  • Open source and maintained by Google engineers (although we welcome outside contributions)

Getting started

Take a look at the short video introduction:

Check out the documentation, code samples and API reference. There's even a codelab which shows you how to build a rhythm-based game.

If you have any issues, please file them here; we'd love to hear how you get on.

Making Great Mobile Games with Firebase

So much goes into building and maintaining a mobile game. Let’s say you want to ship it with a level builder for sharing content with other players and, looking forward, you want to roll out new content and unlockables linked with player behavior. Of course, you also need players to be able to easily sign into your soon-to-be hit game.

With a DIY approach, you’d be faced with having to build user management, data storage, server side logic, and more. This will take a lot of your time, and importantly, it would take critical resources away from what you really want to do: build that amazing new mobile game!

Our Firebase SDKs for Unity and C++ provide you with the tools you need to add these features and more to your game with ease. Plus, to help you better understand how Firebase can help you build your next chart-topper, we’ve built a sample game in Unity and open sourced it: MechaHamster. Check it out on Google Play or download the project from GitHub to see how easy it is to integrate Firebase into your game.

Before you dive into the code for MechaHamster, here’s a rundown of the Firebase products that can help your game be successful.

Analytics

One of the best tools you have to maintain a high-performing game is your analytics. With Google Analytics for Firebase, you can see where your players might be struggling and make adjustments as needed. Analytics also integrates with AdWords and other major ad networks to maximize your campaign performance. If you monetize your game using AdMob, you can link your two accounts and see the lifetime value (LTV) of your players, from in-game purchases and AdMob, right from your Analytics console. And with StreamView, you can see how players are interacting with your game in real time.

Test Lab for Android - Game Loop Test

Before releasing updates to your game, you’ll want to make sure it works correctly. However, manual testing can be time consuming when faced with a large variety of target devices. To help solve this, we recently launched Firebase Test Lab for Android Game Loop Test at Google I/O. If you add a demo mode to your game, Test Lab will automatically verify your game is working on a wide range of devices. You can read more in our deep dive blog post here.

Authentication

Another thing you’ll want to be sure to take care of before launch is easy sign-in, so your users can start playing as quickly as possible. Firebase Authentication can help by handling all sign-in and authentication, from simple email + password logins to support for common identity providers like Google, Facebook, Twitter, and GitHub. Just announced recently at I/O, Firebase also now supports phone number authentication. And Firebase Authentication shares state cross-device, so your users can pick up where they left off, no matter what platforms they’re using.

Remote Config

As more players start using your game, you realize that there are a few spots that are frustrating for your audience. You may even see churn rates start to rise, so you decide that you need to push some adjustments. With Firebase Remote Config, you can change values in the console and push them out to players. Some players having trouble navigating levels? You can adjust the difficulty and update remotely. Remote Config can even benefit your development cycle; team members can tweak and test parameters without having to make new builds.

Realtime Database

Now that you have a robust player community, you’re probably starting to see a bunch of great player-built levels. With Firebase Realtime Database, you can store player data and sync it in real-time, meaning that the level builder you’ve built can store and share data easily with other players. You don't need your own server and it’s optimized for offline use. Plus, Realtime Database integrates with Firebase Auth for secure access to user specific data.

Cloud Messaging & Dynamic Links

A few months go by and your game is thriving, with high engagement and an active community. You’re ready to release your next wave of new content, but how can you efficiently get the word out to your users? Firebase Cloud Messaging lets you target messages to player segments, without any coding required. And Firebase Dynamic Links allow your users to share this new content — or an invitation to your game — with other players. Dynamic Links survive the app install process, so a new player can install your app and then dive right into the piece of content that was shared with him or her.

At Firebase, our mission is to help mobile developers build better apps and grow successful businesses. When it comes to games, that means taking care of the boring stuff, so you can focus on what matters — making a great game. Our mobile SDKs for C++ and Unity are available now at firebase.google.com/games.

By Darin Hilton, Art Director

FORTIFY in Android

Posted by George Burgess, Software Engineer

FORTIFY is an important security feature that's been available in Android since mid-2012. After migrating from GCC to clang as the default C/C++ compiler early last year, we invested a lot of time and effort to ensure that FORTIFY on clang is of comparable quality. To accomplish this, we redesigned how some key FORTIFY features worked, which we'll discuss below.

Before we get into some of the details of our new FORTIFY, let's go through a brief overview of what FORTIFY does, and how it's used.

What is FORTIFY?


FORTIFY is a set of extensions to the C standard library that tries to catch the incorrect use of standard functions, such as memset, sprintf, open, and others. It has three primary features:

  • If FORTIFY detects a bad call to a standard library function at compile-time, it won't allow your code to compile until the bug is fixed.
  • If FORTIFY doesn't have enough information, or if the code is definitely safe, FORTIFY compiles away into nothing. This means that FORTIFY has 0 runtime overhead when used in a context where it can't find a bug.
  • Otherwise, FORTIFY adds checks to dynamically determine if the questionable code is buggy. If it detects bugs, FORTIFY will print out some debugging information and abort the program.

Consider the following example, which is a bug that FORTIFY caught in real-world code:

struct Foo {
    int val;
    struct Foo *next;
};
void initFoo(struct Foo *f) {
    memset(&f, 0, sizeof(struct Foo));
}
FORTIFY caught that we erroneously passed &f as the first argument to memset, instead of f. Ordinarily, this kind of bug can be difficult to track down: it manifests as potentially writing 8 extra bytes of 0s into a random part of your stack, and not actually doing anything to *f. So, depending on your compiler optimization settings, how initFoo is used, and your project's testing standards, this could slip by unnoticed for quite a while. With FORTIFY, you get a compile-time error that looks like:

/path/to/file.c: call to unavailable function 'memset': memset called with size bigger than buffer
    memset(&f, 0, sizeof(struct Foo));
    ^~~~~~
For an example of how run-time checks work, consider the following function:

// 2147483648 == pow(2, 31). Use sizeof so we get the nul terminator,
// as well.
#define MAX_INT_STR_SIZE sizeof("2147483648")
struct IntAsStr {
    char asStr[MAX_INT_STR_SIZE];
    int num;
};
void initAsStr(struct IntAsStr *ias) {
    sprintf(ias->asStr, "%d", ias->num);
}
This code works fine for all positive numbers. However, when you pass in an IntAsStr with num <= -1000000000, the sprintf will write MAX_INT_STR_SIZE+1 bytes to ias->asStr. Without FORTIFY, this off-by-one error (that ends up clearing one of the bytes in num) may go silently unnoticed. With it, the program prints out a stack trace, a memory map, and will abort with a core dump.

FORTIFY also performs a handful of other checks, such as ensuring calls to open have the proper arguments, but it's primarily used for catching memory-related errors like the ones mentioned above.
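
As a hedged illustration of that open check (the exact diagnostic text varies by C library): with FORTIFY enabled, a call that passes O_CREAT but forgets the mode argument is rejected at compile time.

#include <fcntl.h>

int make_config_file(const char *path) {
    // With FORTIFY this fails to compile: O_CREAT requires a third (mode)
    // argument, e.g. open(path, O_CREAT | O_WRONLY, 0644).
    return open(path, O_CREAT | O_WRONLY);
}
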
However, FORTIFY can't catch every memory-related bug that exists. For example, consider the following code:

__attribute__((noinline)) // Tell the compiler to never inline this function.
inline void intToStr(int i, char *asStr) { sprintf(asStr, "%d", i); }


char *intToDupedStr(int i) {
    const int MAX_INT_STR_SIZE = sizeof("2147483648");
    char buf[MAX_INT_STR_SIZE];
    intToStr(i, buf);
    return strdup(buf);
}
Because FORTIFY determines the size of a buffer based on the buffer's type and—if visible—its allocation site, it can't catch this bug. In this case, FORTIFY gives up because:

  • the pointer is not a type with a pointee size we can determine with confidence, because char * can point to a variable number of bytes
  • FORTIFY can't see where the pointer was allocated, because asStr could point to anything.

If you're wondering why we have a noinline attribute there, it's because FORTIFY may be able to catch this bug if intToStr gets inlined into intToDupedStr. This is because it would let the compiler see that asStr points to the same memory as buf, which is a region of sizeof(buf) bytes of memory.

How FORTIFY works


FORTIFY works by intercepting all direct calls to standard library functions at compile-time, and redirecting those calls to special FORTIFY'ed versions of said library functions. Each library function is composed of parts that emit run-time diagnostics, and—if applicable—parts that emit compile-time diagnostics. Here is a simplified example of the run-time parts of a FORTIFY'ed memset (taken from string.h). An actual FORTIFY implementation may include a few extra optimizations or checks.

_FORTIFY_FUNCTION
inline void *memset(void *dest, int ch, size_t count) {
    size_t dest_size = __builtin_object_size(dest, 0);
    if (dest_size == (size_t)-1)
        return __memset_real(dest, ch, count);
    return __memset_chk(dest, ch, count, dest_size);
}
In this example:

  • _FORTIFY_FUNCTION expands to a handful of compiler-specific attributes to make all direct calls to memset call this special wrapper.
  • __memset_real is used to bypass FORTIFY to call the "regular" memset function.
  • __memset_chk is the special FORTIFY'ed memset. If count > dest_size, __memset_chk aborts the program. Otherwise, it simply calls through to __memset_real.
  • __builtin_object_size is where the magic happens: it's a lot like sizeof, but instead of telling you the size of a type, it tries to figure out how many bytes exist at the given pointer during compilation. If it fails, it hands back (size_t)-1.

The __builtin_object_size might seem sketchy. After all, how can the compiler figure out how many bytes exist at an unknown pointer? Well... It can't. :) This is why _FORTIFY_FUNCTION requires inlining for all of these functions: inlining the memset call might make an allocation that the pointer points to (e.g. a local variable, result of calling malloc, …) visible. If it does, we can often determine an accurate result for __builtin_object_size.
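
To make the builtin concrete, here is a small, hedged example; the second argument selects how conservative the computed size should be (0 is the most permissive mode), and the function names are made up for illustration.

#include <string.h>

void zero_local_buffer(void) {
    char buf[8];
    // The allocation is visible here, so __builtin_object_size(buf, 0) == 8
    // and the FORTIFY'ed wrapper can fully check the call; a memset(buf, 0, 16)
    // here would be rejected at compile time or aborted at run time.
    memset(buf, 0, sizeof(buf));
}

size_t bytes_visible(char *unknown) {
    // No allocation is visible through this pointer, so the builtin gives up,
    // returns (size_t)-1, and FORTIFY falls back to the unchecked call.
    return __builtin_object_size(unknown, 0);
}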

The compile-time diagnostic bits are heavily centered around __builtin_object_size, as well. Essentially, if your compiler has a way to emit diagnostics if an expression can be proven to be true, then you can add that to the wrapper. This is possible on both GCC and clang with compiler-specific attributes, so adding diagnostics is as simple as tacking on the correct attributes.

Why not Sanitize?


If you're familiar with C/C++ memory checking tools, you may be wondering why FORTIFY is useful when things like clang's AddressSanitizer exist. The sanitizers are excellent for catching and tracking down memory-related errors, and can catch many issues that FORTIFY can't, but we recommend FORTIFY for two reasons:

  • In addition to checking your code for bugs while it's running, FORTIFY can emit compile-time errors for code that's obviously incorrect, whereas the sanitizers only abort your program when a problem occurs. Since it's generally accepted that catching issues as early as possible is good, we'd like to give compile-time errors when we can.
  • FORTIFY is lightweight enough to enable in production. Enabling it on parts of our own code showed a maximum CPU performance degradation of ~1.5% (average 0.1%), virtually no memory overhead, and a very small increase in binary size. On the other hand, sanitizers can slow code down by well over 2x, and often eat up a lot of memory and storage space.

Because of this, we enable FORTIFY in production builds of Android to mitigate the amount of damage that some bugs can cause. In particular, FORTIFY can turn potential remote code execution bugs into bugs that simply abort the broken application. Again, sanitizers are capable of detecting more bugs than FORTIFY, so we absolutely encourage their use in development/debugging builds. But the cost of running them for binaries shipped to users is simply way too high to leave them enabled for production builds.

FORTIFY redesign


FORTIFY's initial implementation used a handful of tricks from the world of C89, with a few GCC-specific attributes and language extensions sprinkled in. Because Clang cannot emulate how GCC works to fully support the original FORTIFY implementation, we redesigned large parts of it to make it as effective as possible on clang. In particular, our clang-style FORTIFY implementation makes use of clang-specific attributes and language extensions, as well as some function overloading (clang will happily apply C++ overloading rules to your C functions if you use its overloadable attribute).
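
For illustration, here is a minimal sketch of that attribute, compiled as C with clang; the function names are made up and not from the Android headers.

#include <stdio.h>

// Two C functions share one name; clang resolves each call like a C++
// overload, based on the argument type.
static void __attribute__((overloadable)) log_value(int x) {
    printf("int: %d\n", x);
}

static void __attribute__((overloadable)) log_value(double x) {
    printf("double: %f\n", x);
}

int main(void) {
    log_value(42);    // picks the int overload
    log_value(3.5);   // picks the double overload
    return 0;
}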

We tested hundreds of millions of lines of code with this new FORTIFY, including all of Android, all of Chrome OS (which needed its own reimplementation of FORTIFY), our internal codebase, and many popular open source projects.

This testing revealed that our approach broke existing code in a variety of exciting ways, like:
template <typename OpenFunc>
bool writeOutputFile(OpenFunc &&openFile, const char *data, size_t len) {}

bool writeOutputFile(const char *data, int len) {
    // Error: Can’t deduce type for the newly-overloaded `open` function.
    return writeOutputFile(&::open, data, len);
}
and
struct Foo { void *(*fn)(void *, const void *, size_t); };
void runFoo(struct Foo f) {
    // Error: Which overload of memcpy do we want to take the address of?
    if (f.fn == memcpy) {
        return;
    }
    // [snip]
}


There was also an open-source project that tried to parse system headers like stdio.h in order to determine what functions it has. Adding the clang FORTIFY bits greatly confused the parser, which caused its build to fail.

Despite these large changes, we saw a fairly low amount of breakage. For example, when compiling Chrome OS, fewer than 2% of our packages saw compile-time errors, all of which were trivial fixes in a couple of files. And while that may be "good enough," it is not ideal, so we refined our approach to further reduce incompatibilities. Some of these iterations even required changing how clang worked, but the clang+LLVM community was very helpful and receptive to our proposed adjustments and additions.


We recently pushed it to AOSP, and starting in Android O, the Android platform will be protected by clang FORTIFY. We're still putting some finishing touches on the NDK, so developers should expect to see our upgraded FORTIFY implementation there in the near future. In addition, as we alluded to above, Chrome OS also has a similar FORTIFY implementation now, and we hope to work with the open-source community in the coming months to get a similar implementation* into glibc, the GNU C library.

* For those who are interested, this will look very different than the Chrome OS patch. Clang recently gained an attribute called diagnose_if, which ends up allowing for a much cleaner FORTIFY implementation than our original approach for glibc, and produces far prettier errors/warnings than we currently can. We expect to have a similar diagnose_if-powered implementation in a later version of Android.
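
As a small, hedged sketch of what diagnose_if enables (the function and the message are invented for illustration, not taken from the actual headers): the diagnostic fires at compile time whenever the condition on the call's arguments can be evaluated to true at the call site.

int checked_divide(int numerator, int denominator)
    __attribute__((diagnose_if(denominator == 0,
                               "dividing by a constant zero", "error")));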

Focusing our Google Play games services efforts

Posted By James Smith, Product Manager, Google Play

In order to help developers make great games and build their businesses, we offer Google Play Games Services (GPGS). GPGS provides powerful tools to build, analyze and retain your audience and optimize your game. After listening to developer feedback and examining usage, we have decided to remove some of the features so we can focus on making our offering more useful.

In December, we announced the end of support for the creation of new iOS accounts given the low usage of GPGS on iOS. Additionally, our latest Native SDK release (2.3) will no longer support integration with iOS and going forward we will not be supporting or updating the iOS SDK.

We've also examined the features that GPGS offers. While developers use engagement and reporting tools extensively, there is lower usage for Gifts, Requests, and Quests. We therefore plan to stop supporting Gifts, Requests, and Quests. In order to help developers that do use these features plan for their removal, we will leave them open for 12 months, deactivating them by 31st March 2018. We'll be continuing support for other features such as Sign-in, Achievements, Leaderboards and Multiplayer.

Play games services remains an important part of the tools we provide developers, and we're working hard on future GPGS updates. We continue to be strongly committed to providing high quality services for Games, including new tools such as official Firebase support for Unity and C++ developers, and integration with Firebase Analytics. These changes allow us to focus our efforts on the services developers value most to build high quality, engaging games.

AdMob is “leveling up” your business with new app monetization innovations

Yesterday, at the Game Developers Conference (GDC), we announced important updates to AdMob that could help you unlock new business with rewarded video formats and free, unlimited and real-time analytics reporting. These features will help you monetize your games more effectively by helping you keep your players engaged with more immersive ads and by giving you a faster understanding of how they are interacting with your game.


Over the last year, developers embraced AdMob’s platform to mediate rewarded video ads to nine leading rewarded networks, including Tapjoy, which we announced yesterday. And we’re not stopping there. AdMob developers around the world now have access to Google’s own video advertising demand from Google AdWords, significantly increasing the breadth and scale of rewarded demand available. That means AdMob now offers a single platform solution including mediation, demand, and reservations. And for those publishers currently using either ironSource or MoPub, adapters are now available to add Google demand.

Our native ads have also seen tremendous growth over the last year as a way for developers to deliver rich, immersive ad experiences. For developers building their games in Unity, a popular gaming engine, we will shortly be releasing a plugin that supports native ads and native ad mediation on both iOS and Android. This will expand AdMob’s platform and network support for Unity developers beyond banner, interstitial, and rewarded ads available today.

We are also investing in better measurement tools for developers by bringing the power of Firebase Analytics to more game developers with a generally available C++ SDK and an SDK for Unity.

C++ and Unity developers can access Firebase Analytics stream view for real-time player insights
Here’s what developers have to say about rewarded ads in AdMob:
"Implementing AdMob rewarded ads helped us not just monetize non-spending users but increase overall revenue of the game, including IAP revenue. Also, AdMob mediation made it easy to compare our ad performance across ad networks." - Somin Oh, Ad Monetization Manager @ JoyCity
If you haven’t tried AdMob rewarded ads yet, here’s how you can get started. We’re hosting a series of Hangouts on Air around rewarded demand and mediation. During these sessions we will share best practices to implement and optimize the format, and highlight key areas where AdMob rewarded ads can help you be successful, including:

  • Access the scale of Google’s video demand: With AdWords’ global presence and advertiser base, AdMob publishers will benefit from geographically diverse demand.
  • Diversified demand with rewarded mediation: Help ensure that there’s always an ad available to show, and that the ad shown is the most valuable to you.
  • User-friendly ad formats: Clear guidance to users at all touch points, including opt-out option for full user control over ad experience.
  • Great User Experiences: AdMob provides highly engaging ads, such as landscape and portrait video formats for rewarded ads, with no incentivization of the download, giving a clear value exchange for the user.
  • Popular Game Engine Support: New integrations with Cocos2d-x and Unity game engines allow you to seamlessly support rewarded ads in your games. 

It’s been a privilege to meet so many of you at GDC and learn about the amazing games that you’re all building. We are committed to continuing on this journey with you to build a smart monetization platform for you to grow long-term gaming businesses.

Make sure to stay connected on all things AdMob, follow our Twitter, LinkedIn and Google+ pages.

The AdMob Team

Source: Inside AdMob


“Level up” your gaming business with new innovations for apps

Originally shared on the Inside AdMob Blog
Posted by Sissie Hsiao, Product Director, Mobile Advertising, Google. Last played Fire Emblem Heroes for Android

Mobile games mean more than just fun. They mean business. Big business. According to App Annie, game developers should capture almost half of the $189B global market for in-app purchases and advertising by 2020[1].

Later today, at the Game Developers Conference (GDC) in San Francisco, I look forward to sharing a series of new innovations across ad formats, monetization tools and measurement insights for apps.

  • New playable and video ad formats to get more people into your game
  • Integrations to help you create better monetization experiences 
  • Measurement tools that provide insights about how players are interacting with your game

Let more users try your game with a playable ad format

There’s no better way for a new user to experience your game than to actually play it. So today, we introduced playables, an interactive ad format in Universal App Campaigns that allows users to play a lightweight version of your game, right when they see it in any of the 1M+ apps in the Google Display Network.

Jam City’s playable ad for Cookie Jam

Playables help you get more qualified installs from users who tried your game in the ad and made the choice to download it for more play time. By attracting already-engaged users into your app, playables help you drive the long-term outcomes you care about — rounds played, levels beat, trophies won, purchases made and more.

"Jam City wants to put our games in the hands of more potential players as quickly as possible. Playables get new users into the game right from the ad, which we've found drives more engagement and long-term customer value." Josh Yguado, President & COO Jam City, maker of Panda Pop and Cookie Jam.

Playables will be available for developers through Universal App Campaigns in the coming months, and will be compatible with HTML5 creatives built through Google Web Designer or third-party agencies.

Improve the video experience with ads designed for mobile viewing

Most mobile video ad views on the Google Display Network are watched on devices held vertically[2]. This can create a poor experience when users encounter video ad creatives built for horizontal viewing.

Developers using Universal App Campaigns will soon be able to use an auto-flip feature that automatically orients your video ads to match the way users are holding their phones. If you upload a horizontal video creative in AdWords, we will automatically create a second, vertical version for you.

Cookie Jam horizontal video and vertical-optimized video created through auto-flip technology

The auto-flip feature uses Google's machine learning technology to identify the most important objects in every frame of your horizontal video creative. It then produces an optimized, vertical version of your video ad that highlights those important components of your original asset. Early tests show that click-through rates are about 20% higher on these dynamically-generated vertical videos than on horizontal video ads watched vertically[3].

Unlock new business with rewarded video formats, and free, unlimited reporting

Developers have embraced AdMob's platform to mediate rewarded video ads as a way to let users watch ads in exchange for an in-app reward. Today, we are delighted to announce that we are bringing Google’s video app install advertising demand from AdWords to AdMob, significantly increasing rewarded demand available to developers. Advertisers that use Universal App Campaigns can seamlessly reach this engaged, game-playing audience using your existing video creatives.

We are also investing in better measurement tools for developers by bringing the power of Firebase Analytics to more game developers with a generally available C++ SDK and an SDK for Unity, a leading gaming engine.

C++ and Unity developers can now access Firebase Analytics for real-time player insights

With Firebase Analytics, C++ and Unity developers can now capture billions of daily events — like level completes and play time — to get more nuanced player insights and gain a deeper understanding of metrics like daily active users, average revenue per user and player lifetime value.

This is an exciting time to be a game developer. It’s been a privilege to meet so many of you at GDC 2017 and learn about the amazing games that you’re all building. We hope the innovations we announced today help you grow long-term gaming businesses and we look forward to continuing on this journey with you.

Until next year, GDC!

1 - App Monetization Report, November 2016, App Annie
2 - More than 80% of video ad views in mobile apps on the Google Display Network are from devices held vertically, Google Internal Data
3 - Google Internal Data
